- Feb 10, 2013
  - Matei Zaharia authored
    … other issues. Conflicts: run2.cmd
  - Matei Zaharia authored
    Fixed a 404 in 'Tuning Spark' -- missing '.html'
  - Mark Hamstra authored
- Feb 09, 2013
  - Matei Zaharia authored
    add as many fetch requests as we can, subject to maxBytesInFlight
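
    The gist, as a rough Scala sketch with made-up names (this is not Spark's actual shuffle-fetch code): keep a running count of bytes in flight and only issue the next request while it still fits under the budget.

        import scala.collection.mutable

        // Hypothetical request type for the sketch.
        case class FetchRequest(blockIds: Seq[String], sizeBytes: Long)

        // Send queued requests while the next one still fits under maxBytesInFlight.
        def issueFetches(queue: mutable.Queue[FetchRequest],
                         maxBytesInFlight: Long,
                         bytesInFlight: Long)(send: FetchRequest => Unit): Long = {
          var inFlight = bytesInFlight
          while (queue.nonEmpty && inFlight + queue.head.sizeBytes <= maxBytesInFlight) {
            val req = queue.dequeue()
            send(req)
            inFlight += req.sizeBytes
          }
          inFlight   // new in-flight total; the caller decrements it as results arrive
        }
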
  - Matei Zaharia authored
    SPARK-678: Adding an example with an OLAP roll-up
  - Matei Zaharia authored
    Add RDD.coalesce, clean up some RDDs, other misc.
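
    A minimal usage sketch of coalesce, written against a current Spark build (the classes lived under the plain spark.* package at the time of this commit):

        import org.apache.spark.SparkContext

        val sc = new SparkContext("local", "coalesce-example")
        val fine = sc.parallelize(1 to 10000, 100)   // 100 small partitions
        val coarse = fine.coalesce(10)               // shrink to 10 partitions without a full shuffle
        println(coarse.partitions.length)            // 10
        sc.stop()
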
  - Stephen Haberman authored
  - Josh Rosen authored
    Add commutative requirement for 'reduce' to Python docstring.
  - Stephen Haberman authored
  - Mark Hamstra authored
  - Matei Zaharia authored
    Change docs on 'reduce' since the merging of local reduces no longer pre...
  - Stephen Haberman authored
  - Stephen Haberman authored
- Feb 08, 2013
- Feb 07, 2013
  - Matei Zaharia authored
    SPARK-685: Adding IPYTHON environment variable support for launching pyspark using ...
  - Nick Pentreath authored
- Feb 06, 2013
  - Mark Hamstra authored
    … ordering, so the reduce function must also be commutative.
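
    The requirement in a nutshell, as a sketch against a current Spark build (package names differ from the 2013 tree): reduce merges per-partition results in no guaranteed order, so the function must be both associative and commutative.

        import org.apache.spark.SparkContext

        val sc = new SparkContext("local[2]", "reduce-example")
        val nums = sc.parallelize(Seq(1, 2, 3, 4), numSlices = 4)

        // Addition is associative and commutative, so the result never depends on
        // how the per-partition results happen to be merged.
        println(nums.reduce(_ + _))   // always 10

        // Subtraction is neither, so the result can vary with partitioning and merge order.
        println(nums.reduce(_ - _))   // unstable; don't rely on it
        sc.stop()
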
- Feb 05, 2013
  - Patrick Wendell authored
  - Stephen Haberman authored
    Also made sure clearDependencies() was calling super, to ensure the getSplits/getDependencies vars in the RDD base class get cleaned up.
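
    The pattern being described, as a hedged sketch using current package names (the 2013 tree used spark.* and called partitions "splits"); the subclass here is purely illustrative:

        import scala.reflect.ClassTag
        import org.apache.spark.{OneToOneDependency, Partition, TaskContext}
        import org.apache.spark.rdd.RDD

        // Illustrative pass-through RDD; the point is the clearDependencies override.
        class PassThroughRDD[T: ClassTag](var prev: RDD[T])
          extends RDD[T](prev.context, List(new OneToOneDependency(prev))) {

          override def getPartitions: Array[Partition] = prev.partitions

          override def compute(split: Partition, context: TaskContext): Iterator[T] =
            prev.iterator(split, context)

          override def clearDependencies(): Unit = {
            super.clearDependencies()   // lets the RDD base class drop its cached dependency vars
            prev = null                 // then release our own reference
          }
        }
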
  - Stephen Haberman authored
    Also rename r -> rdd and remove unneeded extra type info.
  - Stephen Haberman authored
  - Stephen Haberman authored
  - Matei Zaharia authored
    Inline mergePair to look more like the narrow dep branch.
  - Matei Zaharia authored
    Handle Terminated to avoid endless DeathPactExceptions.
  - Stephen Haberman authored
    Conflicts: core/src/main/scala/spark/deploy/worker/Worker.scala
  - Matei Zaharia authored
    Increase DriverSuite timeout.
  - Stephen Haberman authored
    Credit to Roland Kuhn, Akka's tech lead, for pointing out this obvious fix. StandaloneExecutorBackend.preStart's catch block would never (ever) get hit, because all of the operations in preStart are async. So the System.exit in the catch block was skipped, and instead Akka was sending Terminated messages which, since we didn't handle them, turned into DeathPactExceptions, which started a postRestart/preStart infinite loop.
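
    A sketch of the fix being described, in plain Akka (illustrative only, not Spark's StandaloneExecutorBackend): watch the remote actor and handle Terminated explicitly, so an unhandled Terminated can't escalate into a DeathPactException, a restart, and another doomed preStart.

        import akka.actor.{Actor, ActorRef, Terminated}

        class BackendSketch(driver: ActorRef) extends Actor {
          override def preStart(): Unit = {
            context.watch(driver)   // async: a dead driver arrives later as a Terminated
                                    // message, never as an exception caught in preStart
          }

          def receive: Receive = {
            case Terminated(`driver`) =>
              context.stop(self)    // exit cleanly instead of looping through postRestart/preStart
            case _ =>
              // ... normal protocol handling ...
          }
        }
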
  - Stephen Haberman authored
  - Stephen Haberman authored
    No functionality changes; I think this is just more consistent, given mergePair isn't called multiple times or recursively. Also added a comment to explain the usual case of having two parent RDDs.
  - Imran Rashid authored
  - Matei Zaharia authored
    Streaming constructor which takes JavaSparkContext
  - Patrick Wendell authored
    It's sometimes helpful to directly pass a JavaSparkContext and take advantage of the various constructors available for that.
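
    A sketch of the pattern this enables, written with current package names: build the JavaSparkContext with whichever constructor suits you, then hand it to the streaming context.

        import org.apache.spark.api.java.JavaSparkContext
        import org.apache.spark.streaming.Duration
        import org.apache.spark.streaming.api.java.JavaStreamingContext

        val jsc = new JavaSparkContext("local[2]", "streaming-example")
        val jssc = new JavaStreamingContext(jsc, new Duration(1000))   // 1-second batches
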
- Feb 04, 2013
  - Patrick Wendell authored
  - Matei Zaharia authored
  - Matei Zaharia authored
- Feb 03, 2013
  - Matei Zaharia authored
    Fix exit status in PySpark unit tests; fix/optimize PySpark's RDD.take()
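
    The take() optimization in rough Scala form (a generic sketch of the idea, not PySpark's code; runJob's exact signature has shifted across Spark versions): pull from one partition at a time and stop as soon as enough elements are in hand, instead of collecting the whole RDD.

        import org.apache.spark.SparkContext

        val sc = new SparkContext("local", "take-sketch")
        val rdd = sc.parallelize(1 to 1000000, 100)
        val num = 10

        val buf = scala.collection.mutable.ArrayBuffer[Int]()
        var p = 0
        while (buf.size < num && p < rdd.partitions.length) {
          val remaining = num - buf.size
          val chunk = sc.runJob(rdd, (it: Iterator[Int]) => it.take(remaining).toArray, Seq(p))
          buf ++= chunk.head
          p += 1
        }
        println(buf.toList)   // the first `num` elements, touching as few partitions as possible
        sc.stop()
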
  - Josh Rosen authored
  - Matei Zaharia authored
    Add spark.executor.memory to differentiate executor memory from spark-shell
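
    One way the setting has typically been supplied (a sketch; the master URL is a placeholder, and later releases added SparkConf and spark-submit's --executor-memory for the same knob): a spark.* Java system property set before the context is created.

        import org.apache.spark.SparkContext

        System.setProperty("spark.executor.memory", "2g")   // per-executor heap on the cluster
        val sc = new SparkContext("spark://master:7077", "memory-example")
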
  - Matei Zaharia authored
    RDDInfo available from SparkContext
  - Matei Zaharia authored
    Once we find a split with no block, we don't have to look for more.
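
    The early-exit check in miniature (hypothetical helper, just to show the shape of the logic): one missing block already rules out reusing the cached copy, so stop scanning at the first miss.

        // forall short-circuits on the first false, so no further splits are checked.
        def allBlocksPresent(splitIds: Seq[String])(hasBlock: String => Boolean): Boolean =
          splitIds.forall(hasBlock)
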