- Feb 10, 2012
  - Matei Zaharia authored
  - Matei Zaharia authored
    Conflicts:
      core/src/main/scala/spark/DAGScheduler.scala
      core/src/main/scala/spark/SimpleShuffleFetcher.scala
      core/src/main/scala/spark/SparkContext.scala
  - haoyuan authored
  - Matei Zaharia authored
  - Matei Zaharia authored
  - Matei Zaharia authored
  - Matei Zaharia authored
    and made DAGScheduler automatically set SparkEnv.
- Feb 09, 2012
- Feb 06, 2012
  - Matei Zaharia authored
  - Matei Zaharia authored
  - Matei Zaharia authored
  - Matei Zaharia authored
  - haoyuan authored
  - Matei Zaharia authored
- Jan 31, 2012
  - Matei Zaharia authored
  - Matei Zaharia authored
- Jan 30, 2012
  - Matei Zaharia authored
    Added immutable map registration in the Kryo serializer.
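
    A minimal Scala sketch of what such a registration can look like, assuming the Kryo library is on the classpath; the object name and the exact set of registered classes are illustrative, not the commit's actual code:

    ```scala
    import com.esotericsoftware.kryo.Kryo

    object KryoMapRegistration {
      def newKryo(): Kryo = {
        val kryo = new Kryo()
        // Scala immutable maps are backed by several concrete runtime classes
        // (Map.Map1 through Map.Map4, EmptyMap, HashMap), so register the
        // ones the serializer may actually encounter at runtime.
        kryo.register(Map.empty[Any, Any].getClass)          // EmptyMap
        kryo.register(Map(1 -> 1).getClass)                  // Map.Map1
        kryo.register(Map(1 -> 1, 2 -> 2).getClass)          // Map.Map2
        kryo.register(classOf[scala.collection.immutable.HashMap[_, _]])
        kryo
      }
    }
    ```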
- Jan 26, 2012
  - Hiral Patel authored
- Jan 13, 2012
  - Matei Zaharia authored
    Made improvements to takeSample. Also changed SparkLocalKMeans to SparkKMeans.
  - Matei Zaharia authored
  - Matei Zaharia authored
- Jan 09, 2012
  - Edison Tung authored
    I've fixed the bugs detailed in the diff. One of the bugs was already fixed in my local copy (I had forgotten to commit it).
- Jan 05, 2012
  - Matei Zaharia authored
    Fixes #105.
- Dec 15, 2011
  - Matei Zaharia authored
- Dec 14, 2011
  - Matei Zaharia authored
  - Matei Zaharia authored
- Dec 02, 2011
  - Matei Zaharia authored
- Dec 01, 2011
  - Charles Reiss authored
  - Charles Reiss authored
  - Matei Zaharia authored
    (you can no longer iterate over a Source multiple times).
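
    A minimal Scala sketch of the usage pattern this change implies: since a Source can now only be traversed once, callers materialize the lines before iterating repeatedly (the file name is illustrative):

    ```scala
    import scala.io.Source

    object SourceOnce {
      def main(args: Array[String]): Unit = {
        // Source is a one-pass iterator, so cache the lines in a List
        // if more than one traversal is needed.
        val source = Source.fromFile("input.txt")
        val lines = try source.getLines().toList finally source.close()
        val total = lines.length               // first pass over the cached list
        val nonEmpty = lines.count(_.nonEmpty) // second pass, safe on a List
        println(total + " lines, " + nonEmpty + " non-empty")
      }
    }
    ```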
  - Edison Tung authored
  - Edison Tung authored
    Math.min takes 2 args, not 1. This was not committed earlier for some reason.
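
    A minimal sketch of the point being fixed; the numbers are illustrative:

    ```scala
    // Both bounds must be passed explicitly; min compares two values.
    val requested = 10  // samples asked for
    val available = 7   // elements actually present
    val n = math.min(requested, available)  // 7
    ```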
  - Edison Tung authored
- Nov 30, 2011
  - Matei Zaharia authored
    merge results into rather than requiring a new object allocation for each element merged. Fixes #95.
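
    A minimal Scala sketch of the pattern the surviving fragment describes: each element is merged into an existing accumulator object instead of allocating a new result per element (the names are illustrative, not Spark's actual API):

    ```scala
    import scala.collection.mutable.ArrayBuffer

    object MergeIntoSketch {
      // The combiner mutates and returns an existing accumulator instead of
      // allocating a fresh result object for every element merged in.
      def mergeValue(acc: ArrayBuffer[Int], elem: Int): ArrayBuffer[Int] = {
        acc += elem
        acc
      }

      def main(args: Array[String]): Unit = {
        val merged = (1 to 5).foldLeft(new ArrayBuffer[Int])(mergeValue)
        println(merged.mkString(","))  // one buffer allocated for all five elements
      }
    }
    ```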
  - Matei Zaharia authored
- Nov 21, 2011
  - Edison Tung authored
    The takeSamples method takes a specified number of samples from the RDD and returns them in an array.
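
    A hedged Scala sketch of the described behavior on a plain local collection; the signature and the sampling strategy are illustrative, not the actual RDD implementation:

    ```scala
    import scala.reflect.ClassTag
    import scala.util.Random

    object TakeSampleSketch {
      // Draw `num` samples without replacement and return them in an array,
      // mirroring the described behavior on a local Seq.
      def takeSample[T: ClassTag](data: Seq[T], num: Int, seed: Long = 42L): Array[T] = {
        val rand = new Random(seed)
        // Shuffle once and take a prefix, clamped to the collection size.
        rand.shuffle(data).take(math.min(num, data.size)).toArray
      }

      def main(args: Array[String]): Unit = {
        println(takeSample(1 to 100, 5).mkString(", "))
      }
    }
    ```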
  - Edison Tung authored
    LocalKMeans runs locally with a randomly generated dataset. SparkLocalKMeans takes an input file and runs KMeans on it.
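
    A minimal Scala sketch of one k-means update step on a randomly generated dataset, in the spirit of the LocalKMeans example; all names and constants here are illustrative:

    ```scala
    import scala.util.Random

    object LocalKMeansSketch {
      type Point = Array[Double]

      def squaredDist(a: Point, b: Point): Double =
        a.zip(b).map { case (x, y) => (x - y) * (x - y) }.sum

      def closest(p: Point, centers: Array[Point]): Int =
        centers.indices.minBy(i => squaredDist(p, centers(i)))

      def main(args: Array[String]): Unit = {
        val rand = new Random(42)
        val points = Array.fill(100)(Array.fill(2)(rand.nextDouble()))
        var centers = Array.fill(3)(Array.fill(2)(rand.nextDouble()))
        // One update step: assign points to centers, then recompute the means.
        val assigned = points.groupBy(p => closest(p, centers))
        centers = centers.indices.map { i =>
          val members = assigned.getOrElse(i, Array(centers(i)))
          members.transpose.map(_.sum / members.length)
        }.toArray
        println(centers.map(_.mkString("(", ", ", ")")).mkString(" "))
      }
    }
    ```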
- Nov 13, 2011
  - Ankur Dave authored
    The first time they appear, exceptions are printed in full, including a stack trace. After that, they are printed in abbreviated form. They are periodically reprinted in full; the reprint interval defaults to 5 seconds and is configurable using the property spark.logging.exceptionPrintInterval.
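
    A minimal Scala sketch of that throttling policy; only the property name spark.logging.exceptionPrintInterval and the 5-second default come from the commit message, the rest is illustrative:

    ```scala
    object ExceptionThrottle {
      private val printIntervalMs =
        System.getProperty("spark.logging.exceptionPrintInterval", "5000").toLong
      // Last time each distinct exception was printed in full.
      private var lastFullPrint = Map.empty[String, Long]

      def log(e: Throwable): Unit = {
        val key = e.getClass.getName + ": " + e.getMessage
        val now = System.currentTimeMillis()
        if (now - lastFullPrint.getOrElse(key, 0L) >= printIntervalMs) {
          e.printStackTrace()      // full form, with stack trace
          lastFullPrint += (key -> now)
        } else {
          System.err.println(key)  // abbreviated form between reprints
        }
      }
    }
    ```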
  - Ankur Dave authored
    When a task throws an exception, the Spark executor previously just logged it to a local file on the slave and exited. This commit causes Spark to also report the exception back to the driver using a Mesos status update, so the user doesn't have to look through a log file on the slave. Here's what the reporting currently looks like:

      # ./run spark.examples.ExceptionHandlingTest master@203.0.113.1:5050
      [...]
      11/10/26 21:04:13 INFO spark.SimpleJob: Lost TID 1 (task 0:1)
      11/10/26 21:04:13 INFO spark.SimpleJob: Loss was due to java.lang.Exception: Testing exception handling
      [...]
      11/10/26 21:04:16 INFO spark.SparkContext: Job finished in 5.988547328 s