- Feb 03, 2013

Josh Rosen authored

Josh Rosen authored

- Feb 01, 2013

Matei Zaharia authored

Matei Zaharia authored
Reduce the amount of duplicate logging Akka does to stdout.

Stephen Haberman authored
Given we have Akka logging go through SLF4J to log4j, we don't need all the extra noise of Akka's stdout logger, which is supposedly only used during Akka init time but seems to continue logging lots of noisy network events that we either don't care about or are in the log4j logs anyway. See http://doc.akka.io/docs/akka/2.0/general/configuration.html:

  # Log level for the very basic logger activated during AkkaApplication startup
  # Options: ERROR, WARNING, INFO, DEBUG
  # stdout-loglevel = "WARNING"
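
A minimal sketch, assuming Akka 2.0-style Typesafe Config (the names here are illustrative, not Spark's actual AkkaUtils code), of how the basic stdout logger can be quieted when building an ActorSystem. The point is that akka.stdout-loglevel is configured separately from the regular akka.loglevel, so silencing stdout does not affect the SLF4J-routed logs:

  import akka.actor.ActorSystem
  import com.typesafe.config.ConfigFactory

  object QuietAkkaExample {
    def main(args: Array[String]): Unit = {
      // Only report errors on the basic stdout logger Akka activates during startup;
      // everything else keeps flowing through the configured event handlers (SLF4J/log4j).
      val conf = ConfigFactory
        .parseString("akka.stdout-loglevel = ERROR")
        .withFallback(ConfigFactory.load())
      val system = ActorSystem("quiet-example", conf)
      system.log.info("regular logging is unaffected")
      system.shutdown()
    }
  }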

Matei Zaharia authored
These operations used to wait for all the results to be available in an array on the driver program before merging them. They now merge values incrementally as they arrive.
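
A small illustration of the difference in plain Scala (not the actual Spark driver code; the names are made up): instead of materializing every partial result in an array and merging afterwards, each value is folded into the running result as soon as it arrives, so only one partial result needs to be held alongside the running total:

  object IncrementalMerge {
    // Merge partial results one at a time as the iterator produces them,
    // instead of first collecting them all into an Array.
    def mergeAsTheyArrive[T](partials: Iterator[T])(merge: (T, T) => T): T =
      partials.reduce(merge)

    def main(args: Array[String]): Unit = {
      // Pretend these are per-partition word counts arriving from finished tasks.
      val partialCounts = Iterator(Map("a" -> 1), Map("a" -> 2, "b" -> 3))
      val total = mergeAsTheyArrive(partialCounts) { (m1, m2) =>
        (m1.keySet ++ m2.keySet).map(k => k -> (m1.getOrElse(k, 0) + m2.getOrElse(k, 0))).toMap
      }
      println(total) // Map(a -> 3, b -> 3)
    }
  }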

Matei Zaharia authored

Matei Zaharia authored

Matei Zaharia authored
Add more private declarations.

Matei Zaharia authored
Stop BlockManager's metadataCleaner.

Matei Zaharia authored
Use spark.local.dir for PySpark temp files (SPARK-580).

Josh Rosen authored

Matei Zaharia authored
Do not launch JavaGateways on workers (SPARK-674).

Josh Rosen authored
The problem was that the gateway was being initialized whenever the pyspark.context module was loaded. The fix uses lazy initialization that occurs only when SparkContext instances are actually constructed. I also made the gateway and jvm variables private. This change results in ~3-4x performance improvement when running the PySpark unit tests.
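
The same lazy-initialization idea, sketched here in Scala rather than in the Python module where the fix actually lives (the classes below are invented for illustration): the expensive gateway is not created at load time but on first use, and it stays private:

  // Hypothetical illustration of the pattern, not PySpark's real code.
  class Gateway {
    println("launching gateway (expensive)")
    def jvmVersion: String = "1.6"
  }

  class Context {
    // 'lazy' delays construction until _gateway is first touched,
    // so merely loading or constructing Context stays cheap.
    private lazy val _gateway: Gateway = new Gateway
    def version: String = _gateway.jvmVersion
  }

  object LazyInitDemo extends App {
    val ctx = new Context    // no gateway launched yet
    println(ctx.version)     // first use triggers the launch
  }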

Stephen Haberman authored

Matei Zaharia authored
Changed PartitionPruningRDD's split to make sure it returns the correct split index.

Matei Zaharia authored
Fix stdout redirection in PySpark.

Josh Rosen authored

Reynold Xin authored

- Jan 31, 2013

Matei Zaharia authored
SPARK-673: Capture and re-throw Python exceptions

Patrick Wendell authored

Patrick Wendell authored

Matei Zaharia authored
Remove activation of profiles by default

Patrick Wendell authored
This patch alters the Python <-> executor protocol to pass on exception data when exceptions occur in user Python code.
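
One way such a protocol change can be framed, sketched in plain Scala with an invented wire format (this is not Spark's actual PythonRDD protocol): the worker writes a sentinel value instead of a normal record length, followed by the serialized traceback, and the JVM side re-throws it rather than treating the stream as exhausted:

  import java.io.{DataInputStream, DataOutputStream}

  object ExceptionFraming {
    val ExceptionMarker = -1 // invented sentinel: "what follows is an error, not data"

    def writeFailure(out: DataOutputStream, traceback: String): Unit = {
      val bytes = traceback.getBytes("UTF-8")
      out.writeInt(ExceptionMarker)
      out.writeInt(bytes.length)
      out.write(bytes)
    }

    def readRecord(in: DataInputStream): Array[Byte] = in.readInt() match {
      case ExceptionMarker =>
        val bytes = new Array[Byte](in.readInt())
        in.readFully(bytes)
        // Surface the Python failure to the Scala caller instead of dropping it.
        throw new RuntimeException("Python worker failed:\n" + new String(bytes, "UTF-8"))
      case length =>
        val bytes = new Array[Byte](length)
        in.readFully(bytes)
        bytes
    }
  }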

Reynold Xin authored

Reynold Xin authored
Changed PartitionPruningRDD's split to make sure it returns the correct split index.

Stephen Haberman authored

Mikhail Bautin authored
See the discussion at https://github.com/mesos/spark/pull/355 for why default profile activation is a problem.

- Jan 30, 2013

Matei Zaharia authored
Minor improvements to PySpark docs

Patrick Wendell authored
Also adds a line in the doc explaining how to use it.

Patrick Wendell authored
It's nicer if all the commands you need are made explicit.

Matei Zaharia authored
Remember the ConnectionManagerId used to initiate SendingConnections

Matei Zaharia authored
Make ExecutorIDs include SlaveIDs when running Mesos

Matei Zaharia authored
Include message and exitStatus if available.

Stephen Haberman authored

Charles Reiss authored

Charles Reiss authored
the Mesos ExecutorID as a Spark ExecutorID.

- Jan 29, 2013

Charles Reiss authored
This prevents ConnectionManager from getting confused if a machine has multiple host names and the one getHostName() finds happens not to be the one that was passed from, e.g., the BlockManagerMaster.
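
The failure mode is easy to show in miniature (illustrative Scala, not the ConnectionManager code; NodeId is a stand-in for ConnectionManagerId): an id re-derived from InetAddress.getLocalHost can differ from the host string another component registered, so lookups keyed on the re-derived id miss. Remembering the id the connection was initiated with avoids recomputing it:

  import java.net.InetAddress

  // Hypothetical stand-in for ConnectionManagerId: a host/port pair.
  case class NodeId(host: String, port: Int)

  object HostNameMismatch extends App {
    val port = 7077
    // What the rest of the cluster was told, e.g. by the BlockManagerMaster:
    val registeredId = NodeId("node1.cluster.internal", port)
    // What a fresh lookup on the same machine might return:
    val rediscoveredId = NodeId(InetAddress.getLocalHost.getHostName, port)

    // On a machine with several host names these can differ,
    // so the fix keeps using registeredId instead of recomputing it.
    println(registeredId == rediscoveredId)
  }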

Matei Zaharia authored
Conflicts:
  core/src/main/scala/spark/deploy/master/Master.scala

Matei Zaharia authored
Add RDD.toDebugString.
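
toDebugString returns a string describing an RDD and its lineage of dependencies, which is handy for checking how a chain of transformations is structured. A small usage sketch, assuming the 0.7-era spark package layout that appears elsewhere in this log:

  import spark.SparkContext
  import spark.SparkContext._

  object DebugStringDemo extends App {
    val sc = new SparkContext("local", "toDebugString demo")
    val counts = sc.textFile("README.md")
      .flatMap(_.split(" "))
      .map(word => (word, 1))
      .reduceByKey(_ + _)

    // Prints one line per RDD in the lineage, so the shuffle introduced by
    // reduceByKey is visible above the mapped RDDs and the text file.
    println(counts.toDebugString)
    sc.stop()
  }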