- Oct 06, 2013
-
-
Aaron Davidson authored
Docker files drawn mostly from Matt Massie. Some updates from Andre Schumacher.
-
- Oct 05, 2013
-
-
Aaron Davidson authored
-
- Oct 04, 2013
-
-
Aaron Davidson authored
One major change is the use of messages instead of raw functions as the parameter of Akka scheduled timers. Since messages are serialized, unlike raw functions, the behavior is easier to reason about and doesn't cause race conditions when exceptions are thrown. Another change is to avoid using global pointers that might change without a lock.
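A minimal sketch of the message-based timer pattern described above; the CheckForTimeouts message and actor are hypothetical, and the scheduler API shown is the modern Akka form (the Akka version Spark used at the time differed slightly):

    import akka.actor.Actor
    import scala.concurrent.duration._

    case object CheckForTimeouts // hypothetical message; delivered via the mailbox

    class TimeoutActor extends Actor {
      import context.dispatcher // ExecutionContext required by the scheduler

      // Schedule a *message* rather than a closure: it is processed in the
      // actor's single-threaded receive loop, so it cannot race with other
      // state mutations, and a thrown exception follows normal actor semantics.
      context.system.scheduler.schedule(0.seconds, 60.seconds, self, CheckForTimeouts)

      def receive = {
        case CheckForTimeouts => // check for dead workers, etc.
      }
    }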
-
- Sep 26, 2013
-
-
Aaron Davidson authored
-
Aaron Davidson authored
This patch implements full distributed fault tolerance for standalone scheduler Masters. There is only one master Leader at a time, which is actively serving scheduling requests. If this Leader crashes, another master will eventually be elected, reconstruct the state from the first Master, and continue serving scheduling requests.

Leader election is performed using the ZooKeeper leader election pattern. We try to minimize the use of ZooKeeper and the assumptions about ZooKeeper's behavior, so there is a layer of retries and session monitoring on top of the ZooKeeper client. Master failover follows directly from the single-node Master recovery via the file system (patch 194ba4b8), save that the Master state is stored in ZooKeeper instead.

Configuration: By default, no recovery mechanism is enabled (spark.deploy.recoveryMode = NONE). By setting spark.deploy.recoveryMode to ZOOKEEPER and setting spark.deploy.zookeeper.url to an appropriate ZooKeeper URL, ZooKeeper recovery mode is enabled. By setting spark.deploy.recoveryMode to FILESYSTEM and setting spark.deploy.recoveryDirectory to an appropriate directory accessible by the Master, we keep the behavior from 194ba4b8.

Additionally, places where a Master could be specified by a spark:// url can now take comma-delimited lists to specify backup masters. Note that this is only used for registration of NEW Workers and application Clients. Once a Worker or Client has registered with the Master Leader, it is "in the system" and will never need to register again.

Forthcoming: Documentation, tests (! - only ad hoc testing has been performed so far). I do not intend for this commit to be merged until tests are added, but this patch should still be mostly reviewable until then.
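The settings named in the message, shown as Java system properties (the usual way to supply spark.* options in this era, before SparkConf existed); all host:port values are placeholders:

    // ZooKeeper recovery mode; ZooKeeper hosts are placeholders.
    System.setProperty("spark.deploy.recoveryMode", "ZOOKEEPER")
    System.setProperty("spark.deploy.zookeeper.url", "zk1:2181,zk2:2181,zk3:2181")

    // New Workers and Clients may register against a comma-delimited list of
    // candidate Masters; only one of them is the active Leader at any time.
    val masterUrl = "spark://master1:7077,master2:7077"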
-
Aaron Davidson authored
Implements a basic form of Standalone Scheduler fault recovery. In particular, this allows faults to be recovered from manually by restarting the Master process on the same machine. This is the majority of the code necessary for general fault tolerance, which will first elect a leader and then recover the Master state.

In order to enable fault recovery, the Master will persist a small amount of state related to the registration of Workers and Applications to disk. If the Master is started and sees that this state is still around, it will enter Recovery mode, during which time it will not schedule any new Executors on Workers (but it does accept the registration of new Clients and Workers).

At this point, the Master attempts to reconnect to all Workers and Client applications that were registered at the time of failure. After confirming either the existence or nonexistence of all such nodes (within a certain timeout), the Master will exit Recovery mode and resume normal scheduling.
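A minimal sketch (not the actual Master code) of the persist-and-recover idea, assuming Java serialization and a hypothetical per-registration file layout:

    import java.io._

    class FileSystemPersistence(dir: File) {
      // Persist one registration (a Worker or an Application) under a stable name.
      def persist(name: String, obj: Serializable): Unit = {
        val out = new ObjectOutputStream(new FileOutputStream(new File(dir, name)))
        try out.writeObject(obj) finally out.close()
      }

      // On restart, anything still on disk means the previous Master crashed:
      // read it all back so the new Master can enter Recovery mode with it.
      def readAll(): Seq[AnyRef] =
        Option(dir.listFiles()).getOrElse(Array.empty[File]).toSeq.map { f =>
          val in = new ObjectInputStream(new FileInputStream(f))
          try in.readObject() finally in.close()
        }
    }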
-
Reynold Xin authored
Bug fix in master build
-
Reynold Xin authored
Improved organization of scheduling packages. This commit does not change any code -- only file organization. Please let me know if there was some masterminded strategy behind the existing organization that I failed to understand!

There are two components of this change:

(1) Moving files out of the cluster package, and down a level to the scheduling package. These files are all used by the local scheduler in addition to the cluster scheduler(s), so should not be in the cluster package. As a result of this change, none of the files in the local package reference files in the cluster package.

(2) Moving the mesos package to within the cluster package. The mesos scheduling code is for a cluster, and represents a specific case of cluster scheduling (the Mesos-related classes often subclass cluster scheduling classes). Thus, the most logical place for it seems to be within the cluster package.

The one thing about the scheduling code that seems a little funny to me is the naming of the SchedulerBackends. The StandaloneSchedulerBackend is not just for Standalone mode, but instead is used by Mesos coarse grained mode and Yarn, and the backend that *is* just for Standalone mode is instead called SparkDeploySchedulerBackend. I didn't change this because I wasn't sure if there was a reason for this naming that I'm just not aware of.
-
Reynold Xin authored
EC2 SSH improvements
-
Reynold Xin authored
Add mapPartitionsWithIndex
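A usage sketch of the new operator, assuming an existing SparkContext sc: the function receives each partition's index along with that partition's iterator.

    // Tag every element with the index of the partition it came from.
    val rdd = sc.parallelize(1 to 10, 3)
    val tagged = rdd.mapPartitionsWithIndex { (partitionIndex, iter) =>
      iter.map(x => (partitionIndex, x))
    }
    tagged.collect() // e.g. Array((0,1), (0,2), (0,3), (1,4), ..., (2,10))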
-
Patrick Wendell authored
-
Reynold Xin authored
Some minor fixes to MemoryStore. This is a repeat of #5, moved to its own branch in my repo. This makes all updates synchronize on the put lock; it skips synchronizing the reads where it can get away with it.
-
Patrick Wendell authored
Smarter take/limit implementation.
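The usual strategy behind a "smarter" take is to scan partitions in growing waves instead of all at once, stopping as soon as enough elements are collected. This is an illustration of that idea, not the patch itself; the function name and the 4x growth factor are choices made for the sketch:

    // Plain-Scala illustration over a sequence of partition iterators.
    def incrementalTake[T](partitions: Seq[Iterator[T]], num: Int): Seq[T] = {
      val buf = scala.collection.mutable.ArrayBuffer.empty[T]
      var i = 0
      var wave = 1
      while (buf.size < num && i < partitions.length) {
        val end = math.min(i + wave, partitions.length)
        partitions.slice(i, end).foreach(it => buf ++= it.take(num - buf.size))
        i = end
        wave *= 4 // widen the next wave if early partitions were sparse
      }
      buf.toSeq
    }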
-
- Sep 25, 2013
-
-
Kay Ousterhout authored
This commit does not change any code -- only file organization. There are two components of this change:

(1) Moving files out of the cluster package, and down a level to the scheduling package. These files are all used by the local scheduler in addition to the cluster scheduler(s), so should not be in the cluster package. As a result of this change, none of the files in the local package reference files in the cluster package.

(2) Moving the mesos package to within the cluster package. The mesos scheduling code is for a cluster, and represents a specific case of cluster scheduling (the Mesos-related classes often subclass cluster scheduling classes). Thus, the most logical place for it is within the cluster package.
-
- Sep 24, 2013
-
-
Patrick Wendell authored
-
Patrick Wendell authored
-
- Sep 23, 2013
-
-
Holden Karau authored
-
Reynold Xin authored
Fix spacing so java.io.tmpdir doesn't run on with SPARK_JAVA_OPTS
-
Y.CORP.YAHOO.COM\tgraves authored
-
Reynold Xin authored
-
Reynold Xin authored
-
- Sep 22, 2013
-
-
Holden Karau authored
-
Reynold Xin authored
Refactor FairSchedulableBuilder
-
jerryshao authored
-
jerryshao authored
-
jerryshao authored
1. Configuration can be read from the classpath if not set explicitly.
2. Add missing close handler.
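A minimal sketch of both points; the resource name metrics.properties and the surrounding object are assumptions for illustration:

    import java.io.{FileInputStream, InputStream}
    import java.util.Properties

    object ConfigLoader {
      def load(explicitPath: Option[String]): Properties = {
        val props = new Properties
        // (1) fall back to a classpath resource when no file is set explicitly
        val stream: InputStream = explicitPath match {
          case Some(path) => new FileInputStream(path)
          case None => getClass.getClassLoader.getResourceAsStream("metrics.properties")
        }
        if (stream != null) {
          // (2) the "missing close handler": always close, even if load() throws
          try props.load(stream) finally stream.close()
        }
        props
      }
    }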
-
Reynold Xin authored
Fix PR926 local properties issues in Spark Streaming-like scenarios
-
Reynold Xin authored
Add "org.apache." prefix to packages in spark-class
-
Reynold Xin authored
After unit tests, clear port properties unconditionally
-
- Sep 21, 2013
-
-
jerryshao authored
-
- Sep 20, 2013
-
-
Aaron Davidson authored
Lacking this, the if/case statements never trigger on Spark 0.8.0+.
-
Reynold Xin authored
-
Reynold Xin authored
-
Mike authored
Make "currentMemory" @volatile, so that it's reads in ensureFreeSpace() are atomic and up-to-date--i.e., currentMemory can't increase while putLock is held (though it could decrease, which would only help ensureFreeSpace()).
-
Ankur Dave authored
In MapOutputTrackerSuite, the "remote fetch" test sets spark.driver.port and spark.hostPort, assuming that they will be cleared by LocalSparkContext. However, the test never sets sc, so it remains null, causing LocalSparkContext to skip clearing these properties. Subsequent tests therefore fail with java.net.BindException: "Address already in use". This commit makes LocalSparkContext clear the properties even if sc is null.
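A sketch of the fix described, assuming the trait shape of LocalSparkContext at the time (the trait name here is illustrative):

    import org.apache.spark.SparkContext

    trait LocalSparkContextSketch {
      @transient var sc: SparkContext = _

      def resetSparkContext(): Unit = {
        if (sc != null) {
          sc.stop()
          sc = null
        }
        // Clear unconditionally: before the fix this ran only when sc != null,
        // so a test that set the properties without creating a context leaked
        // them, and the next suite hit "Address already in use".
        System.clearProperty("spark.driver.port")
        System.clearProperty("spark.hostPort")
      }
    }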
-
- Sep 19, 2013
-
-
Patrick Wendell authored
Fix issue with spark_ec2 seeing empty security groups
-
Aaron Davidson authored
Under unknown but occasional circumstances, reservation.groups is empty despite reservation.instances each having groups. This means that the spark_ec2 get_existing_clusters() method would fail to find any instances. To fix it, we simply use the instances' groups as the source of truth. Note that this is actually just a revival of PR #827, now that the issue has been reproduced.
-
- Sep 18, 2013
-
-
jerryshao authored
-
- Sep 16, 2013
-
-
Reynold Xin authored
-