- Jan 18, 2014
-
-
Sean Owen authored
-
- Jan 17, 2014
-
-
Patrick Wendell authored
Fixed Windows spark-shell launch script error. JIRA SPARK-1029: https://spark-project.atlassian.net/browse/SPARK-1029
-
Patrick Wendell authored
Clone records java api
-
- Jan 16, 2014
-
-
Prashant Sharma authored
-
Qiuzhuang Lian authored
JIRA SPARK-1029: https://spark-project.atlassian.net/browse/SPARK-1029
-
Reynold Xin authored
Fail rather than hanging if a task crashes the JVM. Prior to this commit, if a task crashes the JVM, the task (and all other tasks running on that executor) is marked as KILLED rather than FAILED. As a result, the TaskSetManager will retry the task indefinitely rather than failing the job after maxFailures. Eventually this makes the job hang, because the Standalone Scheduler removes the application after 10 workers have failed, and the app is then left in a state where it's disconnected from the master and waiting to reconnect. This commit fixes that problem by marking tasks as FAILED rather than KILLED when an executor is lost. The downside of this commit is that if task A fails because another task running on the same executor caused the VM to crash, the failure will incorrectly be counted as a failure of task A. This should not be an issue because we typically set maxFailures to 3, and it is unlikely that a task will be co-located with a JVM-crashing task multiple times.
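The maxFailures threshold the commit message refers to is exposed as a configuration setting. A minimal hedged sketch (not part of the commit itself; assumes a Spark dependency on the classpath):

```scala
import org.apache.spark.SparkConf

// The job-failure threshold discussed above is "spark.task.maxFailures":
// once any single task has failed this many times, the job is failed
// instead of being retried forever.
val conf = new SparkConf()
  .setAppName("max-failures-sketch")
  .set("spark.task.maxFailures", "3") // the typical value mentioned above
```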
-
Kay Ousterhout authored
-
- Jan 15, 2014
-
-
Reynold Xin authored
Code clean up for mllib:
* Removed unnecessary parentheses
* Removed unused imports
* Simplified `filter...size()` to `count ...`
* Removed obsolete parameter comments
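The `filter...size()` to `count ...` simplification can be sketched on plain Scala collections (the commit applies the same idea inside MLlib; names here are illustrative only):

```scala
val xs = Seq(1, -2, 3, -4, 5)

// Before: builds an intermediate collection just to measure its size.
val negativesBefore = xs.filter(_ < 0).size

// After: count applies the predicate directly, with no intermediate collection.
val negativesAfter = xs.count(_ < 0)

assert(negativesBefore == negativesAfter)
```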
-
Reynold Xin authored
SPARK-1024: Remove "-XX:+UseCompressedStrings" option from the tuning guide, since JDK 7 no longer supports it.
-
Kay Ousterhout authored
Prior to this commit, if a task crashes the JVM, the task (and all other tasks running on that executor) is marked as KILLED rather than FAILED. As a result, the TaskSetManager will retry the task indefinitely rather than failing the job after maxFailures. This commit fixes that problem by marking tasks as FAILED rather than KILLED when an executor is lost. The downside of this commit is that if task A fails because another task running on the same executor caused the VM to crash, the failure will incorrectly be counted as a failure of task A. This should not be an issue because we typically set maxFailures to 3, and it is unlikely that a task will be co-located with a JVM-crashing task multiple times.
-
Patrick Wendell authored
Clarify that Python 2.7 is only needed for MLlib
-
Matei Zaharia authored
-
Patrick Wendell authored
Workers should use the working directory as Spark home if it's not specified. If users don't set SPARK_HOME in their environment file when launching an application, the standalone cluster should default to the Spark home of the worker.
-
Patrick Wendell authored
Made some classes private[streaming] and deprecated a method in JavaStreamingContext. The classes `RawTextHelper`, `RawTextSender` and `RateLimitedOutputStream` are not useful in the streaming API: they are not used by the core functionality and were only there as support classes for an obscure example. One of them, `RawTextSender`, has a main function that can still be executed using bin/spark-class even after it is made private[streaming]. In the future I will probably remove these classes completely; for the time being, I am just converting them to private[streaming]. Accessing the underlying JavaSparkContext in JavaStreamingContext was done through `JavaStreamingContext.sc`. This is now deprecated; the preferred method is `JavaStreamingContext.sparkContext`, which keeps it consistent with `StreamingContext.sparkContext`.
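The deprecation can be sketched as follows (a hedged example, assuming a Spark Streaming dependency; the app name and batch interval are illustrative):

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.Duration
import org.apache.spark.streaming.api.java.JavaStreamingContext

val conf = new SparkConf().setAppName("streaming-sketch").setMaster("local[2]")
val jssc = new JavaStreamingContext(conf, new Duration(1000))

// Deprecated accessor:  jssc.sc
// Preferred, consistent with StreamingContext.sparkContext:
val jsc = jssc.sparkContext
```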
-
Tathagata Das authored
-
Patrick Wendell authored
GraphX shouldn't list Spark as provided. I noticed this when building an application against GraphX to audit the released artifacts.
-
Patrick Wendell authored
-
Patrick Wendell authored
-
Patrick Wendell authored
Updated Debian packaging
-
Thomas Graves authored
More yarn code refactor. Try to extract common code in yarn alpha/stable for client and workerRunnable to reduce duplication, by putting it into a trait in the common dir and extending it. The same could be done for the remaining files in alpha/stable, but those files have much more overlapping code, with different API calls here and there within functions, so they would need a much closer review; it might also split functions into pieces too small to be worth it. So for now, just make it work for these two files.
-
CrazyJvm authored
Remove "-XX:+UseCompressedStrings" option from the tuning guide, since JDK 7 no longer supports it.
-
Reynold Xin authored
Rename VertexID -> VertexId in GraphX
-
Patrick Wendell authored
Fixed the flaky tests by making SparkConf not serializable SparkConf was being serialized with CoGroupedRDD and Aggregator, which somehow caused OptionalJavaException while being deserialized as part of a ShuffleMapTask. SparkConf should not even be serializable (according to conversation with Matei). This change fixes that. @mateiz @pwendell
-
Patrick Wendell authored
Fixed SVDPlusPlusSuite in Maven build. This should go into 0.9.0 also.
-
Tathagata Das authored
-
Tathagata Das authored
Changed SparkConf to not be serializable. And also fixed unit-test log paths in log4j.properties of external modules.
-
Reynold Xin authored
-
Mark Hamstra authored
-
Ankur Dave authored
-
Mark Hamstra authored
-
- Jan 14, 2014
-
-
Reynold Xin authored
Additional edits for clarity in the graphx programming guide. Added an overview of the Graph and GraphOps functions and fixed numerous typos.
-
Reynold Xin authored
Describe caching and uncaching in GraphX programming guide
-
Ankur Dave authored
-
Reynold Xin authored
Don't clone records for text files
-
Reynold Xin authored
Add GraphX dependency to examples/pom.xml
-
Reynold Xin authored
Deprecate rather than remove old combineValuesByKey function
-
Ankur Dave authored
-
Patrick Wendell authored
-
Patrick Wendell authored
-
Reynold Xin authored
API doc update & make Broadcast public. In #413, Broadcast was mistakenly made private[spark]; I changed it to public again. Also exposed `id` publicly, since the R frontend requires it. Copied some of the documentation from the programming guide into the API docs for Broadcast and Accumulator. This should be cherry-picked into branch-0.9 as well for the 0.9.0 release.
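With Broadcast public again, user code can create and read broadcast variables directly. A hedged sketch (assumes a local Spark build; names and data are illustrative):

```scala
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(
  new SparkConf().setAppName("broadcast-sketch").setMaster("local[2]"))

// Ship a read-only lookup table to every executor once.
val lookup = sc.broadcast(Map("a" -> 1, "b" -> 2))
println(lookup.id) // `id` is public too, e.g. for logging

val total = sc.parallelize(Seq("a", "b", "a"))
  .map(k => lookup.value.getOrElse(k, 0))
  .reduce(_ + _)

sc.stop()
```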
-