- Jul 17, 2013
Ubuntu authored
Consistently invoke bash with /usr/bin/env bash in scripts to make code more portable (JIRA Ticket SPARK-817)
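As context for the pattern, a minimal sketch (the echo line is illustrative, not from the Spark scripts): hard-coding `#!/bin/bash` breaks on systems where bash lives elsewhere, while `env` resolves the interpreter from the PATH.

```bash
#!/usr/bin/env bash
# Portable: env looks bash up on the PATH, so this also works on
# systems where bash is not installed at /bin/bash (e.g. some BSDs).
echo "Running with: $(command -v bash)"
```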
- Jul 16, 2013
Matei Zaharia authored
- Jun 28, 2013
Matei Zaharia authored
- Jun 25, 2013
Matei Zaharia authored
The previous version assumed that a CLASSPATH environment variable was set by the "run" script when launching the process that starts the ExecutorRunner, but unfortunately this is not true in tests. Instead, we factor the classpath calculation into an external script and call that. NOTE: This includes a Windows version, but it hasn't yet been tested there.
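A hedged sketch of the factoring, assuming a helper named compute-classpath.sh and an illustrative lib/ layout:

```bash
#!/usr/bin/env bash
# compute-classpath.sh (illustrative): compute the classpath in one
# place so that both the run script and the process launching the
# ExecutorRunner can call it, instead of relying on an inherited
# CLASSPATH environment variable.
shopt -s nullglob   # an empty lib/ yields an empty loop, not a literal glob
FWDIR="$(cd "$(dirname "$0")"; pwd)"
CLASSPATH="$FWDIR/conf"
for jar in "$FWDIR"/lib/*.jar; do
  CLASSPATH="$CLASSPATH:$jar"
done
echo "$CLASSPATH"
```

A caller would then use `CLASSPATH=$(./compute-classpath.sh)` rather than assuming the variable was already exported.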
Evan Chan authored
- Jun 24, 2013
Evan Chan authored
- Jun 22, 2013
Matei Zaharia authored
- May 16, 2013
Reynold Xin authored
Check for core classes in run. This fixes the problem that core tests depended on whether the repl module was compiled or not.
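A sketch of such a guard, with an assumed build-output path:

```bash
#!/usr/bin/env bash
# Fail fast if the core classes have not been built, instead of
# failing later in a way that depends on whether the repl module
# happened to be compiled.
CORE_CLASSES="core/target/scala-2.9.3/classes"   # illustrative path
if [ ! -d "$CORE_CLASSES" ]; then
  echo "Spark core classes not found; run 'sbt/sbt compile' first." >&2
  exit 1
fi
```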
- May 11, 2013
Mridul Muralidharan authored
1) Add support for HADOOP_CONF_DIR (and/or YARN_CONF_DIR; either works), which specifies the client-side configuration directory that needs to be part of the CLASSPATH. 2) Move from var+=".." to var="$var..", since the former unfortunately does not work on older bash shells.
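A sketch of both changes in bash (the surrounding script is elided; the extra.jar entry is illustrative):

```bash
#!/usr/bin/env bash
# (1) Put the client-side Hadoop/YARN configuration directory on the
#     classpath, whichever of the two variables is set.
if [ -n "$HADOOP_CONF_DIR" ]; then
  CLASSPATH="$CLASSPATH:$HADOOP_CONF_DIR"
elif [ -n "$YARN_CONF_DIR" ]; then
  CLASSPATH="$CLASSPATH:$YARN_CONF_DIR"
fi

# (2) Append portably: the += operator requires bash >= 3.1, so write
#     var="$var..." instead of var+="...".
CLASSPATH="$CLASSPATH:extra.jar"
echo "$CLASSPATH"
```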
- Apr 30, 2013
Mridul Muralidharan authored
- Apr 11, 2013
Mike authored
Reversed the order of the tests used to find a scala executable (in the case when SPARK_LAUNCH_WITH_SCALA is defined): instead of checking the PATH first and only then (if not found) SCALA_HOME, we now check SCALA_HOME first and only then (if not defined) look in the PATH. The advantage is that if the user has a more recent (incompatible) version of scala in her PATH, she can use SCALA_HOME to point to the older (compatible) version for use with Spark. Suggested by Josh Rosen in this thread: https://groups.google.com/forum/?fromgroups=#!topic/spark-users/NC9JKvP8808
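A sketch of the new lookup order:

```bash
#!/usr/bin/env bash
# Prefer the explicitly pinned SCALA_HOME over whatever scala happens
# to be first on the PATH, so an incompatible system-wide version can
# be overridden.
if [ -n "$SCALA_HOME" ]; then
  SCALA="$SCALA_HOME/bin/scala"
elif command -v scala >/dev/null 2>&1; then
  SCALA="$(command -v scala)"
else
  echo "No scala executable found; please set SCALA_HOME." >&2
  exit 1
fi
echo "Using scala at: $SCALA"
```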
- Apr 07, 2013
Patrick Wendell authored
- Apr 03, 2013
Patrick Wendell authored
See the JIRA for more details. I was only able to test the bash version (I don't have Windows), so please check that the syntax is correct there.
- Feb 26, 2013
Matei Zaharia authored
- Feb 25, 2013
Matei Zaharia authored
Matei Zaharia authored
- Feb 24, 2013
Tathagata Das authored
Christoph Grothaus authored
- We do not need getEnvOrEmpty.
- Instead of saving SPARK_NONDAEMON_JAVA_OPTS, it would be better to modify the scripts to use a different variable name for the JAVA_OPTS they eventually use.
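A hedged sketch of the suggested split; SPARK_DAEMON_JAVA_OPTS is the name Spark's configuration later documented for daemon options, and the mode check itself is illustrative:

```bash
#!/usr/bin/env bash
# Select the options for the launch mode directly, instead of saving
# and restoring a single shared variable via SPARK_NONDAEMON_JAVA_OPTS.
if [ "$LAUNCH_MODE" = "daemon" ]; then   # LAUNCH_MODE is hypothetical
  JAVA_OPTS="$SPARK_DAEMON_JAVA_OPTS"
else
  JAVA_OPTS="$SPARK_JAVA_OPTS"
fi
echo "JAVA_OPTS=$JAVA_OPTS"
```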
- Feb 20, 2013
Christoph Grothaus authored
- Feb 16, 2013
haitao.yao authored
- Feb 10, 2013
Matei Zaharia authored
Conflicts: docs/_config.yml
- Jan 20, 2013
Matei Zaharia authored
- Jan 18, 2013
seanm authored
- Jan 17, 2013
Matei Zaharia authored
- Jan 08, 2013
Stephen Haberman authored
- Jan 06, 2013
Tathagata Das authored
- Jan 01, 2013
Josh Rosen authored
- Dec 28, 2012
Josh Rosen authored
- Bundle the Py4J binaries, since Py4J is hard to install.
- Use Spark's `run` script to launch the Py4J gateway, inheriting the settings in spark-env.sh.
With these changes, (hopefully) nothing more than running `sbt/sbt package` will be necessary to run PySpark.
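If that holds, the workflow would be roughly the following (the `./pyspark` launcher name is an assumption for this era of the repository):

```bash
# Build Spark; the bundled Py4J binaries remove the separate install step.
sbt/sbt package
# Launch PySpark via the run-based gateway (launcher name assumed).
./pyspark
```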
- Dec 10, 2012
Matei Zaharia authored
- Nov 12, 2012
Tathagata Das authored
Fixed bugs in RawNetworkInputDStream and in its examples. Made the ReducedWindowedDStream persist RDDs as MEMORY_SER_ONLY by default. Removed unnecessary examples. Added streaming-env.sh.template with recommended settings for streaming.
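A hedged sketch of what such a template might contain; the specific GC flag is an assumption, chosen because low-pause collectors were the usual recommendation for streaming workloads at the time:

```bash
# streaming-env.sh.template (illustrative): copy to streaming-env.sh
# and adjust. The CMS flag below is an assumed example of a
# recommended streaming setting, not a quote from the actual file.
SPARK_JAVA_OPTS="$SPARK_JAVA_OPTS -XX:+UseConcMarkSweepGC"
export SPARK_JAVA_OPTS
```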
- Oct 22, 2012
Thomas Dudziak authored
- Oct 13, 2012
Matei Zaharia authored
- Oct 07, 2012
Matei Zaharia authored
with Mesos being in Maven
- Oct 06, 2012
root authored
- Oct 04, 2012
Matei Zaharia authored
- Sep 24, 2012
Matei Zaharia authored
- Sep 10, 2012
Matei Zaharia authored
- Sep 07, 2012
Matei Zaharia authored
- Aug 04, 2012
Denny authored
- Jul 28, 2012
Matei Zaharia authored