- Aug 26, 2014
nchammas authored
The Contributing to Spark guide [recommends](https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark#ContributingtoSpark-AutomatedTesting) running tests by calling `./dev/run-tests`. The README should, too. `./sbt/sbt test` does not cover Python tests or style tests.

Author: nchammas <nicholas.chammas@gmail.com>

Closes #2149 from nchammas/patch-2 and squashes the following commits:

2b3b132 [nchammas] [Docs] Run tests like in contributing guide
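For reference, the two invocations this commit contrasts look like the following; a minimal sketch, run from the Spark source root, using only the commands named in the message.

```sh
# Full suite recommended by the contributing guide: Scala/Java tests
# plus the Python tests and style checks that `sbt test` skips.
./dev/run-tests

# Narrower run the README previously suggested; it does not cover
# Python tests or style tests.
./sbt/sbt test
```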
- Aug 23, 2014
Kousuke Saruta authored
[SPARK-2963] REGRESSION - The description of how to build for using the CLI and Thrift JDBC server is absent from the proper document.

The most important points I raised in #1885 are as follows:

* People who build Spark are not always programmers.
* If the person building Spark is not a programmer, he/she won't read the programming guide before building.

So the instructions for building with CLI and JDBC server support should not live only in the programming guide.

Author: Kousuke Saruta <sarutak@oss.nttdata.co.jp>

Closes #2080 from sarutak/SPARK-2963 and squashes the following commits:

ee07c76 [Kousuke Saruta] Modified regression of the description about building for using Thrift JDBC server and CLI
ed53329 [Kousuke Saruta] Modified description and notaton of proper noun
07c59fc [Kousuke Saruta] Added a description about how to build to use HiveServer and CLI for SparkSQL to building-with-maven.md
6e6645a [Kousuke Saruta] Merge branch 'master' of git://git.apache.org/spark into SPARK-2963
c88fa93 [Kousuke Saruta] Added a description about building to use HiveServer and CLI for SparkSQL
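A minimal sketch of the kind of build this documentation change covers, assuming the Hive-related profile of that era; the `-Phive` flag appears in the SPARK-3092 entry above, while the script names are assumptions about the resulting layout rather than something stated in this commit.

```sh
# Build an assembly with Hive support so the Spark SQL CLI and the
# Thrift JDBC server are available (after SPARK-3092, -Phive alone
# also pulls in the Thrift server).
./sbt/sbt -Phive assembly

# The distribution then ships the two entry points this commit documents
# (names assumed from contemporaneous Spark releases):
#   bin/spark-sql                -- Spark SQL CLI
#   sbin/start-thriftserver.sh   -- Thrift JDBC server
```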
- Aug 22, 2014
Reynold Xin authored
- Aug 20, 2014
Patrick Wendell authored
Currently we have a separate profile called hive-thriftserver. I originally suggested this in case users did not want to bundle the thriftserver, but it has ultimately led to a lot of confusion. Since the thriftserver is only a few classes, I don't see a really good reason to isolate it from the rest of Hive. So let's go ahead and just include it in the same profile to simplify things. This has been suggested in the past by liancheng.

Author: Patrick Wendell <pwendell@gmail.com>

Closes #2006 from pwendell/hiveserver and squashes the following commits:

742ea40 [Patrick Wendell] Merge remote-tracking branch 'apache/master' into hiveserver
034ad47 [Patrick Wendell] SPARK-3092: Always include the thriftserver when -Phive is enabled.
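A before/after sketch of what folding the profile in means for a build invocation; the profile names come from this message, while the surrounding Maven goals are assumed boilerplate.

```sh
# Before SPARK-3092: the Thrift JDBC server sat behind its own profile.
mvn -Phive -Phive-thriftserver -DskipTests clean package

# After SPARK-3092: enabling Hive always bundles the thriftserver.
mvn -Phive -DskipTests clean package
```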
- Aug 13, 2014
Kousuke Saruta authored
Author: Kousuke Saruta <sarutak@oss.nttdata.co.jp>

Closes #1885 from sarutak/SPARK-2963 and squashes the following commits:

ed53329 [Kousuke Saruta] Modified description and notaton of proper noun
07c59fc [Kousuke Saruta] Added a description about how to build to use HiveServer and CLI for SparkSQL to building-with-maven.md
6e6645a [Kousuke Saruta] Merge branch 'master' of git://git.apache.org/spark into SPARK-2963
c88fa93 [Kousuke Saruta] Added a description about building to use HiveServer and CLI for SparkSQL
- Jul 15, 2014
Reynold Xin authored
-
Reynold Xin authored
(cherry picked from commit 401083be9f010f95110a819a49837ecae7d9c4ec)
Signed-off-by: Reynold Xin <rxin@apache.org>
- Jul 11, 2014
Kousuke Saruta authored
Now we should use -Pyarn instead of SPARK_YARN when building, but the README still says the following:

For Apache Hadoop 2.2.X, 2.1.X, 2.0.X, 0.23.x, Cloudera CDH MRv2, and other Hadoop versions with YARN, also set `SPARK_YARN=true`:

    # Apache Hadoop 2.0.5-alpha
    $ sbt/sbt -Dhadoop.version=2.0.5-alpha -Pyarn assembly

    # Cloudera CDH 4.2.0 with MapReduce v2
    $ sbt/sbt -Dhadoop.version=2.0.0-cdh4.2.0 -Pyarn assembly

    # Apache Hadoop 2.2.X and newer
    $ sbt/sbt -Dhadoop.version=2.2.0 -Pyarn assembly

Author: Kousuke Saruta <sarutak@oss.nttdata.co.jp>

Closes #1382 from sarutak/SPARK-2457 and squashes the following commits:

e7b2d64 [Kousuke Saruta] Replaced "SPARK_YARN=true" with "-Pyarn" in README
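The gist of the fix, sketched as old-versus-new; the new form is quoted above, while the old environment-variable form is an assumption about the pre-profile build rather than text from this commit.

```sh
# Old style the README still described (assumed pre-1.0 build variables):
SPARK_HADOOP_VERSION=2.2.0 SPARK_YARN=true sbt/sbt assembly

# Style this commit documents instead:
sbt/sbt -Dhadoop.version=2.2.0 -Pyarn assembly
```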
- Jul 10, 2014
Patrick Wendell authored
- May 19, 2014
Matei Zaharia authored
- Look for JARs in the right place
- Launch examples the same way as on Unix
- Load datanucleus JARs if they exist
- Don't attempt to parse local paths as URIs in SparkSubmit, since paths with C:\ are not valid URIs
- Also fixed POM exclusion rules for datanucleus (it wasn't properly excluding it, whereas SBT was)

Author: Matei Zaharia <matei@databricks.com>

Closes #819 from mateiz/win-fixes and squashes the following commits:

d558f96 [Matei Zaharia] Fix comment
228577b [Matei Zaharia] Review comments
d3b71c7 [Matei Zaharia] Properly exclude datanucleus files in Maven assembly
144af84 [Matei Zaharia] Update Windows scripts to match latest binary package layout
- May 09, 2014
Patrick Wendell authored
Gives a nicely formatted message to the user when `run-example` is run to tell them to use `spark-submit`.

Author: Patrick Wendell <pwendell@gmail.com>

Closes #704 from pwendell/examples and squashes the following commits:

1996ee8 [Patrick Wendell] Feedback form Andrew
3eb7803 [Patrick Wendell] Suggestions from TD
2474668 [Patrick Wendell] SPARK-1565 (Addendum): Replace `run-example` with `spark-submit`.
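For context, a minimal sketch of the `spark-submit` invocation users are being pointed toward; the example class and flags are standard spark-submit usage, but the jar path is a placeholder assumption, not something this commit specifies.

```sh
# Run a bundled example via spark-submit rather than run-example.
# The examples jar location depends on the build layout; the path below
# is only a placeholder.
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master local[2] \
  lib/spark-examples-*.jar
```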
- Apr 19, 2014
Reynold Xin authored
Author: Reynold Xin <rxin@apache.org>

Closes #443 from rxin/readme and squashes the following commits:

16853de [Reynold Xin] Updated SBT and Scala instructions.
3ac3ceb [Reynold Xin] README update
- Feb 26, 2014
Reynold Xin authored
Author: Reynold Xin <rxin@apache.org>

Closes #1 from rxin/readme and squashes the following commits:

b3a77cd [Reynold Xin] Removed reference to incubation in README.md.
- Jan 09, 2014
Ankur Dave authored
- Jan 08, 2014
Prashant Sharma authored
The link does not work otherwise.
- Jan 06, 2014
Holden Karau authored
- Jan 04, 2014
Holden Karau authored
-
Holden Karau authored
- Jan 03, 2014
Patrick Wendell authored
Closes #316
-
Prashant Sharma authored
- Jan 02, 2014
Prashant Sharma authored
-
Prashant Sharma authored
-
Prashant Sharma authored
-
Prashant Sharma authored
- Dec 16, 2013
Patrick Wendell authored
- Dec 10, 2013
Patrick Wendell authored
This is misleading because the build doesn't source that file. IMO it's better to force people to specify build environment variables on the command line always, like we do in every example.
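A small sketch of what "specify build environment variables on the command line" looked like in the sbt build of that era; the variable names are assumed from the contemporaneous README, not taken from this message.

```sh
# Pass the build variables inline for a single invocation instead of
# relying on a sourced configuration file, which the build never reads.
SPARK_HADOOP_VERSION=2.2.0 SPARK_YARN=true sbt/sbt assembly
```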
- Dec 06, 2013
Patrick Wendell authored
- Nov 11, 2013
Joey authored
Changing image references to master branch.
- Nov 02, 2013
Reynold Xin authored
- Oct 29, 2013
Joey authored
-
Joey authored
-
Joseph E. Gonzalez authored
- Sep 02, 2013
Matei Zaharia authored
- Sep 01, 2013
Matei Zaharia authored
- Aug 31, 2013
Matei Zaharia authored
- Aug 30, 2013
Reynold Xin authored
- Aug 29, 2013
Matei Zaharia authored
are now needed
-
Matei Zaharia authored
This commit makes Spark invocation saner by using an assembly JAR to find all of Spark's dependencies instead of adding all the JARs in lib_managed. It also packages the examples into an assembly and uses that as SPARK_EXAMPLES_JAR.

Finally, it replaces the old "run" script with two better-named scripts: "run-examples" for examples, and "spark-class" for Spark internal classes (e.g. REPL, master, etc). This is also designed to minimize the confusion people have in trying to use "run" to run their own classes; it's not meant to do that, but now at least if they look at it, they can modify run-examples to do a decent job for them.

As part of this, Bagel's examples are also now properly moved to the examples package instead of bagel.
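A hypothetical sketch of the two entry points described above. The message calls the example runner "run-examples"; in the released 0.8-era layout the script appears as `run-example` at the repository root, so both the script names and the chosen classes here should be read as assumptions.

```sh
# Launch a bundled example against a local master via the example runner:
./run-example org.apache.spark.examples.SparkPi local

# Launch a Spark internal class, e.g. a standalone master, via spark-class:
./spark-class org.apache.spark.deploy.master.Master
```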
- Aug 21, 2013
Jey Kottalam authored
-
Jey Kottalam authored