- Feb 20, 2014
Chen Chao authored
url of "Collaborative Filtering for Implicit Feedback Datasets" is invalid now. A new url is provided. http://research.yahoo.com/files/HuKorenVolinsky-ICDM08.pdf Author: Chen Chao <crazyjvm@gmail.com> Closes #619 from CrazyJvm/master and squashes the following commits: a0b54e4 [Chen Chao] change url to IEEE 9e0e9f0 [Chen Chao] correct spell mistale fcfab5d [Chen Chao] wrap line to to fit within 100 chars 590d56e [Chen Chao] url error
-
- Feb 19, 2014
CodingCat authored
https://spark-project.atlassian.net/browse/SPARK-1105: fix site Scala version error.

Author: CodingCat <zhunansjtu@gmail.com>
Closes #618 from CodingCat/doc_version and squashes the following commits:
39bb8aa [CodingCat] more fixes
65bedb0 [CodingCat] fix site scala version error in doc
-
- Feb 18, 2014
Xiangrui Meng authored
I launched an EC2 cluster without providing a key name and an identity file. The error showed up after two minutes. It would be good to check those options before launch, given that EC2 billing rounds up to hours.

JIRA: https://spark-project.atlassian.net/browse/SPARK-1106

Author: Xiangrui Meng <meng@databricks.com>
Closes #617 from mengxr/ec2 and squashes the following commits:
2dfb316 [Xiangrui Meng] check key name and identity file before launching a cluster
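A minimal sketch of the fail-fast idea in Scala (the real spark_ec2 script is Python; the option names here are illustrative):

```scala
object Ec2LaunchCheck {
  case class LaunchOpts(keyPair: Option[String], identityFile: Option[String])

  // Validate required options before any billable EC2 resources are
  // created, so a missing -k/-i fails in seconds rather than minutes.
  def checkBeforeLaunch(opts: LaunchOpts): Unit = {
    if (opts.keyPair.isEmpty) {
      System.err.println("ERROR: must provide a key pair name (-k) to launch instances")
      sys.exit(1)
    }
    if (opts.identityFile.isEmpty) {
      System.err.println("ERROR: must provide an identity file (-i) for SSH access")
      sys.exit(1)
    }
  }
}
```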
-
Patrick Wendell authored
This reverts commit d99773d5.
-
CodingCat authored
https://spark-project.atlassian.net/browse/SPARK-1105: fix site Scala version error.

Author: CodingCat <zhunansjtu@gmail.com>
Closes #616 from CodingCat/doc_version and squashes the following commits:
eafd99a [CodingCat] fix site scala version error in doc
-
NirmalReddy authored
Optimized imports and arranged them according to the Scala style guide at https://cwiki.apache.org/confluence/display/SPARK/Spark+Code+Style+Guide#SparkCodeStyleGuide-Imports

Author: NirmalReddy <nirmal.reddy@imaginea.com>
Author: NirmalReddy <nirmal_reddy2000@yahoo.com>
Closes #613 from NirmalReddy/opt-imports and squashes the following commits:
578b4f5 [NirmalReddy] imported java.lang.Double as JDouble
a2cbcc5 [NirmalReddy] addressed the comments
776d664 [NirmalReddy] Optimized imports in core
-
- Feb 17, 2014
Aaron Davidson authored
Our usage of fake ClassTags in this manner is probably not healthy, but I'm not sure if there's a better solution available, so I just cleaned up and documented the current one.

Author: Aaron Davidson <aaron@databricks.com>
Closes #604 from aarondav/master and squashes the following commits:
b398e89 [Aaron Davidson] SPARK-1098: Minor cleanup of ClassTag usage in Java API
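The pattern being cleaned up, sketched (the helper name mirrors the one in Spark's Java API, but treat this as an illustration rather than the exact source):

```scala
import scala.reflect.ClassTag

object FakeClassTagDemo {
  // Java callers cannot summon a real ClassTag, so the Java API conjures
  // one that erases to AnyRef. This is safe only because the JVM erases
  // generics anyway; any array built through it is an Object array.
  def fakeClassTag[T]: ClassTag[T] = ClassTag.AnyRef.asInstanceOf[ClassTag[T]]

  def main(args: Array[String]): Unit = {
    implicit val tag: ClassTag[String] = fakeClassTag[String]
    println(tag) // prints the AnyRef tag, not String's: the "fake" part
  }
}
```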
-
CodingCat authored
https://spark-project.atlassian.net/browse/SPARK-1090

spark-shell should print help information about its parameters and should allow the user to configure executor memory. There is no documentation about how to set --cores/-c in spark-shell, and users should also be able to set executor memory through command-line options. In this PR I also check the format of the options passed by the user.

Author: CodingCat <zhunansjtu@gmail.com>
Closes #599 from CodingCat/spark_shell_improve and squashes the following commits:
de5aa38 [CodingCat] add parameter to set driver memory
915cbf8 [CodingCat] improvement on spark_shell (help information, configure memory)
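A sketch of the kind of format check the PR describes, in Scala (spark-shell itself is a shell script; the regex and messages are assumptions):

```scala
object MemoryOptionCheck {
  // Accept memory settings like "512m" or "2g"; reject anything else
  // before it reaches the JVM as a malformed -Xmx value.
  private val MemoryPattern = """([0-9]+)([mMgG])""".r

  def parseMemory(arg: String): Either[String, String] = arg match {
    case MemoryPattern(_, _) => Right(arg)
    case _                   => Left(s"invalid memory setting '$arg'; expected e.g. 512m or 2g")
  }

  def main(args: Array[String]): Unit = {
    println(parseMemory("512m")) // Right(512m)
    println(parseMemory("lots")) // Left(invalid memory setting 'lots'; ...)
  }
}
```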
-
Andrew Or authored
Author: Andrew Or <andrewor14@gmail.com>
Closes #536 from andrewor14/streaming-typos and squashes the following commits:
a05faa6 [Andrew Or] Fix broken link and wording
bc2e4bc [Andrew Or] Merge github.com:apache/incubator-spark into streaming-typos
d5515b4 [Andrew Or] TD's comments
767ef12 [Andrew Or] Fix broken links
8f4c731 [Andrew Or] Fix typos in programming guide
-
Andrew Ash authored
Author: Andrew Ash <andrew@andrewash.com>
Closes #608 from ash211/patch-7 and squashes the following commits:
bd85f2a [Andrew Ash] Worker registration logging fix
-
- Feb 16, 2014
Punya Biswal authored
Author: Punya Biswal <pbiswal@palantir.com>
Closes #600 from punya/subtractByKey-java and squashes the following commits:
e961913 [Punya Biswal] Hide implicit ClassTags from Java API
c5d317b [Punya Biswal] Add subtractByKey to the JavaPairRDD wrapper
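The shape of such a wrapper, sketched (illustrative, not the merged source; assumes Spark 0.9-era APIs on the classpath):

```scala
import scala.reflect.ClassTag
import org.apache.spark.SparkContext._
import org.apache.spark.rdd.RDD

// The underlying Scala method needs an implicit ClassTag[W], which the
// Java-facing method fabricates itself (the same fake-ClassTag trick as
// in the cleanup above) so Java callers never supply one.
class PairWrapper[K, V](rdd: RDD[(K, V)])(implicit kt: ClassTag[K], vt: ClassTag[V]) {
  def subtractByKey[W](other: RDD[(K, W)]): RDD[(K, V)] = {
    implicit val wTag: ClassTag[W] = ClassTag.AnyRef.asInstanceOf[ClassTag[W]]
    rdd.subtractByKey(other)
  }
}
```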
-
Bijay Bisht authored
Fix for https://spark-project.atlassian.net/browse/SPARK-1052

Author: Bijay Bisht <bijay.bisht@gmail.com>
Closes #568 from bijaybisht/SPARK-1052 and squashes the following commits:
da70395 [Bijay Bisht] fix for https://spark-project.atlassian.net/browse/SPARK-1052 - comments incorporated
fdb1d94 [Bijay Bisht] fix for https://spark-project.atlassian.net/browse/SPARK-1052
(cherry picked from commit e797c1ab)
Signed-off-by: Aaron Davidson <aaron@databricks.com>
-
CodingCat authored
https://spark-project.atlassian.net/browse/SPARK-1092?jql=project%20%3D%20SPARK

Print a warning if the user sets SPARK_MEM to regulate the memory usage of executors.

OUTDATED: Currently, users usually set SPARK_MEM to control the memory usage of driver programs (in spark-class):

  JAVA_OPTS="$OUR_JAVA_OPTS"
  JAVA_OPTS="$JAVA_OPTS -Djava.library.path=$SPARK_LIBRARY_PATH"
  JAVA_OPTS="$JAVA_OPTS -Xms$SPARK_MEM -Xmx$SPARK_MEM"

If they didn't set spark.executor.memory, the value of this environment variable also affects the memory usage of executors, because of the following lines in SparkContext:

  private[spark] val executorMemory = conf.getOption("spark.executor.memory")
    .orElse(Option(System.getenv("SPARK_MEM")))
    .map(Utils.memoryStringToMb)
    .getOrElse(512)

Also, since SPARK_MEM has been (proposed to be) deprecated in SPARK-929 (https://spark-project.atlassian.net/browse/SPARK-929) and the corresponding PR (https://github.com/apache/incubator-spark/pull/104), we should remove this fallback.

Author: CodingCat <zhunansjtu@gmail.com>
Closes #602 from CodingCat/clean_spark_mem and squashes the following commits:
302bb28 [CodingCat] print warning information if user uses SPARK_MEM to regulate executor memory usage
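A self-contained sketch of that resolution order plus the warning this PR adds (memoryStringToMb below is a simplified stand-in for Utils.memoryStringToMb, and the warning text is an assumption):

```scala
object ExecutorMemory {
  // Simplified stand-in for Utils.memoryStringToMb: handles only bare
  // numbers and m/g suffixes; the real helper does more.
  private def memoryStringToMb(s: String): Int = {
    val lower = s.toLowerCase
    if (lower.endsWith("g")) lower.dropRight(1).toInt * 1024
    else if (lower.endsWith("m")) lower.dropRight(1).toInt
    else lower.toInt
  }

  // Resolution order from the quoted SparkContext snippet: explicit conf
  // value first, then the deprecated SPARK_MEM env var (now with a
  // warning), then the 512 MB default.
  def resolve(confValue: Option[String]): Int =
    confValue
      .orElse {
        val env = Option(System.getenv("SPARK_MEM"))
        env.foreach { _ =>
          System.err.println("WARN: SPARK_MEM is deprecated; set spark.executor.memory instead")
        }
        env
      }
      .map(memoryStringToMb)
      .getOrElse(512)
}
```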
-
- Feb 14, 2014
Andrew Ash authored
Author: Andrew Ash <andrew@andrewash.com>
Closes #601 from ash211/typo and squashes the following commits:
9cd43ac [Andrew Ash] Change docs references to metrics.properties, not metrics.conf
3813ff1 [Andrew Ash] Typo: mulitcast -> multicast
873bd2f [Andrew Ash] Typo: Standlone -> Standalone
-
- Feb 13, 2014
Shivaram Venkataraman authored
Update spark_ec2 to use 0.9.0 by default. Backports change from branch-0.9.

Author: Shivaram Venkataraman <shivaram@eecs.berkeley.edu>
Closes #598 and squashes the following commits:
f6d3ed0 [Shivaram Venkataraman] Update spark_ec2 to use 0.9.0 by default Backports change from branch-0.9
-
Christian Lundgren authored
The number of disks for the c3 instance types is taken from http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html#StorageOnInstanceTypes

Author: Christian Lundgren <christian.lundgren@gameanalytics.com>
Closes #595 from chrisavl/branch-0.9 and squashes the following commits:
c8af5f9 [Christian Lundgren] Add c3 instance types to Spark EC2
(cherry picked from commit 19b4bb2b)
Signed-off-by: Patrick Wendell <pwendell@gmail.com>
-
Bijay Bisht authored
PR #522 got messed up after I rewrote the branch hadoop_jar_name, so I created a new one.

Author: Bijay Bisht <bijay.bisht@gmail.com>
Closes #584 from bijaybisht/hadoop_jar_name_on_0.9.0 and squashes the following commits:
1b6fb3c [Bijay Bisht] Ported hadoopClient jar for < 1.0.1 fix
(cherry picked from commit 8093de1b)
Signed-off-by: Patrick Wendell <pwendell@gmail.com>
-
Andrew Ash authored
The first line of a git commit message is the line used by many git tools as the most concise textual description of that commit. The most common use I see is the short log, a one-line-per-commit log of recent commits. This commit moves the line "Merge pull request #%s from %s." lower into the message body, to reserve the first line of the resulting commit for the much more important pull request title. See http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html

Author: Andrew Ash <andrew@andrewash.com>
Closes #574 from ash211/gh-pr-merge-title and squashes the following commits:
b240823 [Andrew Ash] More merge_message improvements
d2986db [Andrew Ash] Keep GitHub pull request title as commit summary
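The change transliterated as a sketch (the actual merge script is Python; names are illustrative):

```scala
object MergeMessage {
  // Keep the PR title as the commit's first line so `git log --oneline`
  // shows something meaningful, and demote the boilerplate merge line
  // into the body.
  def mergeMessage(prNumber: Int, sourceBranch: String, title: String, body: String): String =
    s"""$title
       |
       |Merge pull request #$prNumber from $sourceBranch.
       |
       |$body""".stripMargin
}
```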
-
Reynold Xin authored
SPARK-1088: Create a script for running tests so we can have version-specific testing on Jenkins. @pwendell

Author: Reynold Xin <rxin@apache.org>
Closes #592 and squashes the following commits:
be02359 [Reynold Xin] SPARK-1088: Create a script for running tests so we can have version specific testing on Jenkins.
-
- Feb 12, 2014
Xiangrui Meng authored
SPARK-1076: [Fix #578] add @transient to some vals. I'll try to be more careful next time.

Author: Xiangrui Meng <meng@databricks.com>
Closes #591 and squashes the following commits:
2b4f044 [Xiangrui Meng] add @transient to prev in ZippedWithIndexRDD; add @transient to seed in PartitionwiseSampledRDD
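A minimal illustration of the mechanism behind the fix (plain Scala, not Spark source): a @transient val is skipped by Java serialization, so driver-only references don't travel with the object to executors.

```scala
import java.io.{ByteArrayOutputStream, ObjectOutputStream}

object TransientDemo {
  // Thread stands in for a driver-only reference that should not ship
  // with the object.
  class Holder(@transient val driverOnly: Thread, val shipped: Int) extends Serializable

  def main(args: Array[String]): Unit = {
    val bytes = new ByteArrayOutputStream()
    val out = new ObjectOutputStream(bytes)
    // Without @transient this would throw NotSerializableException,
    // because a live Thread cannot be serialized.
    out.writeObject(new Holder(Thread.currentThread(), 42))
    out.close()
    println(s"serialized ${bytes.size()} bytes, Thread field excluded")
  }
}
```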
-
Xiangrui Meng authored
SPARK-1076: Convert Int to Long to avoid overflow. Patch for PR #578.

Author: Xiangrui Meng <meng@databricks.com>
Closes #589 and squashes the following commits:
98c435e [Xiangrui Meng] cast Int to Long to avoid Int overflow
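The failure mode in two lines (plain Scala):

```scala
object OverflowDemo {
  def main(args: Array[String]): Unit = {
    // Summing per-partition counts as Int overflows past ~2.1 billion
    // records; casting to Long before the sum is the fix.
    val counts = Array(2000000000, 2000000000)
    val badSum: Int   = counts.sum                 // wraps around: negative
    val goodSum: Long = counts.map(_.toLong).sum   // correct: 4000000000
    println(s"as Int: $badSum, as Long: $goodSum")
  }
}
```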
-
Xiangrui Meng authored
SPARK-1076: zipWithIndex and zipWithUniqueId to RDD

Assigning ranks to an ordered or unordered data set is a common operation. It could be done by first counting records in each partition and then assigning ranks in parallel. The purpose of assigning ranks to an unordered set is usually to get a unique id for each item, e.g., to map feature names to feature indices. In such cases, the assignment can be done without counting records, saving one Spark job.

https://spark-project.atlassian.net/browse/SPARK-1076

Update: because assigning ranks is very similar to Scala's zipWithIndex, I changed the method name to zipWithIndex and put the index in the value field.

Author: Xiangrui Meng <meng@databricks.com>
Closes #578 and squashes the following commits:
52a05e1 [Xiangrui Meng] changed assignRanks to zipWithIndex, changed assignUniqueIds to zipWithUniqueId, minor updates
756881c [Xiangrui Meng] simplified RankedRDD by implementing assignUniqueIds separately; moved counting iterator size to Utils; do not count items in the last partition and skip counting if there is only one partition
630868c [Xiangrui Meng] newline
21b434b [Xiangrui Meng] add assignRanks and assignUniqueIds to RDD
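The two algorithms, sketched on plain Scala collections standing in for partitions (illustrative shapes, not the RDD implementations):

```scala
object ZipWithIndexSketch {
  // zipWithIndex: one counting pass turns per-partition sizes into start
  // offsets, then index = startOffset(partition) + positionInPartition.
  def zipWithIndex[T](partitions: Seq[Seq[T]]): Seq[Seq[(T, Long)]] = {
    val startOffsets = partitions.map(_.size.toLong).scanLeft(0L)(_ + _)
    partitions.zipWithIndex.map { case (part, p) =>
      part.zipWithIndex.map { case (item, i) => (item, startOffsets(p) + i) }
    }
  }

  // zipWithUniqueId: no counting pass needed; ids are unique but not
  // consecutive, e.g. id = positionInPartition * numPartitions + partition.
  def zipWithUniqueId[T](partitions: Seq[Seq[T]]): Seq[Seq[(T, Long)]] = {
    val n = partitions.size
    partitions.zipWithIndex.map { case (part, p) =>
      part.zipWithIndex.map { case (item, i) => (item, i.toLong * n + p) }
    }
  }

  def main(args: Array[String]): Unit = {
    val parts = Seq(Seq("a", "b"), Seq("c"), Seq("d", "e"))
    println(zipWithIndex(parts).flatten)    // (a,0), (b,1), (c,2), (d,3), (e,4)
    println(zipWithUniqueId(parts).flatten) // (a,0), (b,3), (c,1), (d,2), (e,5)
  }
}
```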
-
Raymond Liu authored
Minor fix for ZooKeeperPersistenceEngine to use the configured working dir.

Author: Raymond Liu <raymond.liu@intel.com>
Closes #583 and squashes the following commits:
91b0609 [Raymond Liu] Minor fix for ZooKeeperPersistenceEngine to use configured working dir
-
- Feb 11, 2014
Holden Karau authored
SPARK-1072: Use binary search when needed in RangePartitioner

Author: Holden Karau <holden@pigscanfly.ca>
Closes #571 and squashes the following commits:
f31a2e1 [Holden Karau] Switch to using CollectionsUtils in Partitioner
4c7a0c3 [Holden Karau] Add CollectionsUtil as suggested by aarondav
7099962 [Holden Karau] Add the binary search to only init once
1bef01d [Holden Karau] CR feedback
a21e097 [Holden Karau] Use binary search if we have more than 1000 elements inside of RangePartitioner
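The optimization, sketched over integer keys (the 1000-element threshold comes from the PR; the real RangePartitioner is generic and delegates to a CollectionsUtils helper):

```scala
import java.util.Arrays

object RangeLookupSketch {
  // `bounds` holds the sorted upper bounds of each partition's key range.
  // A linear scan is fine for a handful of partitions, but binary search
  // wins once there are many.
  def getPartition(key: Int, bounds: Array[Int]): Int =
    if (bounds.length <= 1000) {
      var p = 0
      while (p < bounds.length && key > bounds(p)) p += 1
      p
    } else {
      val i = Arrays.binarySearch(bounds, key)
      if (i >= 0) i else -i - 1 // insertion point when the key is absent
    }

  def main(args: Array[String]): Unit = {
    println(getPartition(15, Array(10, 20, 30))) // 1
  }
}
```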
-
Henry Saputra authored
SPARK-1075: Fix doc in the Spark Streaming custom receiver: closing bracket in the class constructor

The closing parenthesis of the constructor in the first code block example is reversed:

  diff --git a/docs/streaming-custom-receivers.md b/docs/streaming-custom-receivers.md
  index 4e27d65..3fb540c 100644
  --- a/docs/streaming-custom-receivers.md
  +++ b/docs/streaming-custom-receivers.md
  @@ -14,7 +14,7 @@ This starts with implementing NetworkReceiver(api/streaming/index.html#org.apa
   The following is a simple socket text-stream receiver.
   {% highlight scala %}
  -  class SocketTextStreamReceiver(host: String, port: Int(
  +  class SocketTextStreamReceiver(host: String, port: Int)
       extends NetworkReceiver[String] {
       protected lazy val blocksGenerator: BlockGenerator =

Author: Henry Saputra <henry@platfora.com>
Closes #577 and squashes the following commits:
6508341 [Henry Saputra] SPARK-1075 Fix doc in the Spark Streaming custom receiver.
-
Chen Chao authored
"in the source DStream" rather than "int the source DStream" "flatMap is a one-to-many DStream operation that creates a new DStream by generating multiple new records from each record int the source DStream." Author: Chen Chao <crazyjvm@gmail.com> Closes #579 and squashes the following commits: 4abcae3 [Chen Chao] in the source DStream
-
- Feb 10, 2014
Patrick Wendell authored
This reverts commit b6d40b78.
-
Prashant Sharma authored
SPARK-1058, Fix Style Errors and Add Scala Style to Spark Build, Pt 2

Continuation of PR #557. With this, all Scala style errors are fixed across the code base! The reason for creating a separate PR was to not interrupt an already reviewed and ready-to-merge PR. Hope this gets reviewed soon and merged too.

Author: Prashant Sharma <prashant.s@imaginea.com>
Closes #567 and squashes the following commits:
3b1ec30 [Prashant Sharma] scala style fixes
-
- Feb 09, 2014
Martin Jaggi authored
New MLlib documentation for optimization, regression and classification

New documentation with TeX formulas, hopefully improving the usability and reproducibility of the offered MLlib methods. Also made some minor changes in the code for consistency. Scala tests pass.

This is the rebased branch; I deleted the old PR.

JIRA: https://spark-project.atlassian.net/browse/MLLIB-19

Author: Martin Jaggi <m.jaggi@gmail.com>
Closes #566 and squashes the following commits:
5f0f31e [Martin Jaggi] line wrap at 100 chars
4e094fb [Martin Jaggi] better description of GradientDescent
1d6965d [Martin Jaggi] remove broken url
ea569c3 [Martin Jaggi] telling what updater actually does
964732b [Martin Jaggi] lambda R() in documentation
a6c6228 [Martin Jaggi] better comments in SGD code for regression
b32224a [Martin Jaggi] new optimization documentation
d5dfef7 [Martin Jaggi] new classification and regression documentation
b07ead6 [Martin Jaggi] correct scaling for MSE loss
ba6158c [Martin Jaggi] use d for the number of features
bab2ed2 [Martin Jaggi] renaming LeastSquaresGradient
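For flavor, the kind of expression the new docs render, e.g. a regularized least-squares objective (an assumed form echoing the "MSE loss" and "lambda R()" commits above, not quoted from the docs):

```latex
% Regularized least-squares objective over n examples in d dimensions:
L(\mathbf{w}) = \frac{1}{n} \sum_{i=1}^{n} \left( \mathbf{w}^\top \mathbf{x}_i - y_i \right)^2 + \lambda \, R(\mathbf{w})
```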
-
qqsun8819 authored
[SPARK-1038] Add more fields in JsonProtocol and add tests that verify the JSON itself

This is a PR for SPARK-1038. Two major changes:
1. Add some fields to JsonProtocol that are new and important to standalone-related data structures.
2. Use Diff in liftweb.json to verify the stringified JSON output, to detect someone changing a type T to Option[T].

Author: qqsun8819 <jin.oyj@alibaba-inc.com>
Closes #551 and squashes the following commits:
fdf0b4e [qqsun8819] [SPARK-1038] 1. Change code style for more readability according to rxin's review 2. Change submitdate hard-coded string to a date object's toString for more flexibility
095a26f [qqsun8819] [SPARK-1038] mod according to review of pwendel, use hard-coded json string for json data validation. Each test uses its own json string
0524e41 [qqsun8819] Merge remote-tracking branch 'upstream/master' into json-protocol
d203d5c [qqsun8819] [SPARK-1038] Add more fields in JsonProtocol and add tests that verify the JSON itself
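A sketch of the verification technique named in change 2, using lift-json's Diff (the JSON field names here are hypothetical):

```scala
import net.liftweb.json._

object JsonDiffDemo {
  def main(args: Array[String]): Unit = {
    // `diff` decomposes the difference between two JSON values into
    // changed/added/deleted parts, so a test comparing serializer output
    // against a hard-coded expected string can pinpoint what moved,
    // e.g. a field's type silently changing from T to Option[T].
    val expected = parse("""{"id":"app-1","memoryPerSlave":512}""")
    val actual   = parse("""{"id":"app-1","memoryPerSlave":1024}""")
    val Diff(changed, added, deleted) = expected diff actual
    println(s"changed: $changed, added: $added, deleted: $deleted")
  }
}
```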
-
Patrick Wendell authored
Fixes a bug where merges won't close the associated pull request. Previously we added "Closes #XX" in the title. GitHub will sometimes line-break the title in a way that causes this not to work. This patch instead adds the line in the body. This also makes the commit format more concise for merge commits. We might consider just dropping those in the future.

Author: Patrick Wendell <pwendell@gmail.com>
Closes #569 and squashes the following commits:
732eba1 [Patrick Wendell] Fixes bug where merges won't close associated pull request.
-
Patrick Wendell authored
SPARK-1058, Fix Style Errors and Add Scala Style to Spark Build

Author: Patrick Wendell <pwendell@gmail.com>
Author: Prashant Sharma <scrapcodes@gmail.com>

== Merge branch commits ==

commit 1a8bd1c059b842cb95cc246aaea74a79fec684f4
Author: Prashant Sharma <scrapcodes@gmail.com>
Date: Sun Feb 9 17:39:07 2014 +0530

    scala style fixes

commit f91709887a8e0b608c5c2b282db19b8a44d53a43
Author: Patrick Wendell <pwendell@gmail.com>
Date: Fri Jan 24 11:22:53 2014 -0800

    Adding scalastyle snapshot
-
CodingCat authored
[SPARK-1060] startJettyServer should explicitly use IP information

https://spark-project.atlassian.net/browse/SPARK-1060

In the current implementation, the web server in Master/Worker is started with:

  val (srv, bPort) = JettyUtils.startJettyServer("0.0.0.0", port, handlers)

Inside startJettyServer:

  val server = new Server(currentPort) // here, the Server takes "0.0.0.0" as the hostname, i.e. it will always bind to the IP address of the first NIC

This can cause a wrong IP binding: e.g., if the host has two NICs, N1 and N2, and the user specifies SPARK_LOCAL_IP as N2's IP address, then when starting the web server, for the reason stated above, it will always bind to N1's address.

Author: CodingCat <zhunansjtu@gmail.com>

== Merge branch commits ==

commit 6c6d9a8ccc9ec4590678a3b34cb03df19092029d
Author: CodingCat <zhunansjtu@gmail.com>
Date: Thu Feb 6 14:53:34 2014 -0500

    startJettyServer should explicitly use IP information
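The crux of the fix, sketched against the Jetty API (method name and wiring are illustrative):

```scala
import java.net.InetSocketAddress
import org.eclipse.jetty.server.Server

object JettyBindSketch {
  // `new Server(port)` binds the wildcard address, which produced the
  // wrong-NIC behavior described above; constructing the Server from an
  // InetSocketAddress binds exactly the configured host, e.g. the
  // address the user put in SPARK_LOCAL_IP.
  def startOn(hostName: String, port: Int): Server = {
    val server = new Server(new InetSocketAddress(hostName, port))
    server.start()
    server
  }
}
```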
-
jyotiska authored
Added example Python code for sort

I added example Python code for sort. Right now, PySpark has limited examples for new people willing to use the project. This example code sorts integers stored in a file. I was able to sort 5 million, 10 million and 25 million integers with this code.

Author: jyotiska <jyotiska123@gmail.com>

== Merge branch commits ==

commit 8ad8faf6c8e02ae1cd68565d98524edf165f54df
Author: jyotiska <jyotiska123@gmail.com>
Date: Sun Feb 9 11:00:41 2014 +0530

    Added comments in code on collect() method

commit 6f98f1e313f4472a7c2207d36c4f0fbcebc95a8c
Author: jyotiska <jyotiska123@gmail.com>
Date: Sat Feb 8 13:12:37 2014 +0530

    Updated python example code sort.py

commit 945e39a5d68daa7e5bab0d96cbd35d7c4b04eafb
Author: jyotiska <jyotiska123@gmail.com>
Date: Sat Feb 8 12:59:09 2014 +0530

    Added example python code for sort
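The merged example itself is Python (sort.py); here is a Scala transliteration of the same pipeline, assuming a local master and a file with one integer per line:

```scala
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._

object SortExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext("local", "Sort")
    val sorted = sc.textFile(args(0))
      .map(line => (line.trim.toInt, 1))
      .sortByKey()
      .map(_._1)
    // collect() pulls the entire result to the driver: fine for an
    // example, something to think twice about for 25 million integers.
    sorted.collect().foreach(println)
    sc.stop()
  }
}
```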
-
Patrick Wendell authored
[WIP] SPARK-1067: Default log4j initialization causes errors for those not using log4j

To fix this, we add a check when initializing log4j.

Author: Patrick Wendell <pwendell@gmail.com>

== Merge branch commits ==

commit ffdce513877f64b6eed6d36138c3e0003d392889
Author: Patrick Wendell <pwendell@gmail.com>
Date: Fri Feb 7 15:22:29 2014 -0800

    Logging fix
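A sketch of the guard (log4j 1.2 API, which Spark used at the time; the default appender and pattern here are assumptions, not Spark's exact defaults):

```scala
import org.apache.log4j.{ConsoleAppender, Level, LogManager, PatternLayout}

object Log4jInitCheck {
  // Only install default log4j settings when the user hasn't configured
  // any appenders themselves; otherwise leave their setup alone.
  def initIfNeeded(): Unit = {
    val root = LogManager.getRootLogger
    if (!root.getAllAppenders.hasMoreElements) {
      root.addAppender(new ConsoleAppender(new PatternLayout("%d %p %c: %m%n")))
      root.setLevel(Level.INFO)
    }
  }
}
```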
-
Patrick Wendell authored
SPARK-1066: Add developer scripts to repository.

These are some developer scripts I've been maintaining in a separate public repo. This patch adds them to the Spark repository so they can evolve here and are clearly accessible to all committers. I may do some small additional clean-up in this PR, but wanted to put them here in case others want to review. There are a few types of scripts here:

1. A tool to merge pull requests.
2. A script for packaging releases.
3. A script for auditing release candidates.

Author: Patrick Wendell <pwendell@gmail.com>

== Merge branch commits ==

commit 5d5d331d01f6fd59c2eb830f652955119b012173
Author: Patrick Wendell <pwendell@gmail.com>
Date: Sat Feb 8 22:11:47 2014 -0800

    SPARK-1066: Add developer scripts to repository.
-
- Feb 08, 2014
Mark Hamstra authored
Version number to 1.0.0-SNAPSHOT

Since 0.9.0-incubating is done and out the door, we shouldn't be building 0.9.0-incubating-SNAPSHOT anymore. @pwendell

Author: Mark Hamstra <markhamstra@gmail.com>

== Merge branch commits ==

commit 1b00a8a7c1a7f251b4bb3774b84b9e64758eaa71
Author: Mark Hamstra <markhamstra@gmail.com>
Date: Wed Feb 5 09:30:32 2014 -0800

    Version number to 1.0.0-SNAPSHOT
-
Qiuzhuang Lian authored
Kill drivers in postStop() for Worker.

JIRA SPARK-1068: https://spark-project.atlassian.net/browse/SPARK-1068

Author: Qiuzhuang Lian <Qiuzhuang.Lian@gmail.com>

== Merge branch commits ==

commit 9c19ce63637eee9369edd235979288d3d9fc9105
Author: Qiuzhuang Lian <Qiuzhuang.Lian@gmail.com>
Date: Sat Feb 8 16:07:39 2014 +0800

    Kill drivers in postStop() for Worker. JIRA SPARK-1068: https://spark-project.atlassian.net/browse/SPARK-1068
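The shape of the fix, sketched as an Akka actor (field and type names assumed):

```scala
import akka.actor.Actor

object WorkerCleanupSketch {
  trait DriverRunner { def kill(): Unit }

  // The Worker tracks its running driver processes; killing them in
  // postStop() means no orphaned drivers outlive the worker actor.
  class Worker(drivers: Map[String, DriverRunner]) extends Actor {
    def receive = Actor.emptyBehavior
    override def postStop(): Unit = drivers.values.foreach(_.kill())
  }
}
```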
-
Jey Kottalam authored
Make sbt download an atomic operation

Modifies the `sbt/sbt` script to gracefully recover when a previous invocation died in the middle of downloading the SBT jar.

Author: Jey Kottalam <jey@cs.berkeley.edu>

== Merge branch commits ==

commit 6c600eb434a2f3e7d70b67831aeebde9b5c0f43b
Author: Jey Kottalam <jey@cs.berkeley.edu>
Date: Fri Jan 17 10:43:54 2014 -0800

    Make sbt download an atomic operation
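The shell fix transliterated as a sketch (java.nio, same idea: download to a temp file in the destination directory, then rename into place):

```scala
import java.net.URL
import java.nio.file.{Files, Paths, StandardCopyOption}

object AtomicDownload {
  // A rename within one directory is atomic on POSIX filesystems, so a
  // killed download leaves only a stray .part file, never a truncated
  // sbt-launch.jar that the next invocation would try to run.
  def download(url: String, dest: String): Unit = {
    val destPath = Paths.get(dest)
    val tmp = Files.createTempFile(destPath.getParent, "download-", ".part")
    val in = new URL(url).openStream()
    try Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING)
    finally in.close()
    Files.move(tmp, destPath, StandardCopyOption.ATOMIC_MOVE)
  }
}
```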
-
Martin Jaggi authored
TeX formulas in the documentation using MathJax, and splitting the MLlib documentation by technique

See JIRA https://spark-project.atlassian.net/browse/MLLIB-19 and https://github.com/shivaram/spark/compare/mathjax

Author: Martin Jaggi <m.jaggi@gmail.com>

== Merge branch commits ==

commit 0364bfabbfc347f917216057a20c39b631842481
Author: Martin Jaggi <m.jaggi@gmail.com>
Date: Fri Feb 7 03:19:38 2014 +0100

    minor polishing, as suggested by @pwendell

commit dcd2142c164b2f602bf472bb152ad55bae82d31a
Author: Martin Jaggi <m.jaggi@gmail.com>
Date: Thu Feb 6 18:04:26 2014 +0100

    enabling inline latex formulas with $.$
    same mathjax configuration as used in math.stackexchange.com
    sample usage in the linear algebra (SVD) documentation

commit bbafafd2b497a5acaa03a140bb9de1fbb7d67ffa
Author: Martin Jaggi <m.jaggi@gmail.com>
Date: Thu Feb 6 17:31:29 2014 +0100

    split MLlib documentation by techniques
    and linked from the main mllib-guide.md site

commit d1c5212b93c67436543c2d8ddbbf610fdf0a26eb
Author: Martin Jaggi <m.jaggi@gmail.com>
Date: Thu Feb 6 16:59:43 2014 +0100

    enable mathjax formula in the .md documentation files
    code by @shivaram

commit d73948db0d9bc36296054e79fec5b1a657b4eab4
Author: Martin Jaggi <m.jaggi@gmail.com>
Date: Thu Feb 6 16:57:23 2014 +0100

    minor update on how to compile the documentation
-