  1. Feb 20, 2014
    • SPARK-1114: Allow PySpark to use existing JVM and Gateway · 59b13795
      Ahir Reddy authored
      Patch to allow PySpark to use an existing JVM and Gateway. Changes the PySpark implementation of SparkConf to accept an existing SparkConf JVM handle, and changes the PySpark SparkContext to allow subclass-specific context initialization.
      
      Author: Ahir Reddy <ahirreddy@gmail.com>
      
      Closes #622 from ahirreddy/pyspark-existing-jvm and squashes the following commits:
      
      a86f457 [Ahir Reddy] Patch to allow PySpark to use existing JVM and Gateway. Changes to PySpark implementation of SparkConf to take existing SparkConf JVM handle. Change to PySpark SparkContext to allow subclass specific context initialization.
    • Super minor: Add require for mergeCombiners in combineByKey · 3fede483
      Aaron Davidson authored
      We changed the behavior in 0.9.0 from requiring that mergeCombiners be null when mapSideCombine was false to requiring that mergeCombiners *never* be null, for external sorting. This patch adds a require() so that this behavior change is explicitly messaged rather than surfacing as an NPE.
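
      A minimal standalone sketch of the fail-fast pattern described above (illustrative names, not the actual Spark source): validate mergeCombiners eagerly so a null argument surfaces as a clear IllegalArgumentException instead of a NullPointerException deep in external sorting.

      object RequireSketch {
        def combineByKey[K, V, C](
            createCombiner: V => C,
            mergeValue: (C, V) => C,
            mergeCombiners: (C, C) => C): Unit = {
          // Fail fast with a clear message instead of an NPE later.
          require(mergeCombiners != null, "mergeCombiners must be defined")
          // ... actual combine logic would follow ...
        }

        def main(args: Array[String]): Unit = {
          // Passing null now raises "requirement failed: mergeCombiners must be defined".
          try combineByKey[Int, Int, Int](v => v, _ + _, null)
          catch { case e: IllegalArgumentException => println(e.getMessage) }
        }
      }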
      
      Author: Aaron Davidson <aaron@databricks.com>
      
      Closes #623 from aarondav/master and squashes the following commits:
      
      520b80c [Aaron Davidson] Super minor: Add require for mergeCombiners in combineByKey
    • MLLIB-22. Support negative implicit input in ALS · 9e63f80e
      Sean Owen authored
      I'm back with another less trivial suggestion for ALS:
      
      In ALS for implicit feedback, input values are treated as weights on squared errors in a loss function (or rather, the weight is a simple function of the input r, like c = 1 + alpha*r). The paper on which it's based assumes that the input is positive. Indeed, if the input is negative, it will create a negative weight on squared errors, which causes things to go haywire. The optimization will try to make the error in a cell as large as possible, and the result is silently bogus.
      
      There is a good use case for negative input values, though. Implicit feedback is usually collected from signals of positive interaction like a view, a like, or a purchase, but it can equally come from "not interested" signals. The natural representation is negative values.
      
      The algorithm can be extended quite simply to provide a sound interpretation of these values: negative values should encourage the factorization to come up with 0 for cells with large negative input values, just as much as positive values encourage it to come up with 1.
      
      The implications for the algorithm are simple:
      * the confidence function value must not be negative, and so can become 1 + alpha*|r|
      * the matrix P should have a value 1 where the input R is _positive_, not merely where it is non-zero. Actually, that's what the paper already says; it's just that we can no longer assume P = 1 wherever a cell in R is specified, since it may be negative.
      
      This in turn entails just a few lines of code change in `ALS.scala`:
      * `rs(i)` becomes `abs(rs(i))`
      * When constructing `userXy(us(i))`, terms are implicitly added only where P is 1. Previously that held for every `us(i)` iterated over, since those were exactly the cells for which P is 1. But now P is zero where `rs(i) <= 0`, and those cells should not be added (see the sketch after this list).
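
      A minimal standalone sketch of the extended interpretation (names and the alpha value are illustrative, not the ALS.scala internals): confidence is built from |r|, so it is never negative, while the preference P is 1 only where r is strictly positive.

      import scala.math.abs

      object ImplicitPrefsSketch {
        val alpha = 40.0  // illustrative confidence scaling

        def confidence(r: Double): Double = 1.0 + alpha * abs(r)     // never negative
        def preference(r: Double): Double = if (r > 0) 1.0 else 0.0  // P = 1 only for positive R

        def main(args: Array[String]): Unit = {
          for (r <- Seq(5.0, 0.5, -3.0))
            println(f"r=$r%5.1f  c=${confidence(r)}%7.1f  p=${preference(r)}%3.1f")
        }
      }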
      
      I think it's a safe change because:
      * It doesn't change any existing behavior (unless you're using negative values, in which case results are already borked)
      * It's the simplest direct extension of the paper's algorithm
      * (I've used it to good effect in production FWIW)
      
      Tests included.
      
      I tweaked minor things en route:
      * `ALS.scala` javadoc writes "R = Xt*Y" when the paper and the rest of the code define it as "R = X*Yt"
      * RMSE in the ALS tests uses a confidence-weighted mean, but the denominator is not actually the sum of the weights
      
      Excuse my Scala style; I'm sure it needs tweaks.
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #500 from srowen/ALSNegativeImplicitInput and squashes the following commits:
      
      cf902a9 [Sean Owen] Support negative implicit input in ALS
      953be1c [Sean Owen] Make weighted RMSE in ALS test actually weighted; adjust comment about R = X*Yt
    • MLLIB-24: url of "Collaborative Filtering for Implicit Feedback Datasets" in ALS is invalid now · f9b7d64a
      Chen Chao authored
      url of "Collaborative Filtering for Implicit Feedback Datasets"  is invalid now. A new url is provided. http://research.yahoo.com/files/HuKorenVolinsky-ICDM08.pdf
      
      Author: Chen Chao <crazyjvm@gmail.com>
      
      Closes #619 from CrazyJvm/master and squashes the following commits:
      
      a0b54e4 [Chen Chao] change url to IEEE
      9e0e9f0 [Chen Chao] correct spell mistale
      fcfab5d [Chen Chao] wrap line to to fit within 100 chars
      590d56e [Chen Chao] url error
  2. Feb 19, 2014
  3. Feb 18, 2014
  4. Feb 17, 2014
    • SPARK-1098: Minor cleanup of ClassTag usage in Java API · f74ae0eb
      Aaron Davidson authored
      Our usage of fake ClassTags in this manner is probably not healthy, but I'm not sure if there's a better solution available, so I just cleaned up and documented the current one.
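
      For context, the idiom in question boils down to something like the following sketch (my reconstruction, not necessarily the exact source): a ClassTag for AnyRef is conjured and cast, which is tolerable only because these tags are used solely to instantiate arrays, where erasure makes the AnyRef runtime type acceptable.

      import scala.reflect.ClassTag

      object ClassTagSketch {
        // "Fake" ClassTag: pretends to know T, actually carries AnyRef.
        def fakeClassTag[T]: ClassTag[T] = ClassTag.AnyRef.asInstanceOf[ClassTag[T]]
      }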
      
      Author: Aaron Davidson <aaron@databricks.com>
      
      Closes #604 from aarondav/master and squashes the following commits:
      
      b398e89 [Aaron Davidson] SPARK-1098: Minor cleanup of ClassTag usage in Java API
    • [SPARK-1090] improvement on spark_shell (help information, configure memory) · e0d49ad2
      CodingCat authored
      https://spark-project.atlassian.net/browse/SPARK-1090
      
      spark-shell should print help information about its parameters and should allow the user to configure executor memory. There is no documentation about how to set --cores/-c in spark-shell.

      Also, users should be able to set executor memory through command-line options.

      In this PR I also check the format of the options passed by the user.
      
      Author: CodingCat <zhunansjtu@gmail.com>
      
      Closes #599 from CodingCat/spark_shell_improve and squashes the following commits:
      
      de5aa38 [CodingCat] add parameter to set driver memory
      915cbf8 [CodingCat] improvement on spark_shell (help information, configure memory)
    • Fix typos in Spark Streaming programming guide · 767e3ae1
      Andrew Or authored
      Author: Andrew Or <andrewor14@gmail.com>
      
      Closes #536 from andrewor14/streaming-typos and squashes the following commits:
      
      a05faa6 [Andrew Or] Fix broken link and wording
      bc2e4bc [Andrew Or] Merge github.com:apache/incubator-spark into streaming-typos
      d5515b4 [Andrew Or] TD's comments
      767ef12 [Andrew Or] Fix broken links
      8f4c731 [Andrew Or] Fix typos in programming guide
    • Worker registration logging fix · c0795cf4
      Andrew Ash authored
      Author: Andrew Ash <andrew@andrewash.com>
      
      Closes #608 from ash211/patch-7 and squashes the following commits:
      
      bd85f2a [Andrew Ash] Worker registration logging fix
  5. Feb 16, 2014
  6. Feb 14, 2014
    • Typo: Standlone -> Standalone · eec4bd1a
      Andrew Ash authored
      Author: Andrew Ash <andrew@andrewash.com>
      
      Closes #601 from ash211/typo and squashes the following commits:
      
      9cd43ac [Andrew Ash] Change docs references to metrics.properties, not metrics.conf
      3813ff1 [Andrew Ash] Typo: mulitcast -> multicast
      873bd2f [Andrew Ash] Typo: Standlone -> Standalone
  7. Feb 13, 2014
  8. Feb 12, 2014
    • Merge pull request #591 from mengxr/transient-new. · 7e29e027
      Xiangrui Meng authored
      SPARK-1076: [Fix #578] add @transient to some vals
      
      I'll try to be more careful next time.
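
      For context, a plain-Scala sketch of the pattern being applied (illustrative field names): @transient excludes a field from serialization, so driver-only references such as a parent RDD or a sampling seed are not shipped with the serialized object.

      // Illustrative only: `prev` is dropped during Java serialization,
      // while `seed` travels with the object.
      class Wrapper(@transient val prev: AnyRef, val seed: Long) extends Serializable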
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #591 and squashes the following commits:
      
      2b4f044 [Xiangrui Meng] add @transient to prev in ZippedWithIndexRDD add @transient to seed in PartitionwiseSampledRDD
    • Merge pull request #589 from mengxr/index. · 2bea0709
      Xiangrui Meng authored
      SPARK-1076: Convert Int to Long to avoid overflow
      
      Patch for PR #578.
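
      To see why the cast matters, a small illustration (values are made up): partition sizes fit in an Int, but their running sum can exceed Int.MaxValue.

      val counts = Seq(2000000000, 2000000000)   // two partitions of 2e9 records each
      val badSum = counts.reduce(_ + _)          // Int overflow: -294967296
      val goodSum = counts.map(_.toLong).sum     // correct: 4000000000L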
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #589 and squashes the following commits:
      
      98c435e [Xiangrui Meng] cast Int to Long to avoid Int overflow
    • Merge pull request #578 from mengxr/rank. · e733d655
      Xiangrui Meng authored
      SPARK-1076: zipWithIndex and zipWithUniqueId to RDD
      
      Assigning ranks to an ordered or unordered data set is a common operation. This could be done by first counting the records in each partition and then assigning ranks in parallel.

      The purpose of assigning ranks to an unordered set is usually to get a unique id for each item, e.g., to map feature names to feature indices. In such cases, the assignment could be done without counting records, saving one Spark job.
      
      https://spark-project.atlassian.net/browse/SPARK-1076
      
      == update ==
      Because assigning ranks is very similar to Scala's zipWithIndex, I changed the method name to zipWithIndex and put the index in the value field.
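
      A plain-Scala sketch of the two strategies over pre-partitioned data (illustrative, not the RDD implementation): zipWithIndex pays one counting pass to compute per-partition start offsets, while zipWithUniqueId derives ids directly from the partition index without counting.

      object ZipSketch {
        // zipWithIndex: count each partition, prefix-sum the counts, then offset.
        def zipWithIndex[T](partitions: Seq[Seq[T]]): Seq[Seq[(T, Long)]] = {
          val starts = partitions.map(_.size.toLong).scanLeft(0L)(_ + _)
          partitions.zip(starts).map { case (part, start) =>
            part.zipWithIndex.map { case (t, i) => (t, start + i) }
          }
        }

        // zipWithUniqueId: no counting job; partition k yields ids k, n+k, 2n+k, ...
        def zipWithUniqueId[T](partitions: Seq[Seq[T]]): Seq[Seq[(T, Long)]] = {
          val n = partitions.size
          partitions.zipWithIndex.map { case (part, k) =>
            part.zipWithIndex.map { case (t, i) => (t, i.toLong * n + k) }
          }
        }
      }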
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #578 and squashes the following commits:
      
      52a05e1 [Xiangrui Meng] changed assignRanks to zipWithIndex changed assignUniqueIds to zipWithUniqueId minor updates
      756881c [Xiangrui Meng] simplified RankedRDD by implementing assignUniqueIds separately moved couting iterator size to Utils do not count items in the last partition and skip counting if there is only one partition
      630868c [Xiangrui Meng] newline
      21b434b [Xiangrui Meng] add assignRanks and assignUniqueIds to RDD
    • Merge pull request #583 from colorant/zookeeper. · 68b2c0d0
      Raymond Liu authored
      Minor fix for ZooKeeperPersistenceEngine to use configured working dir
      
      Author: Raymond Liu <raymond.liu@intel.com>
      
      Closes #583 and squashes the following commits:
      
      91b0609 [Raymond Liu] Minor fix for ZooKeeperPersistenceEngine to use configured working dir
  9. Feb 11, 2014
    • Merge pull request #571 from holdenk/switchtobinarysearch. · b0dab1bb
      Holden Karau authored
      SPARK-1072 Use binary search when needed in RangePartitioner
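
      A minimal sketch of the idea (names are illustrative; the 1000-element threshold comes from the commit below): under the cutoff the partitioner scans the sorted range bounds linearly, above it java.util.Arrays.binarySearch locates the target partition.

      import java.util.Arrays

      object RangeSearchSketch {
        def partitionFor(rangeBounds: Array[Int], key: Int): Int = {
          if (rangeBounds.length <= 1000) {
            var p = 0
            while (p < rangeBounds.length && key > rangeBounds(p)) p += 1  // linear scan
            p
          } else {
            val i = Arrays.binarySearch(rangeBounds, key)
            if (i >= 0) i else -i - 1  // negative result encodes the insertion point
          }
        }
      }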
      
      Author: Holden Karau <holden@pigscanfly.ca>
      
      Closes #571 and squashes the following commits:
      
      f31a2e1 [Holden Karau] Swith to using CollectionsUtils in Partitioner
      4c7a0c3 [Holden Karau] Add CollectionsUtil as suggested by aarondav
      7099962 [Holden Karau] Add the binary search to only init once
      1bef01d [Holden Karau] CR feedback
      a21e097 [Holden Karau] Use binary search if we have more than 1000 elements inside of RangePartitioner
    • Merge pull request #577 from hsaputra/fix_simple_streaming_doc. · ba38d989
      Henry Saputra authored
      SPARK-1075 Fix doc for the Spark Streaming custom receiver: closing bracket in the class constructor
      
      The closing parenthesis of the constructor in the first code block example is reversed:
      diff --git a/docs/streaming-custom-receivers.md b/docs/streaming-custom-receivers.md
      index 4e27d65..3fb540c 100644
      --- a/docs/streaming-custom-receivers.md
      +++ b/docs/streaming-custom-receivers.md
      @@ -14,7 +14,7 @@ This starts with implementing [NetworkReceiver](api/streaming/index.html#org.apa
       The following is a simple socket text-stream receiver.
       {% highlight scala %}
      -class SocketTextStreamReceiver(host: String, port: Int(
      +class SocketTextStreamReceiver(host: String, port: Int)
       extends NetworkReceiver[String]
       {
       protected lazy val blocksGenerator: BlockGenerator =
      
      Author: Henry Saputra <henry@platfora.com>
      
      Closes #577 and squashes the following commits:
      
      6508341 [Henry Saputra] SPARK-1075 Fix doc in the Spark Streaming custom receiver.
    • Merge pull request #579 from CrazyJvm/patch-1. · 4afe6ccf
      Chen Chao authored
      "in the source DStream" rather than "int the source DStream"
      
      "flatMap is a one-to-many DStream operation that creates a new DStream by generating multiple new records from each record int the source DStream."
      
      Author: Chen Chao <crazyjvm@gmail.com>
      
      Closes #579 and squashes the following commits:
      
      4abcae3 [Chen Chao] in the source DStream
  10. Feb 10, 2014
  11. Feb 09, 2014
    • Merge pull request #566 from martinjaggi/copy-MLlib-d. · 2182aa3c
      Martin Jaggi authored
      new MLlib documentation for optimization, regression and classification
      
      New documentation with TeX formulas, hopefully improving the usability and reproducibility of the offered MLlib methods. Also made some minor changes in the code for consistency; the Scala tests pass.

      This is the rebased branch; I deleted the old PR.

      JIRA:
      https://spark-project.atlassian.net/browse/MLLIB-19
      
      Author: Martin Jaggi <m.jaggi@gmail.com>
      
      Closes #566 and squashes the following commits:
      
      5f0f31e [Martin Jaggi] line wrap at 100 chars
      4e094fb [Martin Jaggi] better description of GradientDescent
      1d6965d [Martin Jaggi] remove broken url
      ea569c3 [Martin Jaggi] telling what updater actually does
      964732b [Martin Jaggi] lambda R() in documentation
      a6c6228 [Martin Jaggi] better comments in SGD code for regression
      b32224a [Martin Jaggi] new optimization documentation
      d5dfef7 [Martin Jaggi] new classification and regression documentation
      b07ead6 [Martin Jaggi] correct scaling for MSE loss
      ba6158c [Martin Jaggi] use d for the number of features
      bab2ed2 [Martin Jaggi] renaming LeastSquaresGradient
    • Merge pull request #551 from qqsun8819/json-protocol. · afc8f3cb
      qqsun8819 authored
      [SPARK-1038] Add more fields in JsonProtocol and add tests that verify the JSON itself
      
      This is a PR for SPARK-1038. Two major changes:
      1. Add some fields to JsonProtocol that are new and important to standalone-related data structures.
      2. Use Diff in liftweb.json to verify the stringified JSON output, detecting cases where someone changes a type T to Option[T] (a small sketch follows).
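
      A small sketch of the Diff-based check in (2), with made-up JSON literals: lift-json's diff exposes the changed, added, and deleted fields between two parsed values, so a field silently disappearing (e.g. becoming an unset Option[T]) shows up in `deleted`.

      import net.liftweb.json._

      object JsonDiffSketch {
        def main(args: Array[String]): Unit = {
          val expected = parse("""{"id":"app-1","cores":4}""")
          val actual   = parse("""{"id":"app-1"}""")
          val Diff(changed, added, deleted) = expected diff actual
          // `deleted` now carries the missing "cores" field.
          println(s"changed=$changed added=$added deleted=$deleted")
        }
      }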
      
      Author: qqsun8819 <jin.oyj@alibaba-inc.com>
      
      Closes #551 and squashes the following commits:
      
      fdf0b4e [qqsun8819] [SPARK-1038] 1. Change code style for more readable according to rxin review 2. change submitdate hard-coded string to a date object toString for more complexiblity
      095a26f [qqsun8819] [SPARK-1038] mod according to  review of pwendel, use hard-coded json string for json data validation. Each test use its own json string
      0524e41 [qqsun8819] Merge remote-tracking branch 'upstream/master' into json-protocol
      d203d5c [qqsun8819] [SPARK-1038] Add more fields in JsonProtocol and add tests that verify the JSON itself
    • Merge pull request #569 from pwendell/merge-fixes. · 94ccf869
      Patrick Wendell authored
      Fixes bug where merges won't close associated pull request.
      
      Previously we added "Closes #XX" in the title. GitHub will sometimes
      line-break the title in a way that causes this to not work. This patch
      instead adds the line in the body.
      
      This also makes the commit format more concise for merge commits.
      We might consider just dropping those in the future.
      
      Author: Patrick Wendell <pwendell@gmail.com>
      
      Closes #569 and squashes the following commits:
      
      732eba1 [Patrick Wendell] Fixes bug where merges won't close associated pull request.
    • Merge pull request #557 from ScrapCodes/style. Closes #557. · b69f8b2a
      Patrick Wendell authored
      SPARK-1058, Fix Style Errors and Add Scala Style to Spark Build.
      
      Author: Patrick Wendell <pwendell@gmail.com>
      Author: Prashant Sharma <scrapcodes@gmail.com>
      
      == Merge branch commits ==
      
      commit 1a8bd1c059b842cb95cc246aaea74a79fec684f4
      Author: Prashant Sharma <scrapcodes@gmail.com>
      Date:   Sun Feb 9 17:39:07 2014 +0530
      
          scala style fixes
      
      commit f91709887a8e0b608c5c2b282db19b8a44d53a43
      Author: Patrick Wendell <pwendell@gmail.com>
      Date:   Fri Jan 24 11:22:53 2014 -0800
      
          Adding scalastyle snapshot
    • Merge pull request #556 from CodingCat/JettyUtil. Closes #556. · b6dba10a
      CodingCat authored
      [SPARK-1060] startJettyServer should explicitly use IP information
      
      https://spark-project.atlassian.net/browse/SPARK-1060
      
      In the current implementation, the webserver in Master/Worker is started with
      
      val (srv, bPort) = JettyUtils.startJettyServer("0.0.0.0", port, handlers)
      
      inside startJettyServer:
      
      val server = new Server(currentPort) //here, the Server will take "0.0.0.0" as the hostname, i.e. will always bind to the IP address of the first NIC
      
      This can cause the wrong IP binding: e.g., if the host has two NICs, N1 and N2, and the user specifies SPARK_LOCAL_IP as N2's IP address, then when starting the web server, for the reason stated above, it will still bind to N1's address.
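
      A hedged sketch of the fix (plain Jetty API usage, not the actual JettyUtils code): constructing the Server from an explicit host:port address makes it bind to the configured interface instead of all NICs or the first one.

      import java.net.InetSocketAddress
      import org.eclipse.jetty.server.Server

      object JettyBindSketch {
        def startBoundServer(hostName: String, port: Int): Server = {
          // Bind explicitly to hostName (e.g. SPARK_LOCAL_IP) rather than 0.0.0.0.
          val server = new Server(new InetSocketAddress(hostName, port))
          server.start()
          server
        }
      }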
      
      Author: CodingCat <zhunansjtu@gmail.com>
      
      == Merge branch commits ==
      
      commit 6c6d9a8ccc9ec4590678a3b34cb03df19092029d
      Author: CodingCat <zhunansjtu@gmail.com>
      Date:   Thu Feb 6 14:53:34 2014 -0500
      
          startJettyServer should explicitly use IP information
    • Merge pull request #562 from jyotiska/master. Closes #562. · 2ef37c93
      jyotiska authored
      Added example Python code for sort
      
      I added example Python code for sorting. Right now, PySpark has limited examples for new people wanting to use the project. This example code sorts integers stored in a file. I was able to sort 5 million, 10 million, and 25 million integers with this code.
      
      Author: jyotiska <jyotiska123@gmail.com>
      
      == Merge branch commits ==
      
      commit 8ad8faf6c8e02ae1cd68565d98524edf165f54df
      Author: jyotiska <jyotiska123@gmail.com>
      Date:   Sun Feb 9 11:00:41 2014 +0530
      
          Added comments in code on collect() method
      
      commit 6f98f1e313f4472a7c2207d36c4f0fbcebc95a8c
      Author: jyotiska <jyotiska123@gmail.com>
      Date:   Sat Feb 8 13:12:37 2014 +0530
      
          Updated python example code sort.py
      
      commit 945e39a5d68daa7e5bab0d96cbd35d7c4b04eafb
      Author: jyotiska <jyotiska123@gmail.com>
      Date:   Sat Feb 8 12:59:09 2014 +0530
      
          Added example python code for sort
    • Merge pull request #560 from pwendell/logging. Closes #560. · b6d40b78
      Patrick Wendell authored
      [WIP] SPARK-1067: Default log4j initialization causes errors for those not using log4j
      
      To fix this, we add a check when initializing log4j.
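
      A hedged sketch of that check (the method and resource names are my own; log4j 1.x API): only install a default configuration when the root logger has no appenders, i.e. when the user has not configured log4j themselves.

      import org.apache.log4j.{LogManager, PropertyConfigurator}

      object LoggingInit {
        def initializeLogIfNecessary(): Unit = {
          val rootLogger = LogManager.getRootLogger
          // If any appender is present, the user configured log4j; leave it alone.
          if (!rootLogger.getAllAppenders.hasMoreElements) {
            PropertyConfigurator.configure(getClass.getResource("/log4j-defaults.properties"))
          }
        }
      }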
      
      Author: Patrick Wendell <pwendell@gmail.com>
      
      == Merge branch commits ==
      
      commit ffdce513877f64b6eed6d36138c3e0003d392889
      Author: Patrick Wendell <pwendell@gmail.com>
      Date:   Fri Feb 7 15:22:29 2014 -0800
      
          Logging fix
    • Merge pull request #565 from pwendell/dev-scripts. Closes #565. · f892da87
      Patrick Wendell authored
      SPARK-1066: Add developer scripts to repository.
      
      These are some developer scripts I've been maintaining in a separate public repo. This patch adds them to the Spark repository so they can evolve here and are clearly accessible to all committers.
      
      I may do some small additional clean-up in this PR, but wanted to put them here in case others want to review. There are a few types of scripts here:
      
      1. A tool to merge pull requests.
      2. A script for packaging releases.
      3. A script for auditing release candidates.
      
      Author: Patrick Wendell <pwendell@gmail.com>
      
      == Merge branch commits ==
      
      commit 5d5d331d01f6fd59c2eb830f652955119b012173
      Author: Patrick Wendell <pwendell@gmail.com>
      Date:   Sat Feb 8 22:11:47 2014 -0800
      
          SPARK-1066: Add developer scripts to repository.
  12. Feb 08, 2014
    • Merge pull request #542 from markhamstra/versionBump. Closes #542. · c2341c92
      Mark Hamstra authored
      Version number to 1.0.0-SNAPSHOT
      
      Since 0.9.0-incubating is done and out the door, we shouldn't be building 0.9.0-incubating-SNAPSHOT anymore.
      
      @pwendell
      
      Author: Mark Hamstra <markhamstra@gmail.com>
      
      == Merge branch commits ==
      
      commit 1b00a8a7c1a7f251b4bb3774b84b9e64758eaa71
      Author: Mark Hamstra <markhamstra@gmail.com>
      Date:   Wed Feb 5 09:30:32 2014 -0800
      
          Version number to 1.0.0-SNAPSHOT