  2. Jun 06, 2016
      [MINOR] Fix Typos 'an -> a' · fd8af397
      Zheng RuiFeng authored
      ## What changes were proposed in this pull request?
      
      `an -> a`
      
      Use cmds like `find . -name '*.R' | xargs -i sh -c "grep -in ' an [^aeiou]' {} && echo {}"` to generate candidates, and review them one by one.
      
      ## How was this patch tested?
      manual tests
      
      Author: Zheng RuiFeng <ruifengz@foxmail.com>
      
      Closes #13515 from zhengruifeng/an_a.
  3. May 09, 2016
      [SPARK-15136][PYSPARK][DOC] Fix links to sphinx style and add a default param doc note · 12fe2ecd
      Holden Karau authored
      ## What changes were proposed in this pull request?
      
      PyDoc links in ml are in a non-standard format. Switch to the standard Sphinx link format for better-formatted documentation. Also add a note about the default value in one place, and copy some extended docs from Scala for GBT.
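      For reference, the standard Sphinx cross-reference roles such docstrings use look like the following (the class and attribute names here are only illustrative examples, not necessarily the ones touched by this PR):

```
:py:class:`pyspark.ml.classification.LogisticRegression`
:py:attr:`maxDepth`
```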
      
      ## How was this patch tested?
      
      Built docs locally.
      
      Author: Holden Karau <holden@us.ibm.com>
      
      Closes #12918 from holdenk/SPARK-15137-linkify-pyspark-ml-classification.
  4. Apr 04, 2016
      [SPARK-14368][PYSPARK] Support python.spark.worker.memory with upper-case unit. · 7db56244
      Yong Tang authored
      ## What changes were proposed in this pull request?
      
      This fix addresses the issue in PySpark where `spark.python.worker.memory`
      could only be configured with a lower-case unit (`k`, `m`, `g`, `t`). It
      allows the upper-case units (`K`, `M`, `G`, `T`) as well, to conform to
      the JVM memory string format as specified in the documentation.
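      The behavior can be sketched with a simplified stand-in for the memory-string helper (illustrative only; the real helper lives in PySpark's internals), where lower-casing the unit character is the essence of the fix:

```python
def parse_memory(s):
    """Parse a JVM-style memory string (e.g. '512m', '2G') into MiB.

    Lower-casing the unit character before the lookup is what makes
    both '2g' and '2G' acceptable.
    """
    units = {'k': 1.0 / 1024, 'm': 1, 'g': 1024, 't': 1024 * 1024}
    unit = s[-1].lower()
    if unit not in units:
        raise ValueError("invalid format: " + s)
    return int(float(s[:-1]) * units[unit])
```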
      
      ## How was this patch tested?
      
      This fix adds additional tests to cover the changes.
      
      Author: Yong Tang <yong.tang.github@outlook.com>
      
      Closes #12163 from yongtang/SPARK-14368.
      [SPARK-14334] [SQL] add toLocalIterator for Dataset/DataFrame · cc70f174
      Davies Liu authored
      ## What changes were proposed in this pull request?
      
      RDD.toLocalIterator() can be used to fetch one partition at a time to reduce memory usage. Right now, for a Dataset/DataFrame we have to use df.rdd.toLocalIterator, which is super slow and also requires lots of memory (because of the Java serializer, or even the Kryo serializer).

      This PR introduces an optimized toLocalIterator for Dataset/DataFrame, which is much faster and requires much less memory. For a partition with 5 million rows, `df.rdd.toLocalIterator` took about 100 seconds, but `df.toLocalIterator` took less than 7 seconds. For 10 million rows, `rdd.toLocalIterator` crashed (not enough memory) with a 4G heap, but `df.toLocalIterator` finished in 12 seconds.

      The JDBC server has been updated to use DataFrame.toLocalIterator.
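      The memory behavior can be illustrated with a plain-Python sketch (a toy model, not Spark's implementation): rows are yielded lazily one partition at a time, so peak memory is one partition rather than the whole result.

```python
def to_local_iterator(partitions):
    # Yield rows lazily, one partition at a time; in Spark each
    # partition would be fetched by a separate job on demand, so only
    # one partition's worth of rows is materialized at once.
    for part in partitions:
        for row in part:
            yield row
```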
      
      ## How was this patch tested?
      
      Existing tests.
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #12114 from davies/local_iterator.
  5. Feb 24, 2016
      [SPARK-13467] [PYSPARK] abstract python function to simplify pyspark code · a60f9128
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      When we pass a Python function to the JVM side, we also need to send its context, e.g. `envVars`, `pythonIncludes`, `pythonExec`, etc. However, it's annoying to pass around so many parameters in so many places. This PR abstracts the Python function along with its context, to simplify some PySpark code and make the logic clearer.
      
      ## How was this patch tested?
      
      by existing unit tests.
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #11342 from cloud-fan/python-clean.
  9. Dec 18, 2015
      [SPARK-12091] [PYSPARK] Deprecate the JAVA-specific deserialized storage levels · 499ac3e6
      gatorsmile authored
      The current default storage level of the Python persist API is MEMORY_ONLY_SER. This differs from the default level MEMORY_ONLY in the official documentation and the RDD APIs.
      
      davies Is this inconsistency intentional? Thanks!
      
      Updates: Since the data is always serialized on the Python side, the JAVA-specific deserialized storage levels (such as MEMORY_ONLY) are not removed.
      
      Updates: Based on the reviewers' feedback. In Python, stored objects will always be serialized with the [Pickle](https://docs.python.org/2/library/pickle.html) library, so it does not matter whether you choose a serialized level. The available storage levels in Python include `MEMORY_ONLY`, `MEMORY_ONLY_2`, `MEMORY_AND_DISK`, `MEMORY_AND_DISK_2`, `DISK_ONLY`, `DISK_ONLY_2` and `OFF_HEAP`.
      
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #10092 from gatorsmile/persistStorageLevel.
  12. Sep 22, 2015
      [SPARK-9821] [PYSPARK] pyspark-reduceByKey-should-take-a-custom-partitioner · 1cd67415
      Holden Karau authored
      from the issue:
      
      In Scala, I can supply a custom partitioner to reduceByKey (and other aggregation/repartitioning methods like aggregateByKey and combineByKey), but as far as I can tell from the PySpark API, there's no way to do the same in Python.
      Here's an example of my code in Scala:
      weblogs.map(s => (getFileType(s), 1)).reduceByKey(new FileTypePartitioner(),_+_)
      But I can't figure out how to do the same in Python. The closest I can get is to call repartition before reduceByKey like so:
      weblogs.map(lambda s: (getFileType(s), 1)).partitionBy(3,hash_filetype).reduceByKey(lambda v1,v2: v1+v2).collect()
      But that defeats the purpose, because I'm shuffling twice instead of once, so my performance is worse instead of better.
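      The single-shuffle behavior the issue asks for can be sketched in plain Python (a toy model, not PySpark's implementation): route each key with the custom partitioner and reduce within its partition in one pass, instead of partitioning and then shuffling again.

```python
def reduce_by_key(pairs, func, num_partitions, partition_func):
    # One simulated shuffle: the custom partitioner decides which
    # partition a key lands in, and values for the same key are
    # reduced inside that partition as they arrive.
    parts = [{} for _ in range(num_partitions)]
    for k, v in pairs:
        d = parts[partition_func(k) % num_partitions]
        d[k] = func(d[k], v) if k in d else v
    return [kv for d in parts for kv in d.items()]
```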
      
      Author: Holden Karau <holden@pigscanfly.ca>
      
      Closes #8569 from holdenk/SPARK-9821-pyspark-reduceByKey-should-take-a-custom-partitioner.
  13. Sep 19, 2015
      [SPARK-10710] Remove ability to disable spilling in core and SQL · 2117eea7
      Josh Rosen authored
      It does not make much sense to set `spark.shuffle.spill` or `spark.sql.planner.externalSort` to false: I believe that these configurations were initially added as "escape hatches" to guard against bugs in the external operators, but these operators are now mature and well-tested. In addition, these configurations are not handled in a consistent way anymore: SQL's Tungsten codepath ignores these configurations and will continue to use spilling operators. Similarly, Spark Core's `tungsten-sort` shuffle manager does not respect `spark.shuffle.spill=false`.
      
      This pull request removes these configurations, adds warnings at the appropriate places, and deletes a large amount of code which was only used in code paths that did not support spilling.
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #8831 from JoshRosen/remove-ability-to-disable-spilling.
  16. Jul 22, 2015
      [SPARK-9144] Remove DAGScheduler.runLocallyWithinThread and spark.localExecution.enabled · b217230f
      Josh Rosen authored
      Spark has an option called spark.localExecution.enabled; according to the docs:
      
      > Enables Spark to run certain jobs, such as first() or take() on the driver, without sending tasks to the cluster. This can make certain jobs execute very quickly, but may require shipping a whole partition of data to the driver.
      
      This feature ends up adding quite a bit of complexity to DAGScheduler, especially in the runLocallyWithinThread method, but as far as I know nobody uses this feature (I searched the mailing list and haven't seen any recent mentions of the configuration nor stacktraces including the runLocally method). As a step towards scheduler complexity reduction, I propose that we remove this feature and all code related to it for Spark 1.5.
      
      This pull request simply brings #7484 up to date.
      
      Author: Josh Rosen <joshrosen@databricks.com>
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #7585 from rxin/remove-local-exec and squashes the following commits:
      
      84bd10e [Reynold Xin] Python fix.
      1d9739a [Reynold Xin] Merge pull request #7484 from JoshRosen/remove-localexecution
      eec39fa [Josh Rosen] Remove allowLocal(); deprecate user-facing uses of it.
      b0835dc [Josh Rosen] Remove local execution code in DAGScheduler
      8975d96 [Josh Rosen] Remove local execution tests.
      ffa8c9b [Josh Rosen] Remove documentation for configuration
  17. Jul 19, 2015
      [SPARK-9021] [PYSPARK] Change RDD.aggregate() to do reduce(mapPartitions())... · a803ac3e
      Nicholas Hwang authored
      [SPARK-9021] [PYSPARK] Change RDD.aggregate() to do reduce(mapPartitions()) instead of mapPartitions.fold()
      
      I'm relatively new to Spark and functional programming, so forgive me if this pull request is just a result of my misunderstanding of how Spark should be used.
      
      Currently, if one happens to use a mutable object as `zeroValue` for `RDD.aggregate()`, possibly unexpected behavior can occur.
      
      This is because pyspark's current implementation of `RDD.aggregate()` does not serialize or make a copy of `zeroValue` before handing it off to `RDD.mapPartitions(...).fold(...)`. This results in a single reference to `zeroValue` being used for both `RDD.mapPartitions()` and `RDD.fold()` on each partition. This can result in strange accumulator values being fed into each partition's call to `RDD.fold()`, as the `zeroValue` may have been changed in-place during the `RDD.mapPartitions()` call.
      
      As an illustrative example, submit the following to `spark-submit`:
      ```
      from pyspark import SparkConf, SparkContext
      import collections
      
      def updateCounter(acc, val):
          print 'update acc:', acc
          print 'update val:', val
          acc[val] += 1
          return acc
      
      def comboCounter(acc1, acc2):
          print 'combo acc1:', acc1
          print 'combo acc2:', acc2
          acc1.update(acc2)
          return acc1
      
      def main():
          conf = SparkConf().setMaster("local").setAppName("Aggregate with Counter")
          sc = SparkContext(conf = conf)
      
          print '======= AGGREGATING with ONE PARTITION ======='
          print sc.parallelize(range(1,10), 1).aggregate(collections.Counter(), updateCounter, comboCounter)
      
          print '======= AGGREGATING with TWO PARTITIONS ======='
          print sc.parallelize(range(1,10), 2).aggregate(collections.Counter(), updateCounter, comboCounter)
      
      if __name__ == "__main__":
          main()
      ```
      
      One probably expects this to output the following:
      ```
      Counter({1: 1, 2: 1, 3: 1, 4: 1, 5: 1, 6: 1, 7: 1, 8: 1, 9: 1})
      ```
      
      But it instead outputs this (regardless of the number of partitions):
      ```
      Counter({1: 2, 2: 2, 3: 2, 4: 2, 5: 2, 6: 2, 7: 2, 8: 2, 9: 2})
      ```
      
      This is because (I believe) `zeroValue` gets passed correctly to each partition, but after `RDD.mapPartitions()` completes, the `zeroValue` object has been updated and is then passed to `RDD.fold()`, which results in all items being double-counted within each partition before being finally reduced at the calling node.
      
      I realize that this type of calculation is typically done by `RDD.mapPartitions(...).reduceByKey(...)`, but hopefully this illustrates some potentially confusing behavior. I also noticed that other `RDD` methods use this `deepcopy` approach to creating unique copies of `zeroValue` (i.e., `RDD.aggregateByKey()` and `RDD.foldByKey()`), and that the Scala implementations do seem to serialize the `zeroValue` object appropriately to prevent this type of behavior.
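      The fix's idea — give every partition (and the final merge) its own copy of `zeroValue` — can be sketched like this (a simplified model of the corrected behavior, not the actual PySpark code):

```python
import copy


def aggregate(partitions, zero_value, seq_op, comb_op):
    # Deep-copy zeroValue for each partition so in-place mutation
    # inside seq_op cannot leak into other partitions or into the
    # final merge, which was the bug described above.
    partials = []
    for part in partitions:
        acc = copy.deepcopy(zero_value)
        for item in part:
            acc = seq_op(acc, item)
        partials.append(acc)
    result = copy.deepcopy(zero_value)
    for p in partials:
        result = comb_op(result, p)
    return result
```

      With a mutable `collections.Counter` as the zero value, this version produces the expected single counts rather than the double-counted result shown above.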
      
      Author: Nicholas Hwang <moogling@gmail.com>
      
      Closes #7378 from njhwang/master and squashes the following commits:
      
      659bb27 [Nicholas Hwang] Fixed RDD.aggregate() to perform a reduce operation on collected mapPartitions results, similar to how fold currently is implemented. This prevents an initial combOp being performed on each partition with zeroValue (which leads to unexpected behavior if zeroValue is a mutable object) before being combOp'ed with other partition results.
      8d8d694 [Nicholas Hwang] Changed dict construction to be compatible with Python 2.6 (cannot use list comprehensions to make dicts)
      56eb2ab [Nicholas Hwang] Fixed whitespace after colon to conform with PEP8
      391de4a [Nicholas Hwang] Removed used of collections.Counter from RDD tests for Python 2.6 compatibility; used defaultdict(int) instead. Merged treeAggregate test with mutable zero value into aggregate test to reduce code duplication.
      2fa4e4b [Nicholas Hwang] Merge branch 'master' of https://github.com/njhwang/spark
      ba528bd [Nicholas Hwang] Updated comments regarding protection of zeroValue from mutation in RDD.aggregate(). Added regression tests for aggregate(), fold(), aggregateByKey(), foldByKey(), and treeAggregate(), all with both 1 and 2 partition RDDs. Confirmed that aggregate() is the only problematic implementation as of commit 257236c3. Also replaced some parallelizations of ranges with xranges, per the documentation's recommendations of preferring xrange over range.
      7820391 [Nicholas Hwang] Updated comments regarding protection of zeroValue from mutation in RDD.aggregate(). Added regression tests for aggregate(), fold(), aggregateByKey(), foldByKey(), and treeAggregate(), all with both 1 and 2 partition RDDs. Confirmed that aggregate() is the only problematic implementation as of commit 257236c3.
      90d1544 [Nicholas Hwang] Made sure RDD.aggregate() makes a deepcopy of zeroValue for all partitions; this ensures that the mapPartitions call works with unique copies of zeroValue in each partition, and prevents a single reference to zeroValue being used for both map and fold calls on each partition (resulting in possibly unexpected behavior).
  18. Jul 10, 2015
      [SPARK-7735] [PYSPARK] Raise Exception on non-zero exit from pipe commands · 6e1c7e27
      Scott Taylor authored
      This will allow problems with piped commands to be detected.
      This will also allow tasks to be retried where errors are rare (such as network problems in piped commands).
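      The idea — surface a non-zero exit status from the piped command as a task failure — can be sketched as a standalone illustration (not the actual `rdd.pipe` code; assumes a POSIX shell):

```python
import subprocess


def pipe(lines, command):
    # Feed the lines to an external command and collect its output;
    # raise if the command exits non-zero, so the task fails (and can
    # be retried) instead of silently returning partial output.
    proc = subprocess.Popen(command, shell=True,
                            stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    out, _ = proc.communicate(("\n".join(lines) + "\n").encode())
    if proc.returncode != 0:
        raise RuntimeError("Pipe command %r exited with %d"
                           % (command, proc.returncode))
    return out.decode().splitlines()
```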
      
      Author: Scott Taylor <github@megatron.me.uk>
      
      Closes #6262 from megatron-me-uk/patch-2 and squashes the following commits:
      
      04ae1d5 [Scott Taylor] Remove spurious empty line
      98fa101 [Scott Taylor] fix blank line style error
      574b564 [Scott Taylor] Merge pull request #2 from megatron-me-uk/patch-4
      0c1e762 [Scott Taylor] Update rdd pipe method for checkCode
      ab9a2e1 [Scott Taylor] Update rdd pipe tests for checkCode
      eb4801c [Scott Taylor] fix fail_condition
      b0ac3a4 [Scott Taylor] Merge pull request #1 from megatron-me-uk/megatron-me-uk-patch-1
      a307d13 [Scott Taylor] update rdd tests to test pipe modes
      34fcdc3 [Scott Taylor] add optional argument 'mode' for rdd.pipe
      a0c0161 [Scott Taylor] fix generator issue
      8a9ef9c [Scott Taylor] make check_return_code an iterator
      0486ae3 [Scott Taylor] style fixes
      8ed89a6 [Scott Taylor] Chain generators to prevent potential deadlock
      4153b02 [Scott Taylor] fix list.sort returns None
      491d3fc [Scott Taylor] Pass a function handle to assertRaises
      3344a21 [Scott Taylor] wrap assertRaises with QuietTest
      3ab8c7a [Scott Taylor] remove whitespace for style
      cc1a73d [Scott Taylor] fix style issues in pipe test
      8db4073 [Scott Taylor] Add a test for rdd pipe functions
      1b3dc4e [Scott Taylor] fix missing space around operator style
      0974f98 [Scott Taylor] add space between words in multiline string
      45f4977 [Scott Taylor] fix line too long style error
      5745d85 [Scott Taylor] Remove space to fix style
      f552d49 [Scott Taylor] Catch non-zero exit from pipe commands
  19. Jun 30, 2015
      [SPARK-8738] [SQL] [PYSPARK] capture SQL AnalysisException in Python API · 58ee2a2e
      Davies Liu authored
      Capture the AnalysisException in SQL, hide the long java stack trace, only show the error message.
      
      cc rxin
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #7135 from davies/ananylis and squashes the following commits:
      
      dad7ae7 [Davies Liu] add comment
      ec0c0e8 [Davies Liu] Update utils.py
      cdd7edd [Davies Liu] add doc
      7b044c2 [Davies Liu] fix python 3
      f84d3bd [Davies Liu] capture SQL AnalysisException in Python API
  20. Jun 29, 2015
      [SPARK-7810] [PYSPARK] solve python rdd socket connection problem · ecd3aacf
      Ai He authored
      Method "_load_from_socket" in rdd.py cannot load data from the JVM socket when IPv6 is used; the current method only works well with IPv4. The new modification works with both protocols.
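      One common way to handle both protocols is `socket.create_connection`, which tries every (family, address) pair returned by `getaddrinfo` in turn. The sketch below is a general illustration of that approach, not necessarily the exact change made in rdd.py:

```python
import socket


def load_from_socket(port, host="localhost"):
    # create_connection() iterates over the results of getaddrinfo(),
    # so it succeeds whether the host resolves to an IPv4 or an IPv6
    # address, instead of hard-coding one address family.
    sock = socket.create_connection((host, port), timeout=10)
    return sock.makefile("rb")
```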
      
      Author: Ai He <ai.he@ussuning.com>
      Author: AiHe <ai.he@ussuning.com>
      
      Closes #6338 from AiHe/pyspark-networking-issue and squashes the following commits:
      
      d4fc9c4 [Ai He] handle code review 2
      e75c5c8 [Ai He] handle code review
      5644953 [AiHe] solve python rdd socket connection problem to jvm
  21. Jun 23, 2015
      [SPARK-8541] [PYSPARK] test the absolute error in approx doctests · f0dcbe8a
      Scott Taylor authored
      A minor change, but one which is (presumably) visible on the public API docs webpage.
      
      Author: Scott Taylor <github@megatron.me.uk>
      
      Closes #6942 from megatron-me-uk/patch-3 and squashes the following commits:
      
      fbed000 [Scott Taylor] test the absolute error in approx doctests
  23. May 21, 2015
      [SPARK-6416] [DOCS] RDD.fold() requires the operator to be commutative · 6e534026
      Sean Owen authored
      Document current limitation of rdd.fold.
      
      This does not resolve SPARK-6416 but just documents the issue.
      CC JoshRosen
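      The limitation being documented can be demonstrated with a plain-Python model of a distributed fold (illustrative only): the zero value is folded into every partition and again into the merge step, and partition results may be combined in any order, so the operator must be commutative and the zero value truly neutral.

```python
from functools import reduce


def distributed_fold(partitions, zero, op):
    # Fold each partition starting from the zero value, then fold the
    # partial results (again starting from zero). On a real cluster
    # the order in which partials arrive is not deterministic, and the
    # zero value is applied once per partition plus once for the merge.
    partials = [reduce(op, part, zero) for part in partitions]
    return reduce(op, partials, zero)
```

      With `zero=0` and addition this matches a plain sum, but a non-neutral zero (e.g. `1`) is counted once per partition plus once more for the merge — exactly the surprise the documentation warns about.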
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #6231 from srowen/SPARK-6416 and squashes the following commits:
      
      9fef39f [Sean Owen] Add comment to other languages; reword to highlight the difference from non-distributed collections and to not suggest it is a bug that is to be fixed
      da40d84 [Sean Owen] Document current limitation of rdd.fold.
  24. May 18, 2015
      [SPARK-6216] [PYSPARK] check python version of worker with driver · 32fbd297
      Davies Liu authored
      This PR reverts #5404 and instead passes the Python version used by the driver into the JVM, checking it in the worker before deserializing the closure, so that it works with different major versions of Python.
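      A minimal sketch of the kind of check described (the function name is hypothetical; the real check lives in the worker startup path):

```python
import sys


def verify_python_version(driver_version):
    # Compare the worker's major.minor version against the string the
    # driver sent, and fail fast before deserializing any closure.
    worker_version = "%d.%d" % sys.version_info[:2]
    if worker_version != driver_version:
        raise RuntimeError(
            "Python in worker has different version %s than that in "
            "driver %s" % (worker_version, driver_version))
```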
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #6203 from davies/py_version and squashes the following commits:
      
      b8fb76e [Davies Liu] fix test
      6ce5096 [Davies Liu] use string for version
      47c6278 [Davies Liu] check python version of worker with driver
  25. May 09, 2015
      [SPARK-7438] [SPARK CORE] Fixed validation of relativeSD in countApproxDistinct · dda6d9f4
      Vinod K C authored
      Author: Vinod K C <vinod.kc@huawei.com>
      
      Closes #5974 from vinodkc/fix_countApproxDistinct_Validation and squashes the following commits:
      
      3a3d59c [Vinod K C] Reverted removal of validation relativeSD<0.000017
      799976e [Vinod K C] Removed testcase to assert IAE when relativeSD>3.7
      8ddbfae [Vinod K C] Remove blank line
      b1b00a3 [Vinod K C] Removed relativeSD validation from python API,RDD.scala will do validation
      122d378 [Vinod K C] Fixed validation of relativeSD in  countApproxDistinct
  26. Apr 21, 2015
      [SPARK-6949] [SQL] [PySpark] Support Date/Timestamp in Column expression · ab9128fb
      Davies Liu authored
      This PR enables auto_convert in JavaGateway, so that we can register a converter for a given type, for example, date and datetime.
      
      There are two bugs related to auto_convert, see [1] and [2], we workaround it in this PR.
      
      [1]  https://github.com/bartdag/py4j/issues/160
      [2] https://github.com/bartdag/py4j/issues/161
      
      cc rxin JoshRosen
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #5570 from davies/py4j_date and squashes the following commits:
      
      eb4fa53 [Davies Liu] fix tests in python 3
      d17d634 [Davies Liu] rollback changes in mllib
      2e7566d [Davies Liu] convert tuple into ArrayList
      ceb3779 [Davies Liu] Update rdd.py
      3c373f3 [Davies Liu] support date and datetime by auto_convert
      cb094ff [Davies Liu] enable auto convert
  27. Apr 16, 2015
      [SPARK-4897] [PySpark] Python 3 support · 04e44b37
      Davies Liu authored
      This PR updates PySpark to support Python 3 (tested with 3.4).
      
      Known issue: unpickle array from Pyrolite is broken in Python 3, those tests are skipped.
      
      TODO: ec2/spark-ec2.py is not fully tested with python3.
      
      Author: Davies Liu <davies@databricks.com>
      Author: twneale <twneale@gmail.com>
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #5173 from davies/python3 and squashes the following commits:
      
      d7d6323 [Davies Liu] fix tests
      6c52a98 [Davies Liu] fix mllib test
      99e334f [Davies Liu] update timeout
      b716610 [Davies Liu] Merge branch 'master' of github.com:apache/spark into python3
      cafd5ec [Davies Liu] adddress comments from @mengxr
      bf225d7 [Davies Liu] Merge branch 'master' of github.com:apache/spark into python3
      179fc8d [Davies Liu] tuning flaky tests
      8c8b957 [Davies Liu] fix ResourceWarning in Python 3
      5c57c95 [Davies Liu] Merge branch 'master' of github.com:apache/spark into python3
      4006829 [Davies Liu] fix test
      2fc0066 [Davies Liu] add python3 path
      71535e9 [Davies Liu] fix xrange and divide
      5a55ab4 [Davies Liu] Merge branch 'master' of github.com:apache/spark into python3
      125f12c [Davies Liu] Merge branch 'master' of github.com:apache/spark into python3
      ed498c8 [Davies Liu] fix compatibility with python 3
      820e649 [Davies Liu] Merge branch 'master' of github.com:apache/spark into python3
      e8ce8c9 [Davies Liu] Merge branch 'master' of github.com:apache/spark into python3
      ad7c374 [Davies Liu] fix mllib test and warning
      ef1fc2f [Davies Liu] fix tests
      4eee14a [Davies Liu] Merge branch 'master' of github.com:apache/spark into python3
      20112ff [Davies Liu] Merge branch 'master' of github.com:apache/spark into python3
      59bb492 [Davies Liu] fix tests
      1da268c [Davies Liu] Merge branch 'master' of github.com:apache/spark into python3
      ca0fdd3 [Davies Liu] fix code style
      9563a15 [Davies Liu] add imap back for python 2
      0b1ec04 [Davies Liu] make python examples work with Python 3
      d2fd566 [Davies Liu] Merge branch 'master' of github.com:apache/spark into python3
      a716d34 [Davies Liu] test with python 3.4
      f1700e8 [Davies Liu] fix test in python3
      671b1db [Davies Liu] fix test in python3
      692ff47 [Davies Liu] fix flaky test
      7b9699f [Davies Liu] invalidate import cache for Python 3.3+
      9c58497 [Davies Liu] fix kill worker
      309bfbf [Davies Liu] keep compatibility
      5707476 [Davies Liu] cleanup, fix hash of string in 3.3+
      8662d5b [Davies Liu] Merge branch 'master' of github.com:apache/spark into python3
      f53e1f0 [Davies Liu] fix tests
      70b6b73 [Davies Liu] compile ec2/spark_ec2.py in python 3
      a39167e [Davies Liu] support customize class in __main__
      814c77b [Davies Liu] run unittests with python 3
      7f4476e [Davies Liu] mllib tests passed
      d737924 [Davies Liu] pass ml tests
      375ea17 [Davies Liu] SQL tests pass
      6cc42a9 [Davies Liu] rename
      431a8de [Davies Liu] streaming tests pass
      78901a7 [Davies Liu] fix hash of serializer in Python 3
      24b2f2e [Davies Liu] pass all RDD tests
      35f48fe [Davies Liu] run future again
      1eebac2 [Davies Liu] fix conflict in ec2/spark_ec2.py
      6e3c21d [Davies Liu] make cloudpickle work with Python3
      2fb2db3 [Josh Rosen] Guard more changes behind sys.version; still doesn't run
      1aa5e8f [twneale] Turned out `pickle.DictionaryType is dict` == True, so swapped it out
      7354371 [twneale] buffer --> memoryview  I'm not super sure if this a valid change, but the 2.7 docs recommend using memoryview over buffer where possible, so hoping it'll work.
      b69ccdf [twneale] Uses the pure python pickle._Pickler instead of c-extension _pickle.Pickler. It appears pyspark 2.7 uses the pure python pickler as well, so this shouldn't degrade pickling performance (?).
      f40d925 [twneale] xrange --> range
      e104215 [twneale] Replaces 2.7 types.InstsanceType with 3.4 `object`....could be horribly wrong depending on how types.InstanceType is used elsewhere in the package--see http://bugs.python.org/issue8206
      79de9d0 [twneale] Replaces python2.7 `file` with 3.4 _io.TextIOWrapper
      2adb42d [Josh Rosen] Fix up some import differences between Python 2 and 3
      854be27 [Josh Rosen] Run `futurize` on Python code:
      7c5b4ce [Josh Rosen] Remove Python 3 check in shell.py.
  28. Apr 15, 2015
      [SPARK-6886] [PySpark] fix big closure with shuffle · f11288d5
      Davies Liu authored
      Currently, the created broadcast object has the same life cycle as the RDD in Python. For multi-stage jobs, a PythonRDD will be created in the JVM while the RDD in Python may be GCed, so the broadcast can be destroyed in the JVM before the PythonRDD.

      This PR changes to use the PythonRDD to track the lifecycle of the broadcast object. It also refactors getNumPartitions() to avoid unnecessary creation of PythonRDD, which can be heavy.
      
      cc JoshRosen
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #5496 from davies/big_closure and squashes the following commits:
      
      9a0ea4c [Davies Liu] fix big closure with shuffle
  29. Apr 10, 2015
      [SPARK-6216] [PySpark] check the python version in worker · 4740d6a1
      Davies Liu authored
      Author: Davies Liu <davies@databricks.com>
      
      Closes #5404 from davies/check_version and squashes the following commits:
      
      e559248 [Davies Liu] add tests
      ec33b5f [Davies Liu] check the python version in worker
      [SPARK-5969][PySpark] Fix descending pyspark.rdd.sortByKey. · 0375134f
      Milan Straka authored
      The samples should always be sorted in ascending order, because bisect.bisect_left is used on them. The reverse order of the result is already achieved in rangePartitioner by reversing the found index.

      The current implementation also works, but always uses only two partitions -- the first one and the last one (because bisect_left returns either "beginning" or "end" for a descending sequence).
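      The fix can be illustrated with the partition-index computation (a simplified sketch of the idea, not the actual rangePartitioner code): keep the sampled bounds ascending for `bisect_left`, and mirror the index when the sort is descending.

```python
from bisect import bisect_left


def range_partition(key, ascending_bounds, num_partitions, ascending=True):
    # bisect_left requires an ascending sequence; a descending sort is
    # obtained by reversing the index it finds, not the bounds
    # themselves, so all partitions are still used.
    p = bisect_left(ascending_bounds, key)
    return p if ascending else num_partitions - 1 - p
```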
      
      Author: Milan Straka <fox@ucw.cz>
      
      This patch had conflicts when merged, resolved by
      Committer: Josh Rosen <joshrosen@databricks.com>
      
      Closes #4761 from foxik/fix-descending-sort and squashes the following commits:
      
      95896b5 [Milan Straka] Add regression test for SPARK-5969.
      5757490 [Milan Straka] Fix descending pyspark.rdd.sortByKey.
  30. Apr 09, 2015
      [SPARK-3074] [PySpark] support groupByKey() with single huge key · b5c51c8d
      Davies Liu authored
      This patch changes groupByKey() to use an external-sort-based approach, so it can support a single huge key.

      For example, it can group a dataset including one hot key with 40 million values (strings), using 500M of memory for the Python worker, finishing in about 2 minutes (it would need 6G of memory with the hash-based approach).

      During groupByKey(), it does an in-memory groupBy first. If the dataset cannot fit in memory, the data is partitioned by hash. If one partition still cannot fit in memory, it switches to a sort-based groupBy().
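      The sort-based fallback can be sketched in miniature (the real implementation spills sorted runs to disk and merges them; this in-memory toy only shows the grouping step):

```python
from itertools import groupby


def sort_based_groupby(pairs):
    # Sort by key so equal keys become adjacent, then emit each run.
    # On disk, sorted runs can be merged streamwise, so no single key's
    # values need to fit in memory all at once.
    ordered = sorted(pairs, key=lambda kv: kv[0])
    return {k: [v for _, v in grp]
            for k, grp in groupby(ordered, key=lambda kv: kv[0])}
```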
      
      Author: Davies Liu <davies.liu@gmail.com>
      Author: Davies Liu <davies@databricks.com>
      
      Closes #1977 from davies/groupby and squashes the following commits:
      
      af3713a [Davies Liu] make sure it's iterator
      67772dd [Davies Liu] fix tests
      e78c15c [Davies Liu] address comments
      0b0fde8 [Davies Liu] address comments
      0dcf320 [Davies Liu] address comments, rollback changes in ResultIterable
      e3b8eab [Davies Liu] fix narrow dependency
      2a1857a [Davies Liu] typo
      d2f053b [Davies Liu] add repr for FlattedValuesSerializer
      c6a2f8d [Davies Liu] address comments
      9e2df24 [Davies Liu] Merge branch 'master' of github.com:apache/spark into groupby
      2b9c261 [Davies Liu] fix typo in comments
      70aadcd [Davies Liu] Merge branch 'master' of github.com:apache/spark into groupby
      a14b4bd [Davies Liu] Merge branch 'master' of github.com:apache/spark into groupby
      ab5515b [Davies Liu] Merge branch 'master' into groupby
      651f891 [Davies Liu] simplify GroupByKey
      1578f2e [Davies Liu] Merge branch 'master' of github.com:apache/spark into groupby
      1f69f93 [Davies Liu] fix tests
      0d3395f [Davies Liu] Merge branch 'master' of github.com:apache/spark into groupby
      341f1e0 [Davies Liu] add comments, refactor
      47918b8 [Davies Liu] remove unused code
      6540948 [Davies Liu] address comments:
      17f4ec6 [Davies Liu] Merge branch 'master' of github.com:apache/spark into groupby
      4d4bc86 [Davies Liu] bugfix
      8ef965e [Davies Liu] Merge branch 'master' into groupby
      fbc504a [Davies Liu] Merge branch 'master' into groupby
      779ed03 [Davies Liu] fix merge conflict
      2c1d05b [Davies Liu] refactor, minor turning
      b48cda5 [Davies Liu] Merge branch 'master' into groupby
      85138e6 [Davies Liu] Merge branch 'master' into groupby
      acd8e1b [Davies Liu] fix memory when groupByKey().count()
      905b233 [Davies Liu] Merge branch 'sort' into groupby
      1f075ed [Davies Liu] Merge branch 'master' into sort
      4b07d39 [Davies Liu] compress the data while spilling
      0a081c6 [Davies Liu] Merge branch 'master' into groupby
      f157fe7 [Davies Liu] Merge branch 'sort' into groupby
      eb53ca6 [Davies Liu] Merge branch 'master' into sort
      b2dc3bf [Davies Liu] Merge branch 'sort' into groupby
      644abaf [Davies Liu] add license in LICENSE
      19f7873 [Davies Liu] improve tests
      11ba318 [Davies Liu] typo
      085aef8 [Davies Liu] Merge branch 'master' into groupby
      3ee58e5 [Davies Liu] switch to sort based groupBy, based on size of data
      1ea0669 [Davies Liu] choose sort based groupByKey() automatically
      b40bae7 [Davies Liu] bugfix
      efa23df [Davies Liu] refactor, add spark.shuffle.sort=False
      250be4e [Davies Liu] flatten the combined values when dumping into disks
      d05060d [Davies Liu] group the same key before shuffle, reduce the comparison during sorting
      083d842 [Davies Liu] sorted based groupByKey()
      55602ee [Davies Liu] use external sort in sortBy() and sortByKey()
      b5c51c8d
  31. Apr 02, 2015
    • Davies Liu's avatar
      [SPARK-6667] [PySpark] remove setReuseAddress · 0cce5451
      Davies Liu authored
Reusing the address on the server side caused the server to fail to acknowledge newly connected connections, so remove it.

This PR retries once after a timeout; it also adds a timeout on the client side.
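As a rough illustration of the client-side behavior described above (a connect timeout plus one retry), here is a minimal sketch. The function name `connect_with_retry` and its parameters are hypothetical, not part of the actual patch:

```python
import socket

def connect_with_retry(host, port, timeout=3.0, retries=1):
    """Connect with a client-side timeout, retrying once on timeout.

    Hypothetical sketch of the behavior the commit describes; the real
    change lives in PySpark's JVM<->Python socket handshake.
    """
    last_err = None
    for attempt in range(retries + 1):
        try:
            # create_connection applies `timeout` to the connect itself
            return socket.create_connection((host, port), timeout=timeout)
        except socket.timeout as err:
            last_err = err  # retry once after a timeout, then give up
    raise last_err
```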
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #5324 from davies/collect_hang and squashes the following commits:
      
      e5a51a2 [Davies Liu] remove setReuseAddress
      7977c2f [Davies Liu] do retry on client side
      b838f35 [Davies Liu] retry after timeout
      0cce5451
  32. Mar 20, 2015
    • mbonaci's avatar
      [SPARK-6370][core] Documentation: Improve all 3 docs for RDD.sample · 28bcb9e9
      mbonaci authored
      The docs for the `sample` method were insufficient, now less so.
      
      Author: mbonaci <mbonaci@gmail.com>
      
      Closes #5097 from mbonaci/master and squashes the following commits:
      
      a6a9d97 [mbonaci] [SPARK-6370][core] Documentation: Improve all 3 docs for RDD.sample method
      28bcb9e9
  33. Mar 09, 2015
    • Davies Liu's avatar
      [SPARK-6194] [SPARK-677] [PySpark] fix memory leak in collect() · 8767565c
      Davies Liu authored
Because of a circular reference between JavaObject and JavaMember, a Java object cannot be released until the Python GC kicks in, which causes a memory leak in collect() that may consume lots of memory in the JVM.

This PR changes the way we send collected data back into Python from a local file to a socket, which avoids any disk IO during collect() and also avoids keeping any referrers to the Java object in Python.
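The underlying Python behavior can be demonstrated in isolation: objects in a reference cycle are not freed by CPython's reference counting alone and survive until the cycle collector runs. The `JavaObject`/`JavaMember` classes below are stand-ins for the Py4J classes named above, not the real ones:

```python
import gc
import weakref

# Stand-ins for Py4J's JavaObject/JavaMember (hypothetical, for illustration).
class JavaObject:
    pass

class JavaMember:
    pass

obj = JavaObject()
member = JavaMember()
obj.member = member
member.target = obj          # circular reference, as in the bug report
ref = weakref.ref(obj)

del obj, member
# Reference counting alone cannot free the cycle, so the (Java-side)
# object would stay referenced until the Python GC kicks in:
assert ref() is not None

gc.collect()                 # only the cycle collector releases it
assert ref() is None
```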
      
      cc JoshRosen
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #4923 from davies/fix_collect and squashes the following commits:
      
      d730286 [Davies Liu] address comments
      24c92a4 [Davies Liu] fix style
      ba54614 [Davies Liu] use socket to transfer data from JVM
      9517c8f [Davies Liu] fix memory leak in collect()
      8767565c
  34. Feb 25, 2015
    • Davies Liu's avatar
      [SPARK-5944] [PySpark] fix version in Python API docs · f3f4c87b
      Davies Liu authored
      use RELEASE_VERSION when building the Python API docs
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #4731 from davies/api_version and squashes the following commits:
      
      c9744c9 [Davies Liu] Update create-release.sh
      08cbc3f [Davies Liu] fix python docs
      f3f4c87b
  35. Feb 24, 2015
  36. Feb 17, 2015
    • Davies Liu's avatar
      [SPARK-5785] [PySpark] narrow dependency for cogroup/join in PySpark · c3d2b90b
      Davies Liu authored
Currently, PySpark does not support a narrow dependency during cogroup/join when the two RDDs have the same partitioner, so an unnecessary extra shuffle stage comes in.

The Python implementation of cogroup/join is different from the Scala one: it depends on union() and partitionBy(). This patch tries to use PartitionerAwareUnionRDD() in union() when all the RDDs have the same partitioner. It also fixes `reservePartitioner` in all the map() or mapPartitions() calls, so partitionBy() can skip the unnecessary shuffle stage.
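The precondition the patch checks can be sketched in plain Python. This helper is hypothetical (the real logic lives inside PySpark's `union()`), but it captures the condition: the narrow, shuffle-free path applies only when every input RDD has the same non-None partitioner:

```python
def can_use_narrow_union(partitioners):
    """Hypothetical sketch of the condition for PartitionerAwareUnionRDD:
    every input RDD must carry the same (non-None) partitioner."""
    if not partitioners:
        return False
    first = partitioners[0]
    return first is not None and all(p == first for p in partitioners)
```

For example, two RDDs both hash-partitioned into 4 partitions qualify, while an unpartitioned RDD in the mix forces the shuffle.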
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #4629 from davies/narrow and squashes the following commits:
      
      dffe34e [Davies Liu] improve test, check number of stages for join/cogroup
      1ed3ba2 [Davies Liu] Merge branch 'master' of github.com:apache/spark into narrow
      4d29932 [Davies Liu] address comment
      cc28d97 [Davies Liu] add unit tests
      940245e [Davies Liu] address comments
      ff5a0a6 [Davies Liu] skip the partitionBy() on Python side
      eb26c62 [Davies Liu] narrow dependency in PySpark
      c3d2b90b
  37. Feb 06, 2015
  38. Feb 04, 2015
    • Davies Liu's avatar
      [SPARK-5577] Python udf for DataFrame · dc101b0e
      Davies Liu authored
      Author: Davies Liu <davies@databricks.com>
      
      Closes #4351 from davies/python_udf and squashes the following commits:
      
      d250692 [Davies Liu] fix conflict
      34234d4 [Davies Liu] Merge branch 'master' of github.com:apache/spark into python_udf
      440f769 [Davies Liu] address comments
      f0a3121 [Davies Liu] track life cycle of broadcast
      f99b2e1 [Davies Liu] address comments
      462b334 [Davies Liu] Merge branch 'master' of github.com:apache/spark into python_udf
      7bccc3b [Davies Liu] python udf
      58dee20 [Davies Liu] clean up
      dc101b0e