  1. Jun 30, 2015
    • [SPARK-4127] [MLLIB] [PYSPARK] Python bindings for StreamingLinearRegressionWithSGD · 45281664
      MechCoder authored
      Python bindings for StreamingLinearRegressionWithSGD
      
      Author: MechCoder <manojkumarsivaraj334@gmail.com>
      
      Closes #6744 from MechCoder/spark-4127 and squashes the following commits:
      
      d8f6457 [MechCoder] Moved StreamingLinearAlgorithm to pyspark.mllib.regression
      d47cc24 [MechCoder] Inherit from StreamingLinearAlgorithm
      1b4ddd6 [MechCoder] minor
      4de6c68 [MechCoder] Minor refactor
      5e85a3b [MechCoder] Add tests for simultaneous training and prediction
      fb27889 [MechCoder] Add example and docs
      505380b [MechCoder] Add tests
      d42bdae [MechCoder] [SPARK-4127] Python bindings for StreamingLinearRegressionWithSGD
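
      A minimal usage sketch of the new binding, assuming an existing StreamingContext and two DStreams of LabeledPoints (`trainingStream` and `testStream` are illustrative names):

      ```python
      from pyspark.mllib.linalg import Vectors
      from pyspark.mllib.regression import StreamingLinearRegressionWithSGD

      model = StreamingLinearRegressionWithSGD(stepSize=0.1, numIterations=50)
      model.setInitialWeights(Vectors.dense([0.0, 0.0]))  # must match feature count

      model.trainOn(trainingStream)  # weights are updated as each batch arrives
      model.predictOnValues(
          testStream.map(lambda lp: (lp.label, lp.features))).pprint()
      ```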
  2. Jun 16, 2015
    • [SPARK-7916] [MLLIB] MLlib Python doc parity check for classification and regression · ca998757
      Yanbo Liang authored
      Check the MLlib Python classification and regression docs and make them as complete as the Scala docs.
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #6460 from yanboliang/spark-7916 and squashes the following commits:
      
      f8deda4 [Yanbo Liang] trigger jenkins
      6dc4d99 [Yanbo Liang] address comments
      ce2a43e [Yanbo Liang] truncate too long line and remove extra sparse
      3eaf6ad [Yanbo Liang] MLlib Python doc parity check for classification and regression
  3. May 06, 2015
    • [SPARK-6267] [MLLIB] Python API for IsotonicRegression · 7b145783
      Yanbo Liang authored
      https://issues.apache.org/jira/browse/SPARK-6267
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #5890 from yanboliang/spark-6267 and squashes the following commits:
      
      f20541d [Yanbo Liang] Merge pull request #3 from mengxr/SPARK-6267
      7f202f9 [Xiangrui Meng] use Vector to have the best Python 2&3 compatibility
      4bccfee [Yanbo Liang] fix doctest
      ec09412 [Yanbo Liang] fix typos
      8214bbb [Yanbo Liang] fix code style
      5c8ebe5 [Yanbo Liang] Python API for IsotonicRegression
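
      A minimal sketch of the new API, assuming an existing SparkContext `sc`; training records are (label, feature, weight) tuples:

      ```python
      from pyspark.mllib.regression import IsotonicRegression

      data = sc.parallelize([(1.0, 1.0, 1.0), (2.0, 2.0, 1.0),
                             (2.0, 3.0, 1.0), (4.0, 4.0, 1.0)])
      model = IsotonicRegression.train(data, isotonic=True)

      model.predict(2.5)  # interpolates between the learned boundaries
      model.predict(sc.parallelize([0.5, 3.5])).collect()
      ```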
  4. Apr 21, 2015
    • [SPARK-6953] [PySpark] speed up python tests · 3134c3fe
      Reynold Xin authored
      This PR tries to speed up some Python tests:
      
      ```
      tests.py                       144s -> 103s      -41s
      mllib/classification.py         24s -> 17s        -7s
      mllib/regression.py             27s -> 15s       -12s
      mllib/tree.py                   27s -> 13s       -14s
      mllib/tests.py                  64s -> 31s       -33s
      streaming/tests.py             185s -> 84s      -101s
      ```
      Considering Python 3, the total saving will be 558s (almost 10 minutes), since core and streaming run three times and mllib runs twice.
      
      During testing, it now shows the time used by each test file:
      ```
      Run core tests ...
      Running test: pyspark/rdd.py ... ok (22s)
      Running test: pyspark/context.py ... ok (16s)
      Running test: pyspark/conf.py ... ok (4s)
      Running test: pyspark/broadcast.py ... ok (4s)
      Running test: pyspark/accumulators.py ... ok (4s)
      Running test: pyspark/serializers.py ... ok (6s)
      Running test: pyspark/profiler.py ... ok (5s)
      Running test: pyspark/shuffle.py ... ok (1s)
      Running test: pyspark/tests.py ... ok (103s)   144s
      ```
      
      Author: Reynold Xin <rxin@databricks.com>
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #5605 from rxin/python-tests-speed and squashes the following commits:
      
      d08542d [Reynold Xin] Merge pull request #14 from mengxr/SPARK-6953
      89321ee [Xiangrui Meng] fix seed in tests
      3ad2387 [Reynold Xin] Merge pull request #5427 from davies/python_tests
  5. Mar 31, 2015
    • [SPARK-6255] [MLLIB] Support multiclass classification in Python API · b5bd75d9
      Yanbo Liang authored
      Python API parity check for classification, plus multiclass classification support. The following major disparities need to be added for Python:
      ```scala
      LogisticRegressionWithLBFGS
          setNumClasses
          setValidateData
      LogisticRegressionModel
          getThreshold
          numClasses
          numFeatures
      SVMWithSGD
          setValidateData
      SVMModel
          getThreshold
      ```
      For users, the greatest benefit of this PR is that multiclass classification is now supported by the Python API.
      Users can train a multiclass classification model and use it for prediction in pyspark; a brief sketch follows the commit log below.
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #5137 from yanboliang/spark-6255 and squashes the following commits:
      
      0bd531e [Yanbo Liang] address comments
      444d5e2 [Yanbo Liang] LogisticRegressionModel.predict() optimization
      fc7990b [Yanbo Liang] address comments
      b0d9c63 [Yanbo Liang] Support Mulinomial LR model predict in Python API
      ded847c [Yanbo Liang] Python API parity check for classification (support multiclass classification)
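
      A brief sketch of the multiclass path, assuming an existing SparkContext `sc` and toy data; labels must lie in 0..numClasses-1:

      ```python
      from pyspark.mllib.classification import LogisticRegressionWithLBFGS
      from pyspark.mllib.regression import LabeledPoint

      data = sc.parallelize([LabeledPoint(0.0, [0.0, 1.0]),
                             LabeledPoint(1.0, [1.0, 0.0]),
                             LabeledPoint(2.0, [1.0, 1.0])])
      model = LogisticRegressionWithLBFGS.train(data, numClasses=3)
      model.predict([1.0, 0.0])  # returns the predicted class index
      model.numClasses           # one of the attributes exposed by this PR
      ```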
  6. Mar 25, 2015
    • [SPARK-6256] [MLlib] MLlib Python API parity check for regression · 43533738
      Yanbo Liang authored
      MLlib Python API parity check for regression. The following major disparities need to be added for Python (a short sketch of the new arguments follows the commit log below):
      ```scala
      LinearRegressionWithSGD
          setValidateData
      LassoWithSGD
          setIntercept
          setValidateData
      RidgeRegressionWithSGD
          setIntercept
          setValidateData
      ```
      setFeatureScaling is a private MLlib function that does not need to be exposed in pyspark.
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #4997 from yanboliang/spark-6256 and squashes the following commits:
      
      102f498 [Yanbo Liang] fix intercept issue & add doc test
      1fb7b4f [Yanbo Liang] change 'intercept' to 'addIntercept'
      de5ecbc [Yanbo Liang] MLlib Python API parity check for regression
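
      A sketch of the new keyword arguments, assuming `trainingData` is an RDD of LabeledPoints; they mirror Scala's setIntercept/setValidateData:

      ```python
      from pyspark.mllib.regression import LassoWithSGD, RidgeRegressionWithSGD

      lasso = LassoWithSGD.train(trainingData, iterations=100,
                                 intercept=True, validateData=True)
      ridge = RidgeRegressionWithSGD.train(trainingData, iterations=100,
                                           intercept=True, validateData=False)
      ```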
  7. Mar 20, 2015
    • [SPARK-6421][MLLIB] _regression_train_wrapper does not test initialWeights correctly · 257cde7c
      lewuathe authored
      Weight parameters must be initialized correctly even when a numpy array is passed as the initial weights.
      
      Author: lewuathe <lewuathe@me.com>
      
      Closes #5101 from Lewuathe/SPARK-6421 and squashes the following commits:
      
      7795201 [lewuathe] Fix lint-python errors
      21d4fe3 [lewuathe] Fix init logic of weights
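
      A sketch of the fixed behavior, assuming `trainingData` is an RDD of two-feature LabeledPoints:

      ```python
      import numpy as np
      from pyspark.mllib.regression import LinearRegressionWithSGD

      # After the fix, a numpy array behaves the same as a plain list here.
      initial = np.array([1.0, 1.0])
      model = LinearRegressionWithSGD.train(trainingData, initialWeights=initial)
      ```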
    • [SPARK-6095] [MLLIB] Support model save/load in Python's linear models · 48866f78
      Yanbo Liang authored
      For Python's linear models, the weights and intercept are stored in Python.
      This PR implements save/load functions for Python's linear models that do the same thing as the Scala ones.
      It also enables model import/export across languages; a sketch of the round trip follows the commit log below.
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #5016 from yanboliang/spark-6095 and squashes the following commits:
      
      d9bb824 [Yanbo Liang] fix python style
      b3813ca [Yanbo Liang] linear model save/load for Python reuse the Scala implementation
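
      A sketch of the round trip, assuming an existing SparkContext `sc` and an RDD `trainingData`; the path is hypothetical. Because the on-disk format matches Scala's, the saved model can be loaded from either language:

      ```python
      from pyspark.mllib.regression import (LinearRegressionModel,
                                            LinearRegressionWithSGD)

      model = LinearRegressionWithSGD.train(trainingData)
      model.save(sc, "/tmp/lr-model")  # hypothetical path
      sameModel = LinearRegressionModel.load(sc, "/tmp/lr-model")
      sameModel.weights, sameModel.intercept  # restored on the Python side
      ```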
  8. Mar 12, 2015
    • [mllib] [python] Add LassoModel to __all__ in regression.py · 17c309c8
      Joseph K. Bradley authored
      Add LassoModel to __all__ in regression.py
      
      LassoModel does not show up in the Python docs.
      
      This should be merged into branch-1.3 and master.
      
      Author: Joseph K. Bradley <joseph@databricks.com>
      
      Closes #4970 from jkbradley/SPARK-6253 and squashes the following commits:
      
      c2cb533 [Joseph K. Bradley] Add LassoModel to __all__ in regression.py
  9. Feb 25, 2015
    • [SPARK-5974] [SPARK-5980] [mllib] [python] [docs] Update ML guide with save/load, Python GBT · d20559b1
      Joseph K. Bradley authored
      * Add GradientBoostedTrees Python examples to ML guide
        * I ran these in the pyspark shell, and they worked.
      * Add save/load to examples in ML guide
      * Added a note to the Python docs about predict/transform not working within RDD actions/transformations in some cases (see SPARK-5981); a sketch of the recommended pattern follows the commit log below
      
      CC: mengxr
      
      Author: Joseph K. Bradley <joseph@databricks.com>
      
      Closes #4750 from jkbradley/SPARK-5974 and squashes the following commits:
      
      c410e38 [Joseph K. Bradley] Added note to LabeledPoint about attributes
      bcae18b [Joseph K. Bradley] Added import of models for save/load examples in ml guide.  Fixed line length for tree.py, feature.py (but not other ML Pyspark files yet).
      6d81c3e [Joseph K. Bradley] completed python GBT examples
      9903309 [Joseph K. Bradley] Added note to python docs about predict,transform not working within RDD actions,transformations in some cases
      c7dfad8 [Joseph K. Bradley] Added model save/load to ML guide.  Added GBT examples to ML guide
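
      A sketch of the pattern the new note recommends, assuming `model` wraps a JVM object (for example a tree model) and `data` is an RDD of LabeledPoints:

      ```python
      # Broken in some cases: predict() calls into the JVM, which is not
      # reachable from inside a worker-side transformation (SPARK-5981).
      # predictions = data.map(lambda lp: model.predict(lp.features))

      # Works: hand the whole RDD of features to predict() on the driver.
      predictions = model.predict(data.map(lambda lp: lp.features))
      ```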
  10. Feb 20, 2015
    • [SPARK-5867] [SPARK-5892] [doc] [ml] [mllib] Doc cleanups for 1.3 release · 4a17eedb
      Joseph K. Bradley authored
      For SPARK-5867:
      * The spark.ml programming guide needs to be updated to use the new SQL DataFrame API instead of the old SchemaRDD API.
      * It should also include Python examples now.
      
      For SPARK-5892:
      * Fix Python docs
      * Various other cleanups
      
      BTW, I accidentally merged this with master.  If you want to compile it on your own, use this branch which is based on spark/branch-1.3 and cherry-picks the commits from this PR: [https://github.com/jkbradley/spark/tree/doc-review-1.3-check]
      
      CC: mengxr  (ML),  davies  (Python docs)
      
      Author: Joseph K. Bradley <joseph@databricks.com>
      
      Closes #4675 from jkbradley/doc-review-1.3 and squashes the following commits:
      
      f191bb0 [Joseph K. Bradley] small cleanups
      e786efa [Joseph K. Bradley] small doc corrections
      6b1ab4a [Joseph K. Bradley] fixed python lint test
      946affa [Joseph K. Bradley] Added sample data for ml.MovieLensALS example.  Changed spark.ml Java examples to use DataFrames API instead of sql()
      da81558 [Joseph K. Bradley] Merge remote-tracking branch 'upstream/master' into doc-review-1.3
      629dbf5 [Joseph K. Bradley] Updated based on code review: * made new page for old migration guides * small fixes * moved inherit_doc in python
      b9df7c4 [Joseph K. Bradley] Small cleanups: toDF to toDF(), adding s for string interpolation
      34b067f [Joseph K. Bradley] small doc correction
      da16aef [Joseph K. Bradley] Fixed python mllib docs
      8cce91c [Joseph K. Bradley] GMM: removed old imports, added some doc
      695f3f6 [Joseph K. Bradley] partly done trying to fix inherit_doc for class hierarchies in python docs
      a72c018 [Joseph K. Bradley] made ChiSqTestResult appear in python docs
      b05a80d [Joseph K. Bradley] organize imports. doc cleanups
      e572827 [Joseph K. Bradley] updated programming guide for ml and mllib
  11. Nov 21, 2014
    • [SPARK-4531] [MLlib] cache serialized java object · ce95bd8e
      Davies Liu authored
      Pyrolite is pretty slow (compared to the ad hoc serializer in 1.1), which causes a significant performance regression in 1.2, because we cache the serialized Python objects in the JVM and deserialize them into Java objects at each step.
      
      This PR changes the caching to use the deserialized JavaRDD instead of the PythonRDD, avoiding Pyrolite deserialization. Memory usage should be similar to before, but much faster.
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #3397 from davies/cache and squashes the following commits:
      
      7f6e6ce [Davies Liu] Update -> Updater
      4b52edd [Davies Liu] using named argument
      63b984e [Davies Liu] fix
      7da0332 [Davies Liu] add unpersist()
      dff33e1 [Davies Liu] address comments
      c2bdfc2 [Davies Liu] refactor
      d572f00 [Davies Liu] Merge branch 'master' into cache
      f1063e1 [Davies Liu] cache serialized java object
  12. Nov 13, 2014
    • [SPARK-4372][MLLIB] Make LR and SVM's default parameters consistent in Scala and Python · 32218307
      Xiangrui Meng authored
      The current default regParam is 1.0, and regType is claimed to be none in Python (but is actually l2), while regParam = 0.0 and regType is L2 in Scala. We should make the default values consistent. This PR sets the default regType to L2 and regParam to 0.01. Note that the default regParam value in LIBLINEAR (and hence scikit-learn) is 1.0. However, we use average loss instead of total loss in our formulation; hence regParam=1.0 is definitely too heavy.
      
      In LinearRegression, we set regParam=0.0 and regType=None, because we have separate classes for Lasso and Ridge, both of which use regParam=0.01 as the default.
      
      davies atalwalkar
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #3232 from mengxr/SPARK-4372 and squashes the following commits:
      
      9979837 [Xiangrui Meng] update Ridge/Lasso to use default regParam 0.01 cast input arguments
      d3ba096 [Xiangrui Meng] change 'none' back to None
      1909a6e [Xiangrui Meng] change default regParam to 0.01 and regType to L2 in LR and SVM
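
      A sketch of the aligned defaults, assuming `trainingData` is an RDD of LabeledPoints:

      ```python
      from pyspark.mllib.classification import (LogisticRegressionWithSGD,
                                                SVMWithSGD)

      # Defaults now match Scala: regType="l2", regParam=0.01.
      lr = LogisticRegressionWithSGD.train(trainingData)
      # Both can still be overridden explicitly:
      svm = SVMWithSGD.train(trainingData, regParam=0.1, regType="l1")
      ```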
  13. Nov 11, 2014
    • [SPARK-4324] [PySpark] [MLlib] support numpy.array for all MLlib API · 65083e93
      Davies Liu authored
      This PR checks all of the existing Python MLlib APIs to make sure that numpy.array is supported as a Vector (and also as an RDD of numpy.array).
      
      It also improves some docstrings and doctests.
      
      cc mateiz mengxr
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #3189 from davies/numpy and squashes the following commits:
      
      d5057c4 [Davies Liu] fix tests
      6987611 [Davies Liu] support numpy.array for all MLlib API
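
      A sketch of what the check guarantees, assuming an existing SparkContext `sc`:

      ```python
      import numpy as np
      from pyspark.mllib.regression import LabeledPoint, LinearRegressionWithSGD

      # numpy arrays are accepted anywhere a Vector is expected.
      points = sc.parallelize([LabeledPoint(1.0, np.array([1.0, 0.0])),
                               LabeledPoint(3.0, np.array([2.0, 1.0]))])
      model = LinearRegressionWithSGD.train(points)
      model.predict(np.array([1.5, 0.5]))
      ```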
  14. Oct 31, 2014
    • [SPARK-4124] [MLlib] [PySpark] simplify serialization in MLlib Python API · 872fc669
      Davies Liu authored
      Create several helper functions to call the MLlib Java API, converting the arguments to Java types and the return values to Python objects automatically; this greatly simplifies serialization in the MLlib Python API.
      
      After this, the MLlib Python API does not need to deal with serialization details anymore, which makes it easier to add new APIs.
      
      cc mengxr
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #2995 from davies/cleanup and squashes the following commits:
      
      8fa6ec6 [Davies Liu] address comments
      16b85a0 [Davies Liu] Merge branch 'master' of github.com:apache/spark into cleanup
      43743e5 [Davies Liu] bugfix
      731331f [Davies Liu] simplify serialization in MLlib Python API
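
      A rough sketch of the resulting calling convention; `callMLlibFunc` comes from this PR, but the argument list below is illustrative rather than the exact signature, and `data` is an assumed RDD of LabeledPoints:

      ```python
      from pyspark.mllib.common import callMLlibFunc

      iterations, step, regParam = 100, 1.0, 0.01
      # One call converts the Python arguments to Java types, invokes the
      # named method on the JVM-side PythonMLLibAPI, and converts the
      # returned Java objects back to Python.
      weights, intercept = callMLlibFunc("trainLassoModelWithSGD", data,
                                         iterations, step, regParam)
      ```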
  15. Oct 16, 2014
    • [SPARK-3971] [MLLib] [PySpark] hotfix: Customized pickler should work in cluster mode · 091d32c5
      Davies Liu authored
      Customized picklers should be registered before unpickling, but in the executor there is no way to register them before the tasks run.
      
      So we need to register the picklers in the tasks themselves: duplicate javaToPython() and pythonToJava() in MLlib, and call SerDe.initialize() before pickling or unpickling.
      
      Author: Davies Liu <davies.liu@gmail.com>
      
      Closes #2830 from davies/fix_pickle and squashes the following commits:
      
      0c85fb9 [Davies Liu] revert the privacy change
      6b94e15 [Davies Liu] use JavaConverters instead of JavaConversions
      0f02050 [Davies Liu] hotfix: Customized pickler does not work in cluster
  16. Oct 06, 2014
    • [SPARK-3773][PySpark][Doc] Sphinx build warning · 2300eb58
      cocoatomo authored
      When building the Sphinx documents for PySpark, we get 12 warnings.
      Most of them are caused by docstrings in broken reST format.
      
      To reproduce this issue, run the following commands at commit 6e27cb63.
      
      ```bash
      $ cd ./python/docs
      $ make clean html
      ...
      /Users/<user>/MyRepos/Scala/spark/python/pyspark/__init__.py:docstring of pyspark.SparkContext.sequenceFile:4: ERROR: Unexpected indentation.
      /Users/<user>/MyRepos/Scala/spark/python/pyspark/__init__.py:docstring of pyspark.RDD.saveAsSequenceFile:4: ERROR: Unexpected indentation.
      /Users/<user>/MyRepos/Scala/spark/python/pyspark/mllib/classification.py:docstring of pyspark.mllib.classification.LogisticRegressionWithSGD.train:14: ERROR: Unexpected indentation.
      /Users/<user>/MyRepos/Scala/spark/python/pyspark/mllib/classification.py:docstring of pyspark.mllib.classification.LogisticRegressionWithSGD.train:16: WARNING: Definition list ends without a blank line; unexpected unindent.
      /Users/<user>/MyRepos/Scala/spark/python/pyspark/mllib/classification.py:docstring of pyspark.mllib.classification.LogisticRegressionWithSGD.train:17: WARNING: Block quote ends without a blank line; unexpected unindent.
      /Users/<user>/MyRepos/Scala/spark/python/pyspark/mllib/classification.py:docstring of pyspark.mllib.classification.SVMWithSGD.train:14: ERROR: Unexpected indentation.
      /Users/<user>/MyRepos/Scala/spark/python/pyspark/mllib/classification.py:docstring of pyspark.mllib.classification.SVMWithSGD.train:16: WARNING: Definition list ends without a blank line; unexpected unindent.
      /Users/<user>/MyRepos/Scala/spark/python/pyspark/mllib/classification.py:docstring of pyspark.mllib.classification.SVMWithSGD.train:17: WARNING: Block quote ends without a blank line; unexpected unindent.
      /Users/<user>/MyRepos/Scala/spark/python/docs/pyspark.mllib.rst:50: WARNING: missing attribute mentioned in :members: or __all__: module pyspark.mllib.regression, attribute RidgeRegressionModelLinearRegressionWithSGD
      /Users/<user>/MyRepos/Scala/spark/python/pyspark/mllib/tree.py:docstring of pyspark.mllib.tree.DecisionTreeModel.predict:3: ERROR: Unexpected indentation.
      ...
      checking consistency... /Users/<user>/MyRepos/Scala/spark/python/docs/modules.rst:: WARNING: document isn't included in any toctree
      ...
      copying static files... WARNING: html_static_path entry u'/Users/<user>/MyRepos/Scala/spark/python/docs/_static' does not exist
      ...
      build succeeded, 12 warnings.
      ```
      
      Author: cocoatomo <cocoatomo77@gmail.com>
      
      Closes #2653 from cocoatomo/issues/3773-sphinx-build-warnings and squashes the following commits:
      
      6f65661 [cocoatomo] [SPARK-3773][PySpark][Doc] Sphinx build warning
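
      A generic illustration of the kind of docstring fix involved (not the exact diff): reST wants a blank line between the summary and an indented block, otherwise Sphinx reports "Unexpected indentation":

      ```python
      # Broken reST: no blank line before the field list, so Sphinx
      # reports "ERROR: Unexpected indentation."
      def train_broken(data):
          """Train a model.
          :param data: the training data,
                       an RDD of LabeledPoint.
          """

      # Fixed: a blank line after the summary keeps the reST parser happy.
      def train_fixed(data):
          """Train a model.

          :param data: the training data,
            an RDD of LabeledPoint.
          """
      ```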
    • [SPARK-2461] [PySpark] Add a toString method to GeneralizedLinearModel · 20ea54cc
      Sandy Ryza authored
      Add a toString method to GeneralizedLinearModel; also change `__str__` to `__repr__` for some classes to provide a better message in repr.
      
      This PR is based on #1388, thanks to sryza!
      
      closes #1388
      
      Author: Sandy Ryza <sandy@cloudera.com>
      Author: Davies Liu <davies.liu@gmail.com>
      
      Closes #2625 from davies/string and squashes the following commits:
      
      3544aad [Davies Liu] fix LinearModel
      0bcd642 [Davies Liu] Merge branch 'sandy-spark-2461' of github.com:sryza/spark
      1ce5c2d [Sandy Ryza] __repr__ back to __str__ in a couple places
      aa9e962 [Sandy Ryza] Switch __str__ to __repr__
      a0c5041 [Sandy Ryza] Add labels back in
      1aa17f5 [Sandy Ryza] Match existing conventions
      fac1bc4 [Sandy Ryza] Fix PEP8 error
      f7b58ed [Sandy Ryza] SPARK-2461. Add a toString method to GeneralizedLinearModel
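
      A sketch of the effect; the format string mirrors what this change adds to LinearModel:

      ```python
      class LinearModel(object):
          """Simplified stand-in for the PySpark base class."""

          def __init__(self, weights, intercept):
              self._coeff = weights
              self._intercept = intercept

          def __repr__(self):
              # The shell now shows the parameters instead of the default
              # "<LinearModel object at 0x...>".
              return "(weights=%s, intercept=%r)" % (self._coeff, self._intercept)
      ```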
  17. Sep 19, 2014
    • [SPARK-3491] [MLlib] [PySpark] use pickle to serialize data in MLlib · fce5e251
      Davies Liu authored
      Currently, we serialize the data between the JVM and Python manually, case by case; this cannot scale to support so many APIs in MLlib.
      
      This patch tries to address the problem by serializing the data with the pickle protocol, using the Pyrolite library to serialize/deserialize in the JVM. The pickle protocol can be easily extended to support customized classes.
      
      All the modules are refactored to use this protocol.
      
      Known issues: there will be some performance regression (in both CPU and memory, since the serialized data is larger).
      
      Author: Davies Liu <davies.liu@gmail.com>
      
      Closes #2378 from davies/pickle_mllib and squashes the following commits:
      
      dffbba2 [Davies Liu] Merge branch 'master' of github.com:apache/spark into pickle_mllib
      810f97f [Davies Liu] fix equal of matrix
      032cd62 [Davies Liu] add more type check and conversion for user_product
      bd738ab [Davies Liu] address comments
      e431377 [Davies Liu] fix cache of rdd, refactor
      19d0967 [Davies Liu] refactor Picklers
      2511e76 [Davies Liu] cleanup
      1fccf1a [Davies Liu] address comments
      a2cc855 [Davies Liu] fix tests
      9ceff73 [Davies Liu] test size of serialized Rating
      44e0551 [Davies Liu] fix cache
      a379a81 [Davies Liu] fix pickle array in python2.7
      df625c7 [Davies Liu] Merge commit '154d141' into pickle_mllib
      154d141 [Davies Liu] fix autobatchedpickler
      44736d7 [Davies Liu] speed up pickling array in Python 2.7
      e1d1bfc [Davies Liu] refactor
      708dc02 [Davies Liu] fix tests
      9dcfb63 [Davies Liu] fix style
      88034f0 [Davies Liu] rafactor, address comments
      46a501e [Davies Liu] choose batch size automatically
      df19464 [Davies Liu] memorize the module and class name during pickleing
      f3506c5 [Davies Liu] Merge branch 'master' into pickle_mllib
      722dd96 [Davies Liu] cleanup _common.py
      0ee1525 [Davies Liu] remove outdated tests
      b02e34f [Davies Liu] remove _common.py
      84c721d [Davies Liu] Merge branch 'master' into pickle_mllib
      4d7963e [Davies Liu] remove muanlly serialization
      6d26b03 [Davies Liu] fix tests
      c383544 [Davies Liu] classification
      f2a0856 [Davies Liu] mllib/regression
      d9f691f [Davies Liu] mllib/util
      cccb8b1 [Davies Liu] mllib/tree
      8fe166a [Davies Liu] Merge branch 'pickle' into pickle_mllib
      aa2287e [Davies Liu] random
      f1544c4 [Davies Liu] refactor clustering
      52d1350 [Davies Liu] use new protocol in mllib/stat
      b30ef35 [Davies Liu] use pickle to serialize data for mllib/recommendation
      f44f771 [Davies Liu] enable tests about array
      3908f5c [Davies Liu] Merge branch 'master' into pickle
      c77c87b [Davies Liu] cleanup debugging code
      60e4e2f [Davies Liu] support unpickle array.array for Python 2.6
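
      A small sketch of the protocol at work on the Python side; Pyrolite performs the matching pickle/unpickle in the JVM:

      ```python
      from pyspark.serializers import PickleSerializer
      from pyspark.mllib.regression import LabeledPoint

      ser = PickleSerializer()
      lp = LabeledPoint(1.0, [0.5, 2.0])
      data = ser.dumps(lp)        # the bytes that cross the Python/JVM boundary
      restored = ser.loads(data)  # round-trips, including the custom class
      ```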
  18. Sep 03, 2014
    • [SPARK-3309] [PySpark] Put all public API in __all__ · 6481d274
      Davies Liu authored
      Put all public APIs in __all__, and also list them in pyspark.__init__.py, so that we can get documentation for all public APIs via `pydoc pyspark`. It can also be used by other programs (such as Sphinx or Epydoc) to generate documents only for public APIs.
      
      Author: Davies Liu <davies.liu@gmail.com>
      
      Closes #2205 from davies/public and squashes the following commits:
      
      c6c5567 [Davies Liu] fix message
      f7b35be [Davies Liu] put SchemeRDD, Row in pyspark.sql module
      7e3016a [Davies Liu] add __all__ in mllib
      6281b48 [Davies Liu] fix doc for SchemaRDD
      6caab21 [Davies Liu] add public interfaces into pyspark.__init__.py
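
      An illustrative excerpt of the convention (the list of names is abbreviated):

      ```python
      # pyspark/mllib/regression.py
      __all__ = ['LabeledPoint', 'LinearModel',
                 'LinearRegressionModel', 'LinearRegressionWithSGD',
                 'RidgeRegressionModel', 'RidgeRegressionWithSGD']

      # `pydoc pyspark`, Sphinx, and Epydoc use __all__ to decide which
      # names are public and therefore documented.
      ```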
  19. Aug 06, 2014
    • [SPARK-2627] [PySpark] have the build enforce PEP 8 automatically · d614967b
      Nicholas Chammas authored
      As described in [SPARK-2627](https://issues.apache.org/jira/browse/SPARK-2627), we'd like Python code to automatically be checked for PEP 8 compliance by Jenkins. This pull request aims to do that.
      
      Notes:
      * We may need to install [`pep8`](https://pypi.python.org/pypi/pep8) on the build server.
      * I'm expecting tests to fail now that PEP 8 compliance is being checked as part of the build. I'm fine with cleaning up any remaining PEP 8 violations as part of this pull request.
      * I did not understand why the RAT and scalastyle reports are saved to text files. I did the same for the PEP 8 check, but only so that the console output style can match those for the RAT and scalastyle checks. The PEP 8 report is removed right after the check is complete.
      * Updates to the ["Contributing to Spark"](https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark) guide will be submitted elsewhere, as I don't believe that text is part of the Spark repo.
      
      Author: Nicholas Chammas <nicholas.chammas@gmail.com>
      Author: nchammas <nicholas.chammas@gmail.com>
      
      Closes #1744 from nchammas/master and squashes the following commits:
      
      274b238 [Nicholas Chammas] [SPARK-2627] [PySpark] minor indentation changes
      983d963 [nchammas] Merge pull request #5 from apache/master
      1db5314 [nchammas] Merge pull request #4 from apache/master
      0e0245f [Nicholas Chammas] [SPARK-2627] undo erroneous whitespace fixes
      bf30942 [Nicholas Chammas] [SPARK-2627] PEP8: comment spacing
      6db9a44 [nchammas] Merge pull request #3 from apache/master
      7b4750e [Nicholas Chammas] merge upstream changes
      91b7584 [Nicholas Chammas] [SPARK-2627] undo unnecessary line breaks
      44e3e56 [Nicholas Chammas] [SPARK-2627] use tox.ini to exclude files
      b09fae2 [Nicholas Chammas] don't wrap comments unnecessarily
      bfb9f9f [Nicholas Chammas] [SPARK-2627] keep up with the PEP 8 fixes
      9da347f [nchammas] Merge pull request #2 from apache/master
      aa5b4b5 [Nicholas Chammas] [SPARK-2627] follow Spark bash style for if blocks
      d0a83b9 [Nicholas Chammas] [SPARK-2627] check that pep8 downloaded fine
      dffb5dd [Nicholas Chammas] [SPARK-2627] download pep8 at runtime
      a1ce7ae [Nicholas Chammas] [SPARK-2627] space out test report sections
      21da538 [Nicholas Chammas] [SPARK-2627] it's PEP 8, not PEP8
      6f4900b [Nicholas Chammas] [SPARK-2627] more misc PEP 8 fixes
      fe57ed0 [Nicholas Chammas] removing merge conflict backups
      9c01d4c [nchammas] Merge pull request #1 from apache/master
      9a66cb0 [Nicholas Chammas] resolving merge conflicts
      a31ccc4 [Nicholas Chammas] [SPARK-2627] miscellaneous PEP 8 fixes
      beaa9ac [Nicholas Chammas] [SPARK-2627] fail check on non-zero status
      723ed39 [Nicholas Chammas] always delete the report file
      0541ebb [Nicholas Chammas] [SPARK-2627] call Python linter from run-tests
      12440fa [Nicholas Chammas] [SPARK-2627] add Scala linter
      61c07b9 [Nicholas Chammas] [SPARK-2627] add Python linter
      75ad552 [Nicholas Chammas] make check output style consistent
  20. Aug 01, 2014
    • [SPARK-2550][MLLIB][APACHE SPARK] Support regularization and intercept in pyspark's linear methods. · c2811892
      Michael Giannakopoulos authored
      Related to issue: [SPARK-2550](https://issues.apache.org/jira/browse/SPARK-2550?jql=project%20%3D%20SPARK%20AND%20resolution%20%3D%20Unresolved%20AND%20priority%20%3D%20Major%20ORDER%20BY%20key%20DESC).
      
      Author: Michael Giannakopoulos <miccagiann@gmail.com>
      
      Closes #1624 from miccagiann/new-branch and squashes the following commits:
      
      c02e5f5 [Michael Giannakopoulos] Merge cleanly with upstream/master.
      8dcb888 [Michael Giannakopoulos] Putting the if/else if statements in brackets.
      fed8eaa [Michael Giannakopoulos] Adding a space in the message related to the IllegalArgumentException.
      44e6ff0 [Michael Giannakopoulos] Adding a blank line before python class LinearRegressionWithSGD.
      8eba9c5 [Michael Giannakopoulos] Change function signatures. Exception is thrown from the scala component and not from the python one.
      638be47 [Michael Giannakopoulos] Modified code to comply with code standards.
      ec50ee9 [Michael Giannakopoulos] Shorten the if-elif-else statement in regression.py file
      b962744 [Michael Giannakopoulos] Replaced the enum classes, with strings-keywords for defining the values of 'regType' parameter.
      78853ec [Michael Giannakopoulos] Providing intercept and regualizer functionallity for linear methods in only one function.
      3ac8874 [Michael Giannakopoulos] Added support for regularizer and intercection parameters for linear regression method.
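
      A sketch of the new knobs, assuming `trainingData` is an RDD of LabeledPoints:

      ```python
      from pyspark.mllib.regression import LinearRegressionWithSGD

      # regType picks the regularizer ("l1", "l2", or None); intercept
      # asks the Scala side to fit an intercept term as well.
      model = LinearRegressionWithSGD.train(trainingData, iterations=100,
                                            regParam=0.1, regType="l1",
                                            intercept=True)
      ```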
  21. Jun 04, 2014
    • [SPARK-1752][MLLIB] Standardize text format for vectors and labeled points · 189df165
      Xiangrui Meng authored
      We should standardize the text format used to represent vectors and labeled points. The proposed formats are the following:
      
      1. dense vector: `[v0,v1,..]`
      2. sparse vector: `(size,[i0,i1],[v0,v1])`
      3. labeled point: `(label,vector)`
      
      where "(..)" indicates a tuple and "[...]" indicate an array. `loadLabeledPoints` is added to pyspark's `MLUtils`. I didn't add `loadVectors` to pyspark because `RDD.saveAsTextFile` cannot stringify dense vectors in the proposed format automatically.
      
      `MLUtils#saveLabeledData` and `MLUtils#loadLabeledData` are deprecated. Users should use `RDD#saveAsTextFile` and `MLUtils#loadLabeledPoints` instead. In Scala, `MLUtils#loadLabeledPoints` is compatible with the format used by `MLUtils#loadLabeledData`.
      
      CC: @mateiz, @srowen
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #685 from mengxr/labeled-io and squashes the following commits:
      
      2d1116a [Xiangrui Meng] make loadLabeledData/saveLabeledData deprecated since 1.0.1
      297be75 [Xiangrui Meng] change LabeledPoint.parse to LabeledPointParser.parse to maintain binary compatibility
      d6b1473 [Xiangrui Meng] Merge branch 'master' into labeled-io
      56746ea [Xiangrui Meng] replace # by .
      623a5f0 [Xiangrui Meng] merge master
      f06d5ba [Xiangrui Meng] add docs and minor updates
      640fe0c [Xiangrui Meng] throw SparkException
      5bcfbc4 [Xiangrui Meng] update test to add scientific notations
      e86bf38 [Xiangrui Meng] remove NumericTokenizer
      050fca4 [Xiangrui Meng] use StringTokenizer
      6155b75 [Xiangrui Meng] merge master
      f644438 [Xiangrui Meng] remove parse methods based on eval from pyspark
      a41675a [Xiangrui Meng] python loadLabeledPoint uses Scala's implementation
      ce9a475 [Xiangrui Meng] add deserialize_labeled_point to pyspark with tests
      e9fcd49 [Xiangrui Meng] add serializeLabeledPoint and tests
      aea4ae3 [Xiangrui Meng] minor updates
      810d6df [Xiangrui Meng] update tokenizer/parser implementation
      7aac03a [Xiangrui Meng] remove Scala parsers
      c1885c1 [Xiangrui Meng] add headers and minor changes
      b0c50cb [Xiangrui Meng] add customized parser
      d731817 [Xiangrui Meng] style update
      63dc396 [Xiangrui Meng] add loadLabeledPoints to pyspark
      ea122b5 [Xiangrui Meng] Merge branch 'master' into labeled-io
      cd6c78f [Xiangrui Meng] add __str__ and parse to LabeledPoint
      a7a178e [Xiangrui Meng] add stringify to pyspark's Vectors
      5c2dbfa [Xiangrui Meng] add parse to pyspark's Vectors
      7853f88 [Xiangrui Meng] update pyspark's SparseVector.__str__
      e761d32 [Xiangrui Meng] make LabelPoint.parse compatible with the dense format used before v1.0 and deprecate loadLabeledData and saveLabeledData
      9e63a02 [Xiangrui Meng] add loadVectors and loadLabeledPoints
      19aa523 [Xiangrui Meng] update toString and add parsers for Vectors and LabeledPoint
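
      A sketch of the standardized formats from the Python side, assuming an existing SparkContext `sc`; the file path is hypothetical:

      ```python
      from pyspark.mllib.linalg import Vectors
      from pyspark.mllib.util import MLUtils

      str(Vectors.dense([1.0, 2.0]))              # '[1.0,2.0]'
      str(Vectors.sparse(4, [1, 3], [3.0, 4.0]))  # '(4,[1,3],[3.0,4.0])'
      Vectors.parse("(4,[1,3],[3.0,4.0])")        # round-trips the text form

      points = MLUtils.loadLabeledPoints(sc, "data/points.txt")
      ```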
  22. May 25, 2014
    • Fix PEP8 violations in Python mllib. · d33d3c61
      Reynold Xin authored
      Author: Reynold Xin <rxin@apache.org>
      
      Closes #871 from rxin/mllib-pep8 and squashes the following commits:
      
      848416f [Reynold Xin] Fixed a typo in the previous cleanup (c -> sc).
      a8db4cd [Reynold Xin] Fix PEP8 violations in Python mllib.
  23. Apr 15, 2014
    • [WIP] SPARK-1430: Support sparse data in Python MLlib · 63ca581d
      Matei Zaharia authored
      This PR adds a SparseVector class in PySpark and updates all the regression, classification and clustering algorithms and models to support sparse data, similar to MLlib. I chose to add this class because SciPy is quite difficult to install in many environments (more so than NumPy), but I plan to add support for SciPy sparse vectors later too, and make the methods work transparently on objects of either type.
      
      On the Scala side, we keep Python sparse vectors sparse and pass them to MLlib. We always return dense vectors from our models.
      
      Some to-do items left:
      - [x] Support SciPy's scipy.sparse matrix objects when SciPy is available. We can easily add a function to convert these to our own SparseVector.
      - [x] MLlib currently uses a vector with one extra column on the left to represent what we call LabeledPoint in Scala. Do we really want this? It may get annoying once you deal with sparse data since you must add/subtract 1 to each feature index when training. We can remove this API in 1.0 and use tuples for labeling.
      - [x] Explain how to use these in the Python MLlib docs.
      
      CC @mengxr, @joshrosen
      
      Author: Matei Zaharia <matei@databricks.com>
      
      Closes #341 from mateiz/py-ml-update and squashes the following commits:
      
      d52e763 [Matei Zaharia] Remove no-longer-needed slice code and handle review comments
      ea5a25a [Matei Zaharia] Fix remaining uses of copyto() after merge
      b9f97a3 [Matei Zaharia] Fix test
      1e1bd0f [Matei Zaharia] Add MLlib logistic regression example in Python
      88bc01f [Matei Zaharia] Clean up inheritance of LinearModel in Python, and expose its parametrs
      37ab747 [Matei Zaharia] Fix some examples and docs due to changes in MLlib API
      da0f27e [Matei Zaharia] Added a MLlib K-means example and updated docs to discuss sparse data
      c48e85a [Matei Zaharia] Added some tests for passing lists as input, and added mllib/tests.py to run-tests script.
      a07ba10 [Matei Zaharia] Fix some typos and calculation of initial weights
      74eefe7 [Matei Zaharia] Added LabeledPoint class in Python
      889dde8 [Matei Zaharia] Support scipy.sparse matrices in all our algorithms and models
      ab244d1 [Matei Zaharia] Allow SparseVectors to be initialized using a dict
      a5d6426 [Matei Zaharia] Add linalg.py to run-tests script
      0e7a3d8 [Matei Zaharia] Keep vectors sparse in Java when reading LabeledPoints
      eaee759 [Matei Zaharia] Update regression, classification and clustering models for sparse data
      2abbb44 [Matei Zaharia] Further work to get linear models working with sparse data
      154f45d [Matei Zaharia] Update docs, name some magic values
      881fef7 [Matei Zaharia] Added a sparse vector in Python and made Java-Python format more compact
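
      A sketch of the new class; per the commit list above, a dict form is also supported, and scipy.sparse vectors are accepted when SciPy is installed:

      ```python
      from pyspark.mllib.linalg import SparseVector

      sv1 = SparseVector(4, [1, 3], [1.0, 5.5])  # size, indices, values
      sv2 = SparseVector(4, {1: 1.0, 3: 5.5})    # dict initialization
      ```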
  24. Jan 12, 2014
    • Update some Python MLlib parameters to use camelCase, and tweak docs · 4c28a2ba
      Matei Zaharia authored
      We've used camel case in other Spark methods so it felt reasonable to
      keep using it here and make the code match Scala/Java as much as
      possible. Note that parameter names matter in Python because they allow
      passing optional parameters by name (a short sketch follows below).
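
      For example, with the camelCase names a caller can set a single optional parameter by keyword (a sketch, assuming `data` is an RDD of LabeledPoints):

      ```python
      from pyspark.mllib.classification import LogisticRegressionWithSGD

      # miniBatchFraction matches the Scala/Java name and can be passed
      # by keyword without supplying the parameters before it.
      model = LogisticRegressionWithSGD.train(data, miniBatchFraction=0.1)
      ```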
    • Add Naive Bayes to Python MLlib, and some API fixes · 9a0dfdf8
      Matei Zaharia authored
      - Added a Python wrapper for Naive Bayes
      - Updated the Scala Naive Bayes to match the style of our other
        algorithms better and in particular make it easier to call from Java
        (added builder pattern, removed default value in train method)
      - Updated Python MLlib functions to not require a SparkContext; we can
        get that from the RDD the user gives
      - Added a toString method in LabeledPoint
      - Made the Python MLlib tests run as part of run-tests as well (before
        they could only be run individually through each file)
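
      A sketch of the new wrapper, assuming an existing SparkContext `sc`:

      ```python
      from pyspark.mllib.classification import NaiveBayes
      from pyspark.mllib.regression import LabeledPoint

      data = sc.parallelize([LabeledPoint(0.0, [1.0, 0.0]),
                             LabeledPoint(1.0, [0.0, 1.0])])
      # No SparkContext argument: train() gets it from the RDD itself.
      model = NaiveBayes.train(data, 1.0)  # second argument is the smoothing parameter
      model.predict([0.0, 1.0])
      ```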