  1. Apr 12, 2016
  2. Apr 08, 2016
    • [SPARK-14498][ML][PYTHON][SQL] Many cleanups to ML and ML-related docs · d7af736b
      Joseph K. Bradley authored
      ## What changes were proposed in this pull request?
      
      Cleanups to documentation.  No changes to code.
      * GBT docs: Move Scala doc for private object GradientBoostedTrees to public docs for GBTClassifier,Regressor
      * GLM regParam: needs doc saying it is for L2 only
      * TrainValidationSplitModel: add .. versionadded:: 2.0.0
      * Rename `_transformer_params_from_java` to `_transfer_params_from_java`
      * LogReg Summary classes: the `probability` column should not say "calibrated"
      * LR summaries: coefficientStandardErrors -> document that the intercept stderr comes last; same for t- and p-values
      * approxCountDistinct: Document the meaning of the `rsd` argument
      * LDA: note which params are for online LDA only
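      The `rsd` argument mentioned above is the target relative standard deviation of the approximate distinct count. As a hedged illustration of what that trades off — using the standard HyperLogLog error bound of 1.04/sqrt(m) for m registers, not a claim about Spark's exact internals — a minimal sketch:

```python
import math

def hll_registers_for_rsd(rsd):
    # Smallest power-of-two register count m with 1.04/sqrt(m) <= rsd,
    # per the standard HyperLogLog error bound (a sketch, not Spark's code).
    m = (1.04 / rsd) ** 2
    p = math.ceil(math.log2(m))  # HLL uses m = 2^p registers
    return 2 ** p

def achieved_rsd(m):
    # Relative standard deviation delivered by m registers.
    return 1.04 / math.sqrt(m)

# approxCountDistinct's documented default rsd is 0.05 (5% relative error).
print(hll_registers_for_rsd(0.05))
print(round(achieved_rsd(hll_registers_for_rsd(0.05)), 4))
```

      Smaller `rsd` values cost memory quadratically: halving the target error roughly quadruples the register count.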
      
      ## How was this patch tested?
      
      Doc build
      
      Author: Joseph K. Bradley <joseph@databricks.com>
      
      Closes #12266 from jkbradley/ml-doc-cleanups.
    • [SPARK-12569][PYSPARK][ML] DecisionTreeRegressor: provide variance of prediction: Python API · e0ad75f2
      wm624@hotmail.com authored
      ## What changes were proposed in this pull request?
      
      A new column, varianceCol, has been added to DecisionTreeRegressor in the ML Scala code.

      This patch adds the corresponding Python API, HasVarianceCol, to the DecisionTreeRegressor class.
      
      ## How was this patch tested?
      ./dev/lint-python
      PEP8 checks passed.
      rm -rf _build/*
      pydoc checks passed.
      
      ./python/run-tests --python-executables=python2.7 --modules=pyspark-ml
      Running PySpark tests. Output is in /Users/mwang/spark_ws_0904/python/unit-tests.log
      Will test against the following Python executables: ['python2.7']
      Will test the following Python modules: ['pyspark-ml']
      Finished test(python2.7): pyspark.ml.evaluation (12s)
      Finished test(python2.7): pyspark.ml.clustering (18s)
      Finished test(python2.7): pyspark.ml.classification (30s)
      Finished test(python2.7): pyspark.ml.recommendation (28s)
      Finished test(python2.7): pyspark.ml.feature (43s)
      Finished test(python2.7): pyspark.ml.regression (31s)
      Finished test(python2.7): pyspark.ml.tuning (19s)
      Finished test(python2.7): pyspark.ml.tests (34s)
      
      
      Author: wm624@hotmail.com <wm624@hotmail.com>
      
      Closes #12116 from wangmiao1981/fix_api.
    • [SPARK-14373][PYSPARK] PySpark RandomForestClassifier, Regressor support export/import · e5d8d6e0
      Kai Jiang authored
      ## What changes were proposed in this pull request?
      Add save/load support for `RandomForest{Classifier, Regressor}` in the Python API.
      [JIRA](https://issues.apache.org/jira/browse/SPARK-14373)
      ## How was this patch tested?
      doctest
      
      Author: Kai Jiang <jiangkai@gmail.com>
      
      Closes #12238 from vectorijk/spark-14373.
  3. Apr 06, 2016
    • [SPARK-13430][PYSPARK][ML] Python API for training summaries of linear and logistic regression · 9c6556c5
      Bryan Cutler authored
      ## What changes were proposed in this pull request?
      
      Adding Python API for training summaries of LogisticRegression and LinearRegression in PySpark ML.
      
      ## How was this patch tested?
      Added unit tests to exercise the API calls for the summary classes. Also manually verified that the values match those produced by the Scala API.
      
      Author: Bryan Cutler <cutlerb@gmail.com>
      
      Closes #11621 from BryanCutler/pyspark-ml-summary-SPARK-13430.
  4. Mar 31, 2016
    • [SPARK-14264][PYSPARK][ML] Add feature importance for GBTs in pyspark · b11887c0
      sethah authored
      ## What changes were proposed in this pull request?
      
      Feature importances are exposed in the python API for GBTs.
      
      Other changes:
      * Update the random forest feature importance documentation to reference the decision tree docstring instead of repeating it.
      
      ## How was this patch tested?
      
      Python doc tests were updated to validate GBT feature importance.
      
      Author: sethah <seth.hendrickson16@gmail.com>
      
      Closes #12056 from sethah/Pyspark_GBT_feature_importance.
  5. Mar 24, 2016
    • [SPARK-13949][ML][PYTHON] PySpark ml DecisionTreeClassifier, Regressor support export/import · 0874ff3a
      GayathriMurali authored
      ## What changes were proposed in this pull request?
      
      Added MLReadable and MLWritable to Decision Tree Classifier and Regressor. Added doctests.
      
      ## How was this patch tested?
      
      Python unit tests. Tests were added to check persistence in DecisionTreeClassifier and DecisionTreeRegressor.
      
      Author: GayathriMurali <gayathri.m.softie@gmail.com>
      
      Closes #11892 from GayathriMurali/SPARK-13949.
    • [SPARK-14107][PYSPARK][ML] Add seed as named argument to GBTs in pyspark · 58509771
      sethah authored
      ## What changes were proposed in this pull request?
      
      GBTs in pyspark previously had seed parameters, but they could not be passed as keyword arguments through the class constructor. This patch adds seed as a keyword argument and also sets a default value.
      
      ## How was this patch tested?
      
      Doc tests were updated to pass a random seed through the GBTClassifier and GBTRegressor constructors.
      
      Author: sethah <seth.hendrickson16@gmail.com>
      
      Closes #11944 from sethah/SPARK-14107.
  6. Mar 23, 2016
    • [SPARK-13068][PYSPARK][ML] Type conversion for Pyspark params · 30bdb5cb
      sethah authored
      ## What changes were proposed in this pull request?
      
      This patch adds type conversion functionality for parameters in Pyspark. A `typeConverter` field is added to the constructor of `Param` class. This argument is a function which converts values passed to this param to the appropriate type if possible. This is beneficial so that the params can fail at set time if they are given inappropriate values, but even more so because coherent error messages are now provided when Py4J cannot cast the python type to the appropriate Java type.
      
      This patch also adds a `TypeConverters` class with factory methods for common type conversions. Most of the changes involve adding these factory type converters to existing params. The previous solution to this issue, `expectedType`, is deprecated and can be removed in 2.1.0 as discussed on the Jira.
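      A minimal plain-Python sketch of the `typeConverter` mechanism described above (illustrative names and behavior only; not PySpark's actual `Param` implementation):

```python
class Param:
    def __init__(self, name, doc, typeConverter=None):
        self.name = name
        self.doc = doc
        # Identity conversion when no converter is supplied.
        self.typeConverter = typeConverter if typeConverter else (lambda v: v)

class TypeConverters:
    """Factory methods for common conversions, in the spirit of the patch."""
    @staticmethod
    def toFloat(value):
        try:
            return float(value)
        except (TypeError, ValueError):
            # Fail at set time with a coherent message instead of a
            # downstream Py4J casting error.
            raise TypeError("Could not convert %r to float" % (value,))

reg_param = Param("regParam", "regularization parameter (>= 0)",
                  typeConverter=TypeConverters.toFloat)
print(reg_param.typeConverter(0))  # an int is coerced to a float at set time
```

      The point of the design is that bad values are rejected where the user sets them, with a Python-level error message, rather than surfacing later as an opaque Java-side cast failure.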
      
      ## How was this patch tested?
      
      Unit tests were added in python/pyspark/ml/tests.py to test parameter type conversion. These tests check that values that should be convertible are converted correctly, and that the appropriate errors are thrown when invalid values are provided.
      
      Author: sethah <seth.hendrickson16@gmail.com>
      
      Closes #11663 from sethah/SPARK-13068-tc.
  7. Mar 22, 2016
    • [SPARK-13951][ML][PYTHON] Nested Pipeline persistence · 7e3423b9
      Joseph K. Bradley authored
      Adds support for saving and loading nested ML Pipelines from Python.  Pipeline and PipelineModel do not extend JavaWrapper, but they are able to utilize the JavaMLWriter, JavaMLReader implementations.
      
      Also:
      * Separates out interfaces from Java wrapper implementations for MLWritable, MLReadable, MLWriter, MLReader.
      * Moves methods _stages_java2py, _stages_py2java into Pipeline, PipelineModel as _transfer_stage_from_java, _transfer_stage_to_java
      
      Added new unit test for nested Pipelines.  Abstracted validity check into a helper method for the 2 unit tests.
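      The recursive shape of nested-pipeline persistence can be sketched in plain Python — stage metadata serialized to nested dicts, recursing when a stage is itself a Pipeline. The class and field names here are hypothetical, not Spark's on-disk format:

```python
class Stage:
    def __init__(self, uid):
        self.uid = uid
    def to_meta(self):
        return {"class": type(self).__name__, "uid": self.uid}

class Pipeline(Stage):
    def __init__(self, uid, stages):
        super().__init__(uid)
        self.stages = stages
    def to_meta(self):
        meta = super().to_meta()
        # Recurse: a stage may itself be a Pipeline.
        meta["stages"] = [s.to_meta() for s in self.stages]
        return meta

inner = Pipeline("pipe_inner", [Stage("tokenizer"), Stage("hashingTF")])
outer = Pipeline("pipe_outer", [inner, Stage("logreg")])
meta = outer.to_meta()
print(meta["stages"][0]["stages"][1]["uid"])
```

      Loading mirrors this: walk the metadata tree and rebuild a Pipeline wherever a stage entry carries its own `stages` list.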
      
      Author: Joseph K. Bradley <joseph@databricks.com>
      
      Closes #11866 from jkbradley/nested-pipeline-io.
      Closes #11835
  8. Mar 11, 2016
    • [SPARK-13787][ML][PYSPARK] Pyspark feature importances for decision tree and random forest · 234f781a
      sethah authored
      ## What changes were proposed in this pull request?
      
      This patch adds a `featureImportance` property to the Pyspark API for `DecisionTreeRegressionModel`, `DecisionTreeClassificationModel`, `RandomForestRegressionModel` and `RandomForestClassificationModel`.
      
      ## How was this patch tested?
      
      Python doc tests for the affected classes were updated to check feature importances.
      
      Author: sethah <seth.hendrickson16@gmail.com>
      
      Closes #11622 from sethah/SPARK-13787.
  9. Feb 25, 2016
    • [SPARK-13033] [ML] [PYSPARK] Add import/export for ml.regression · f3be369e
      Tommy YU authored
      Add export/import for all estimators and transformers (those with a Scala implementation) under pyspark/ml/regression.py.
      
      yanboliang Please help to review.
      For doctests, I thought it's enough to add one since the usage is common, but I can add one to each class if we want.
      
      Author: Tommy YU <tummyyu@163.com>
      
      Closes #11000 from Wenpei/spark-13033-ml.regression-exprot-import and squashes the following commits:
      
      3646b36 [Tommy YU] address review comments
      9cddc98 [Tommy YU] change base on review and pr 11197
      cc61d9d [Tommy YU] remove default parameter set
      19535d4 [Tommy YU] add export/import to regression
      44a9dc2 [Tommy YU] add import/export for ml.regression
  10. Feb 20, 2016
  11. Jan 29, 2016
    • [SPARK-13032][ML][PYSPARK] PySpark support model export/import and take LinearRegression as example · e51b6eaa
      Yanbo Liang authored
      * Implement ```MLWriter/MLWritable/MLReader/MLReadable``` for PySpark.
      * Make ```LinearRegression``` support ```save/load``` as an example. Once this is merged, the work for the other transformers/estimators will be easy; we can then list the tasks and distribute them to the community.
      
      cc mengxr jkbradley
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      Author: Joseph K. Bradley <joseph@databricks.com>
      
      Closes #10469 from yanboliang/spark-11939.
  12. Jan 26, 2016
    • [SPARK-10509][PYSPARK] Reduce excessive param boiler plate code · eb917291
      Holden Karau authored
      The current Python ML params require cut-and-pasting the param setup and description between the class and ```__init__``` methods. This removes that source of errors and simplifies the use of custom params by adding a ```_copy_new_parent``` method to Param, avoiding the cut-and-pasting (and cut-and-pasting at different indentation levels).
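      The idea can be sketched in plain Python (illustrative names; not PySpark's actual code): each Param is declared once at class level with a placeholder parent, and ```__init__``` re-binds a per-instance copy instead of duplicating the definition.

```python
import copy

class Param:
    def __init__(self, parent, name, doc):
        self.parent = parent
        self.name = name
        self.doc = doc
    def _copy_new_parent(self, parent):
        # Shallow-copy the param and attach it to a new parent instance.
        param = copy.copy(self)
        param.parent = parent
        return param

class HasMaxIter:
    # Declared exactly once, with a placeholder parent.
    maxIter = Param("undefined", "maxIter", "maximum number of iterations")
    def __init__(self):
        # Re-parent the class-level Param to this instance; no cut-and-paste.
        self.maxIter = HasMaxIter.maxIter._copy_new_parent(self)

est = HasMaxIter()
print(est.maxIter.parent is est)
```

      The class-level Param remains untouched, so every new instance gets its own correctly parented copy.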
      
      Author: Holden Karau <holden@us.ibm.com>
      
      Closes #10216 from holdenk/SPARK-10509-excessive-param-boiler-plate-code.
  13. Jan 06, 2016
    • [SPARK-11815][ML][PYSPARK] PySpark DecisionTreeClassifier & DecisionTreeRegressor should support setSeed · 3aa34882
      Yanbo Liang authored
      
      PySpark ```DecisionTreeClassifier``` & ```DecisionTreeRegressor``` should support ```setSeed```, as on the Scala side.
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #9807 from yanboliang/spark-11815.
  14. Dec 03, 2015
  15. Nov 18, 2015
  16. Nov 05, 2015
  17. Nov 02, 2015
  18. Oct 28, 2015
  19. Oct 27, 2015
  20. Oct 07, 2015
  21. Oct 06, 2015
  22. Sep 17, 2015
  23. Sep 11, 2015
    • [SPARK-10026] [ML] [PySpark] Implement some common Params for regression in PySpark · b656e613
      Yanbo Liang authored
      LinearRegression and LogisticRegression lack some Params on the Python side, and some Params are not shared classes, which means we have to write them for each class. Those Params are listed here:
      ```scala
      HasElasticNetParam
      HasFitIntercept
      HasStandardization
      HasThresholds
      ```
      Here we implement them as shared params on the Python side and bring the LinearRegression/LogisticRegression parameters to parity with the Scala ones.
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #8508 from yanboliang/spark-10026.
  24. Jul 07, 2015
    • [SPARK-8711] [ML] Add additional methods to PySpark ML tree models · 1dbc4a15
      MechCoder authored
      Add numNodes and depth to the tree models, and treeWeights to the ensemble models.
      Add __repr__ to all models.
      
      Author: MechCoder <manojkumarsivaraj334@gmail.com>
      
      Closes #7095 from MechCoder/missing_methods_tree and squashes the following commits:
      
      23b08be [MechCoder] private [spark]
      38a0860 [MechCoder] rename pyTreeWeights to javaTreeWeights
      6d16ad8 [MechCoder] Fix Python 3 Error
      47d7023 [MechCoder] Use np.allclose and treeEnsembleModel -> TreeEnsembleMethods
      819098c [MechCoder] [SPARK-8711] [ML] Add additional methods ot PySpark ML tree models
  25. May 20, 2015
    • [SPARK-7511] [MLLIB] pyspark ml seed param should be random by default or 42 is quite funny but not very random · 191ee474
      Holden Karau authored
      
      Author: Holden Karau <holden@pigscanfly.ca>
      
      Closes #6139 from holdenk/SPARK-7511-pyspark-ml-seed-param-should-be-random-by-default-or-42-is-quite-funny-but-not-very-random and squashes the following commits:
      
      591f8e5 [Holden Karau] specify old seed for doc tests
      2470004 [Holden Karau] Fix a bunch of seeds with default values to have None as the default which will then result in using the hash of the class name
      cbad96d [Holden Karau] Add the setParams function that is used in the real code
      423b8d7 [Holden Karau] Switch the test code to behave slightly more like production code. also don't check the param map value only check for key existence
      140d25d [Holden Karau] remove extra space
      926165a [Holden Karau] Add some missing newlines for pep8 style
      8616751 [Holden Karau] merge in master
      58532e6 [Holden Karau] its the __name__ method, also treat None values as not set
      56ef24a [Holden Karau] fix test and regenerate base
      afdaa5c [Holden Karau] make sure different classes have different results
      68eb528 [Holden Karau] switch default seed to hash of type of self
      89c4611 [Holden Karau] Merge branch 'master' into SPARK-7511-pyspark-ml-seed-param-should-be-random-by-default-or-42-is-quite-funny-but-not-very-random
      31cd96f [Holden Karau] specify the seed to randomforestregressor test
      e1b947f [Holden Karau] Style fixes
      ce90ec8 [Holden Karau] merge in master
      bcdf3c9 [Holden Karau] update docstring seeds to none and some other default seeds from 42
      65eba21 [Holden Karau] pep8 fixes
      0e3797e [Holden Karau] Make seed default to random in more places
      213a543 [Holden Karau] Simplify the generated code to only include set default if there is a default rather than having None is note None in the generated code
      1ff17c2 [Holden Karau] Make the seed random for HasSeed in python
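      The approach in the commits above ("switch default seed to hash of type of self") can be sketched in plain Python; the class names below are illustrative. Note that in Python 3 string hashing is randomized per interpreter run, so this default is stable within a run but not across runs:

```python
class HasSeed:
    def __init__(self):
        # Default the seed to a hash of the class name instead of a fixed 42,
        # so different estimator classes get different default seeds.
        self.seed = hash(type(self).__name__)

class RandomForestRegressor(HasSeed):
    pass

class GBTRegressor(HasSeed):
    pass

print(RandomForestRegressor().seed != GBTRegressor().seed)
```

      Users who need reproducibility across runs still set the seed explicitly, which is what the updated doctests above do.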
  26. May 18, 2015
    • [SPARK-7380] [MLLIB] pipeline stages should be copyable in Python · 9c7e802a
      Xiangrui Meng authored
      This PR makes pipeline stages in Python copyable and hence simplifies some implementations. It also includes the following changes:
      
      1. Rename `paramMap` and `defaultParamMap` to `_paramMap` and `_defaultParamMap`, respectively.
      2. Accept a list of param maps in `fit`.
      3. Use parent uid and name to identify param.
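      Points 1 and 3 above can be sketched in plain Python (illustrative names; not Spark's exact implementation): a param is identified by its parent's uid plus its name, and copying a stage duplicates its private ```_paramMap``` so copies diverge independently.

```python
import copy as _copy
import uuid

class Param:
    def __init__(self, parent_uid, name):
        self.parent = parent_uid
        self.name = name
    def __eq__(self, other):
        # Identity is (parent uid, name), so two Param objects naming the
        # same setting on the same stage compare equal.
        return (self.parent, self.name) == (other.parent, other.name)
    def __hash__(self):
        return hash((self.parent, self.name))

class Stage:
    def __init__(self):
        self.uid = type(self).__name__ + "_" + uuid.uuid4().hex[:12]
        self._paramMap = {}
    def set(self, param, value):
        self._paramMap[param] = value
        return self
    def copy(self):
        other = _copy.copy(self)
        other._paramMap = dict(self._paramMap)  # the copy gets its own map
        return other

s = Stage()
p = Param(s.uid, "maxIter")
s.set(p, 10)
c = s.copy()
c.set(Param(s.uid, "maxIter"), 20)  # equal params collide, so this overwrites
print(s._paramMap[p], c._paramMap[p])
```

      Because equality is value-based, a fresh ```Param(s.uid, "maxIter")``` finds the existing entry in the copy's map, while the original stage's map is untouched.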
      
      jkbradley
      
      Author: Xiangrui Meng <meng@databricks.com>
      Author: Joseph K. Bradley <joseph@databricks.com>
      
      Closes #6088 from mengxr/SPARK-7380 and squashes the following commits:
      
      413c463 [Xiangrui Meng] remove unnecessary doc
      4159f35 [Xiangrui Meng] Merge remote-tracking branch 'apache/master' into SPARK-7380
      611c719 [Xiangrui Meng] fix python style
      68862b8 [Xiangrui Meng] update _java_obj initialization
      927ad19 [Xiangrui Meng] fix ml/tests.py
      0138fc3 [Xiangrui Meng] update feature transformers and fix a bug in RegexTokenizer
      9ca44fb [Xiangrui Meng] simplify Java wrappers and add tests
      c7d84ef [Xiangrui Meng] update ml/tests.py to test copy params
      7e0d27f [Xiangrui Meng] merge master
      46840fb [Xiangrui Meng] update wrappers
      b6db1ed [Xiangrui Meng] update all self.paramMap to self._paramMap
      46cb6ed [Xiangrui Meng] merge master
      a163413 [Xiangrui Meng] fix style
      1042e80 [Xiangrui Meng] Merge remote-tracking branch 'apache/master' into SPARK-7380
      9630eae [Xiangrui Meng] fix Identifiable._randomUID
      13bd70a [Xiangrui Meng] update ml/tests.py
      64a536c [Xiangrui Meng] use _fit/_transform/_evaluate to simplify the impl
      02abf13 [Xiangrui Meng] Merge remote-tracking branch 'apache/master' into copyable-python
      66ce18c [Joseph K. Bradley] some cleanups before sending to Xiangrui
      7431272 [Joseph K. Bradley] Rebased with master
  27. May 14, 2015
    • [SPARK-7619] [PYTHON] fix docstring signature · 48fc38f5
      Xiangrui Meng authored
      Just realized that we need `\` at the end of the docstring. brkyvz
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #6161 from mengxr/SPARK-7619 and squashes the following commits:
      
      e44495f [Xiangrui Meng] fix docstring signature
    • [SPARK-7648] [MLLIB] Add weights and intercept to GLM wrappers in spark.ml · 723853ed
      Xiangrui Meng authored
      Otherwise, users can only use `transform` on the models. brkyvz
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #6156 from mengxr/SPARK-7647 and squashes the following commits:
      
      1ae3d2d [Xiangrui Meng] add weights and intercept to LogisticRegression in Python
      f49eb46 [Xiangrui Meng] add weights and intercept to LinearRegressionModel
  28. May 12, 2015
    • [SPARK-7487] [ML] Feature Parity in PySpark for ml.regression · 8e935b0a
      Burak Yavuz authored
      Added LinearRegression Python API
      
      Author: Burak Yavuz <brkyvz@gmail.com>
      
      Closes #6016 from brkyvz/ml-reg and squashes the following commits:
      
      11c9ef9 [Burak Yavuz] address comments
      1027a40 [Burak Yavuz] fix typo
      4c699ad [Burak Yavuz] added tree regressor api
      8afead2 [Burak Yavuz] made mixin for DT
      fa51c74 [Burak Yavuz] save additions
      0640d48 [Burak Yavuz] added ml.regression
      82aac48 [Burak Yavuz] added linear regression