  1. Apr 14, 2016
    • Reynold Xin's avatar
      [SPARK-14625] TaskUIData and ExecutorUIData shouldn't be case classes · de2ad528
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      I was trying to understand the accumulator and metrics update source code, and these two classes don't really need to be case classes. It would also be more consistent with other UI classes if they were not case classes. This is part of my bigger effort to simplify accumulators and task metrics.
      
      ## How was this patch tested?
      This is a straightforward refactoring without behavior change.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #12386 from rxin/SPARK-14625.
      de2ad528
    • gatorsmile's avatar
      [SPARK-14125][SQL] Native DDL Support: Alter View · 0d22092c
      gatorsmile authored
      #### What changes were proposed in this pull request?
      This PR is to provide a native DDL support for the following three Alter View commands:
      
      Based on the Hive DDL document:
      https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL
      ##### 1. ALTER VIEW RENAME
      **Syntax:**
      ```SQL
      ALTER VIEW view_name RENAME TO new_view_name
      ```
      - to change the name of a view to a different name
      - not allowed to rename a view's name by ALTER TABLE
      
      ##### 2. ALTER VIEW SET TBLPROPERTIES
      **Syntax:**
      ```SQL
      ALTER VIEW view_name SET TBLPROPERTIES ('comment' = new_comment);
      ```
      - to add metadata to a view
      - not allowed to set views' properties by ALTER TABLE
      - ignore it if trying to set a view's existing property key when the value is the same
      - overwrite the value if trying to set a view's existing key to a different value
      
      ##### 3. ALTER VIEW UNSET TBLPROPERTIES
      **Syntax:**
      ```SQL
      ALTER VIEW view_name UNSET TBLPROPERTIES [IF EXISTS] ('comment', 'key')
      ```
      - to remove metadata from a view
      - not allowed to unset views' properties by ALTER TABLE
      - issue an exception if trying to unset a view's non-existent key
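      The SET/UNSET semantics listed above can be sketched as a small model (hypothetical helper names, not Spark's implementation):

      ```python
      class NoSuchPropertyError(KeyError):
          pass

      def set_properties(props, updates):
          # Setting an existing key to the same value is a no-op;
          # a different value overwrites the old one.
          merged = dict(props)
          merged.update(updates)
          return merged

      def unset_properties(props, keys, if_exists=False):
          # Unsetting a non-existent key raises unless IF EXISTS was given.
          remaining = dict(props)
          for key in keys:
              if key in remaining:
                  del remaining[key]
              elif not if_exists:
                  raise NoSuchPropertyError(key)
          return remaining
      ```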
      
      #### How was this patch tested?
      Added test cases to verify if it works properly.
      
      Author: gatorsmile <gatorsmile@gmail.com>
      Author: xiaoli <lixiao1983@gmail.com>
      Author: Xiao Li <xiaoli@Xiaos-MacBook-Pro.local>
      
      Closes #12324 from gatorsmile/alterView.
      0d22092c
    • Dhruve Ashar's avatar
      [SPARK-14572][DOC] Update config docs to allow -Xms in extraJavaOptions · f83ba454
      Dhruve Ashar authored
      ## What changes were proposed in this pull request?
      The configuration docs are updated to reflect the changes introduced with [SPARK-12384](https://issues.apache.org/jira/browse/SPARK-12384). This allows the user to specify initial heap memory settings through the extraJavaOptions for executor, driver and am.
      
      ## How was this patch tested?
      The changes are tested in [SPARK-12384](https://issues.apache.org/jira/browse/SPARK-12384). This is just documenting the changes made.
      
      Author: Dhruve Ashar <dhruveashar@gmail.com>
      
      Closes #12333 from dhruve/doc/SPARK-14572.
      f83ba454
    • gatorsmile's avatar
      [SPARK-14518][SQL] Support Comment in CREATE VIEW · 3cf3db17
      gatorsmile authored
      #### What changes were proposed in this pull request?
      **HQL Syntax**: [Create View](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-Create/Drop/AlterView)
      ```SQL
      CREATE VIEW [IF NOT EXISTS] [db_name.]view_name [(column_name [COMMENT column_comment], ...) ]
        [COMMENT view_comment]
        [TBLPROPERTIES (property_name = property_value, ...)]
        AS SELECT ...;
      ```
      Add support for the `[COMMENT view_comment]` clause.
      
      #### How was this patch tested?
      Modified the existing test cases to verify the correctness.
      
      Author: gatorsmile <gatorsmile@gmail.com>
      Author: xiaoli <lixiao1983@gmail.com>
      Author: Xiao Li <xiaoli@Xiaos-MacBook-Pro.local>
      
      Closes #12288 from gatorsmile/addCommentInCreateView.
      3cf3db17
    • hyukjinkwon's avatar
      [MINOR][SQL] Remove extra anonymous closure within functional transformations · 6fc3dc88
      hyukjinkwon authored
      ## What changes were proposed in this pull request?
      
      This PR removes extra anonymous closure within functional transformations.
      
      For example,
      
      ```scala
      .map(item => {
        ...
      })
      ```
      
      which can simply be written as below:
      
      ```scala
      .map { item =>
        ...
      }
      ```
      
      ## How was this patch tested?
      
      Related unit tests and `sbt scalastyle`.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #12382 from HyukjinKwon/minor-extra-closers.
      6fc3dc88
    • Holden Karau's avatar
      [SPARK-14573][PYSPARK][BUILD] Fix PyDoc Makefile & highlighting issues · 478af2f4
      Holden Karau authored
      ## What changes were proposed in this pull request?
      
      The PyDoc Makefile used "=" rather than "?=" for setting env variables, so it overwrote the user's values. This ignored the environment variables we set for linting, allowing warnings through. This PR also fixes the warnings that had been introduced.
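      The difference between the two assignment operators is easy to demonstrate with a toy Makefile (the `SPHINXOPTS` variable name here is illustrative, not necessarily the one the PyDoc Makefile sets): `=` clobbers a value exported in the environment, while `?=` keeps it.

      ```shell
      # Two Makefiles that differ only in the assignment operator.
      printf 'SPHINXOPTS = -n\nshow:\n\t@echo "opts=[$(SPHINXOPTS)]"\n' > /tmp/override.mk
      printf 'SPHINXOPTS ?= -n\nshow:\n\t@echo "opts=[$(SPHINXOPTS)]"\n' > /tmp/preserve.mk

      # "=" ignores the exported value; "?=" only assigns when undefined,
      # so the environment value survives.
      SPHINXOPTS='-W' make -sf /tmp/override.mk show   # prints opts=[-n]
      SPHINXOPTS='-W' make -sf /tmp/preserve.mk show   # prints opts=[-W]
      ```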
      
      ## How was this patch tested?
      
      manual local export & make
      
      Author: Holden Karau <holden@us.ibm.com>
      
      Closes #12336 from holdenk/SPARK-14573-fix-pydoc-makefile.
      478af2f4
    • hyukjinkwon's avatar
      [SPARK-14596][SQL] Remove not used SqlNewHadoopRDD and some more unused imports · b4819404
      hyukjinkwon authored
      ## What changes were proposed in this pull request?
      
      The old `HadoopFsRelation` API includes `buildInternalScan()`, which uses `SqlNewHadoopRDD` in `ParquetRelation`.
      Now that the old API is removed, `SqlNewHadoopRDD` is no longer used.
      
      So, this PR removes `SqlNewHadoopRDD` and several unused imports.
      
      This was discussed in https://github.com/apache/spark/pull/12326.
      
      ## How was this patch tested?
      
      Several related existing unit tests and `sbt scalastyle`.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #12354 from HyukjinKwon/SPARK-14596.
      b4819404
  2. Apr 13, 2016
    • Davies Liu's avatar
      [SPARK-14607] [SPARK-14484] [SQL] fix case-insensitive predicates in FileSourceStrategy · 62b7f306
      Davies Liu authored
      ## What changes were proposed in this pull request?
      
      When pruning partitions or pushing down predicates, case sensitivity is not respected. To make this work case-insensitively, this PR updates the AttributeReference inside the predicate to use the name from the schema.
      
      ## How was this patch tested?
      
      Add regression tests for case-insensitive.
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #12371 from davies/case_insensi.
      62b7f306
    • Bryan Cutler's avatar
      [SPARK-14472][PYSPARK][ML] Cleanup ML JavaWrapper and related class hierarchy · fc3cd2f5
      Bryan Cutler authored
      Currently, JavaWrapper is only a wrapper class for pipeline classes that have Params, and JavaCallable is a separate mixin that provides methods to make Java calls. This change simplifies the class structure by defining the Java wrapper in a plain base class along with the methods to make Java calls. It also renames the Java wrapper classes to better reflect their purpose.
      
      Ran existing Python ml tests and generated documentation to test this change.
      
      Author: Bryan Cutler <cutlerb@gmail.com>
      
      Closes #12304 from BryanCutler/pyspark-cleanup-JavaWrapper-SPARK-14472.
      fc3cd2f5
    • Yuhao Yang's avatar
      [SPARK-13089][ML] [Doc] spark.ml Naive Bayes user guide and examples · 781df499
      Yuhao Yang authored
      jira: https://issues.apache.org/jira/browse/SPARK-13089
      
      Add a section in ml-classification.md for the NaiveBayes DataFrame-based API, plus example code (using include_example to clip code from files under the examples/ folder).
      
      Author: Yuhao Yang <hhbyyh@gmail.com>
      
      Closes #11015 from hhbyyh/naiveBayesDoc.
      781df499
    • Zheng RuiFeng's avatar
      [SPARK-14509][DOC] Add python CountVectorizerExample · fcdd6926
      Zheng RuiFeng authored
      ## What changes were proposed in this pull request?
      Add python CountVectorizerExample
      
      ## How was this patch tested?
      manual tests
      
      Author: Zheng RuiFeng <ruifengz@foxmail.com>
      
      Closes #11917 from zhengruifeng/cv_pe.
      fcdd6926
    • Yanbo Liang's avatar
      [SPARK-14375][ML] Unit test for spark.ml KMeansSummary · a91aaf5a
      Yanbo Liang authored
      ## What changes were proposed in this pull request?
      * Modify ```KMeansSummary.clusterSizes``` method to make it robust to empty clusters.
      * Add unit test for spark.ml ```KMeansSummary```.
      * Add Since tag.
      
      ## How was this patch tested?
      unit tests.
      
      cc jkbradley
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #12254 from yanboliang/spark-14375.
      a91aaf5a
    • Yanbo Liang's avatar
      [SPARK-14461][ML] GLM training summaries should provide solver · 0d17593b
      Yanbo Liang authored
      ## What changes were proposed in this pull request?
      GLM training summaries should provide solver.
      
      ## How was this patch tested?
      Unit tests.
      
      cc jkbradley
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #12253 from yanboliang/spark-14461.
      0d17593b
    • Yanbo Liang's avatar
      [SPARK-10386][MLLIB] PrefixSpanModel supports save/load · b0adb9f5
      Yanbo Liang authored
      ```PrefixSpanModel``` supports ```save/load```. It's similar with #9267.
      
      cc jkbradley
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #10664 from yanboliang/spark-10386.
      b0adb9f5
    • Davies Liu's avatar
      [SPARK-14581] [SQL] push predicates through more logical plans · dbbe1490
      Davies Liu authored
      ## What changes were proposed in this pull request?
      
      Right now, filter push down only works with Project, Aggregate, Generate and Join; filters can't be pushed through many other plans.
      
      This PR added support for Union, Intersect, Except and all unary plans.
      
      ## How was this patch tested?
      
      Added tests.
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #12342 from davies/filter_hint.
      dbbe1490
    • Yanbo Liang's avatar
      [SPARK-13783][ML] Model export/import for spark.ml: GBTs · f9d578ea
      Yanbo Liang authored
      ## What changes were proposed in this pull request?
      * Added save/load for ```GBTClassifier/GBTClassificationModel/GBTRegressor/GBTRegressionModel```.
      * Meanwhile, I modified ```EnsembleModelReadWrite.saveImpl/loadImpl``` to support save/load ```treeWeights```.
      
      ## How was this patch tested?
      Adds standard unit tests for GBT save/load.
      
      cc jkbradley GayathriMurali
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #12230 from yanboliang/spark-13783.
      f9d578ea
    • Andrew Or's avatar
      [SPARK-14388][SQL] Implement CREATE TABLE · 7d2ed8cc
      Andrew Or authored
      ## What changes were proposed in this pull request?
      
      This patch implements the `CREATE TABLE` command using the `SessionCatalog`. Previously we handled only `CTAS` and `CREATE TABLE ... USING`. This requires us to refactor `CatalogTable` to accept various fields (e.g. bucket and skew columns) and pass them to Hive.
      
      WIP: Note that I haven't verified whether this actually works yet! But I believe it does.
      
      ## How was this patch tested?
      
      Tests will come in a future commit.
      
      Author: Andrew Or <andrew@databricks.com>
      Author: Yin Huai <yhuai@databricks.com>
      
      Closes #12271 from andrewor14/create-table-ddl.
      7d2ed8cc
    • Timothy Hunter's avatar
      [SPARK-14568][ML] Instrumentation framework for logistic regression · 1018a1c1
      Timothy Hunter authored
      ## What changes were proposed in this pull request?
      
      This adds extra logging information about a `LogisticRegression` estimator when being fit on a dataset. With this PR, you see the following extra lines when running the example in the documentation:
      
      ```
      16/04/13 07:19:00 INFO Instrumentation: Instrumentation(LogisticRegression-logreg_55dd3c09f164-1230977381-1): training: numPartitions=1 storageLevel=StorageLevel(disk=true, memory=true, offheap=false, deserialized=true, replication=1)
      16/04/13 07:19:00 INFO Instrumentation: Instrumentation(LogisticRegression-logreg_55dd3c09f164-1230977381-1): {"regParam":0.3,"elasticNetParam":0.8,"maxIter":10}
      ...
      16/04/12 11:48:07 INFO Instrumentation: Instrumentation(LogisticRegression-logreg_a89eb23cb386-358781145):numClasses=2
      16/04/12 11:48:07 INFO Instrumentation: Instrumentation(LogisticRegression-logreg_a89eb23cb386-358781145):numFeatures=692
      ...
      16/04/13 07:19:01 INFO Instrumentation: Instrumentation(LogisticRegression-logreg_55dd3c09f164-1230977381-1): training finished
      ```
      
      ## How was this patch tested?
      
      This PR was manually tested.
      
      Author: Timothy Hunter <timhunter@databricks.com>
      
      Closes #12331 from thunterdb/1604-instrumentation.
      1018a1c1
    • Charles Allen's avatar
      [SPARK-14537][CORE] Make TaskSchedulerImpl waiting fail if context is shut down · dd11e401
      Charles Allen authored
      This patch makes the postStartHook throw an IllegalStateException if the SparkContext is shut down while it is waiting for the backend to be ready
      
      Author: Charles Allen <charles@allen-net.com>
      
      Closes #12301 from drcrallen/SPARK-14537.
      dd11e401
    • Liwei Lin's avatar
      [SPARK-13992][CORE][PYSPARK][FOLLOWUP] Update OFF_HEAP semantics for Java api and Python api · 23f93f55
      Liwei Lin authored
      ## What changes were proposed in this pull request?
      
      - updated `OFF_HEAP` semantics for `StorageLevels.java`
      - updated `OFF_HEAP` semantics for `storagelevel.py`
      
      ## How was this patch tested?
      
      no need to test
      
      Author: Liwei Lin <lwlin7@gmail.com>
      
      Closes #12126 from lw-lin/storagelevel.py.
      23f93f55
  3. Apr 12, 2016
    • Wenchen Fan's avatar
      [SPARK-14554][SQL][FOLLOW-UP] use checkDataset to check the result · a5f8c9b1
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      address this comment: https://github.com/apache/spark/pull/12322#discussion_r59417359
      
      ## How was this patch tested?
      
      N/A
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #12346 from cloud-fan/tmp.
      a5f8c9b1
    • hyukjinkwon's avatar
      [MINOR][SQL] Remove some unused imports in datasources. · 587cd554
      hyukjinkwon authored
      ## What changes were proposed in this pull request?
      
      It looks like several recent commits for datasources (maybe while removing the old `HadoopFsRelation` interface) missed removing some unused imports.
      
      This PR removes some unused imports in datasources.
      
      ## How was this patch tested?
      
      `sbt scalastyle` and some unit tests for them.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #12326 from HyukjinKwon/minor-imports.
      587cd554
    • Shixiong Zhu's avatar
      [SPARK-14579][SQL] Fix a race condition in StreamExecution.processAllAvailable · 768b3d62
      Shixiong Zhu authored
      ## What changes were proposed in this pull request?
      
      There is a race condition in `StreamExecution.processAllAvailable`. Here is an execution order to reproduce it.
      
      | Time        |Thread 1           | MicroBatchThread  |
      |:-------------:|:-------------:|:-----:|
      | 1 | |  `dataAvailable in constructNextBatch` returns false  |
      | 2 | addData(newData)      |   |
      | 3 | `noNewData = false` in  processAllAvailable |  |
      | 4 | | noNewData = true |
      | 5 | `noNewData` is true so just return | |
      
      The root cause is that checking `dataAvailable` and changing `noNewData` to `true` is not atomic. This PR puts these two actions into a `synchronized` block to make sure they are atomic.
      
      In addition, this PR also has the following changes:
      
      - Make `committedOffsets` and `availableOffsets` volatile to make sure they can be seen in other threads.
      - Copy the reference of `availableOffsets` to a local variable so that `sourceStatuses` can use a snapshot of `availableOffsets`.
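      The essence of the fix — making the availability check and the `noNewData` flip a single atomic step — can be mimicked in plain Python (a sketch, not Spark's Scala code):

      ```python
      import threading

      class ProcessAllAvailableModel:
          """Toy model of the fix: the data-availability check and the
          noNewData update happen under one lock, so a concurrent
          add_data() cannot slip in between them."""

          def __init__(self):
              self._lock = threading.Lock()
              self._pending = 0
              self.no_new_data = False

          def add_data(self, n):
              with self._lock:
                  self._pending += n
                  self.no_new_data = False   # new data invalidates the flag

          def construct_next_batch(self):
              with self._lock:
                  if self._pending == 0:
                      self.no_new_data = True  # atomic with the check above
                      return False
                  self._pending = 0
                  return True
      ```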
      
      ## How was this patch tested?
      
      Existing unit tests.
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #12339 from zsxwing/race-condition.
      768b3d62
    • Davies Liu's avatar
      [SPARK-14578] [SQL] Fix codegen for CreateExternalRow with nested wide schema · 372baf04
      Davies Liu authored
      ## What changes were proposed in this pull request?
      
      With a wide schema, the expressions for the fields are split into multiple functions, but the `loopVar` variable can't be accessed from the split functions, so this PR changes it into a class member.
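      The underlying scoping issue is generic: once one big generated method is split into helpers, its locals are no longer in scope, so shared state has to be promoted to a field. A Python analogue (illustrative names, not the generated Java code):

      ```python
      class RowWriter:
          # Before the fix, loopVar was a local of one generated method and
          # the split-out helpers couldn't see it. Promoting it to a member
          # (self.loop_var here) makes it visible to every split function.
          def __init__(self, row):
              self.loop_var = row          # shared state as a class member

          def write_fields_0(self):        # stands in for one split function
              return [f.upper() for f in self.loop_var[:2]]

          def write_fields_1(self):        # stands in for another split function
              return [f.upper() for f in self.loop_var[2:]]

          def write(self):
              return self.write_fields_0() + self.write_fields_1()
      ```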
      
      ## How was this patch tested?
      
      Added regression test.
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #12338 from davies/nested_row.
      372baf04
    • Sital Kedia's avatar
      [SPARK-14363] Fix executor OOM due to memory leak in the Sorter · d187e7de
      Sital Kedia authored
      ## What changes were proposed in this pull request?
      
      Fix a memory leak in the Sorter. When the UnsafeExternalSorter spills data to disk, it does not free up the underlying pointer array. As a result, we see a lot of executor OOMs and also memory underutilization.
      This is a regression partially introduced in PR https://github.com/apache/spark/pull/9241
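      The shape of the leak is generic: a spill writes the buffered rows to disk but keeps the in-memory array alive. A minimal sketch of the corrected behavior (hypothetical class, not the UnsafeExternalSorter code):

      ```python
      import os
      import pickle
      import tempfile

      class SpillingSorter:
          def __init__(self):
              self.buffer = []        # stands in for the pointer array
              self.spill_files = []

          def insert(self, record):
              self.buffer.append(record)

          def spill(self):
              # Write the sorted buffer to disk, then *release* the buffer;
              # keeping it allocated is the kind of leak described above.
              fd, path = tempfile.mkstemp()
              with os.fdopen(fd, 'wb') as f:
                  pickle.dump(sorted(self.buffer), f)
              self.spill_files.append(path)
              self.buffer = []        # free the in-memory array
      ```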
      
      ## How was this patch tested?
      
      Tested by running a job; observed around a 30% speedup after this change.
      
      Author: Sital Kedia <skedia@fb.com>
      
      Closes #12285 from sitalkedia/executor_oom.
      d187e7de
    • Reynold Xin's avatar
      [SPARK-14547] Avoid DNS resolution for reusing connections · c439d88e
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      This patch changes the connection creation logic in the network client module to avoid DNS resolution when reusing connections.
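      The idea can be sketched as keying a connection pool by the unresolved address, so a reused connection never triggers a lookup (illustrative only; the real logic lives in Spark's network-client module):

      ```python
      class ConnectionPool:
          def __init__(self, resolve, connect):
              self._resolve = resolve          # e.g. a DNS lookup (expensive)
              self._connect = connect
              self._pool = {}

          def get(self, host, port):
              key = (host, port)               # unresolved key: no DNS on reuse
              if key not in self._pool:
                  ip = self._resolve(host)     # resolve only on first connect
                  self._pool[key] = self._connect(ip, port)
              return self._pool[key]
      ```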
      
      ## How was this patch tested?
      Testing in production. This is too difficult to test in isolation (for high fidelity unit tests, we'd need to change the DNS resolution behavior in the JVM).
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #12315 from rxin/SPARK-14547.
      c439d88e
    • Davies Liu's avatar
      [SPARK-14544] [SQL] improve performance of SQL UI tab · 1ef5f8cf
      Davies Liu authored
      ## What changes were proposed in this pull request?
      
      This PR improves the performance of the SQL UI by:

      1) removing the details column on the all-executions page (the first page in the SQL tab); the details can still be checked by entering an execution's page.
      2) switching from break-all to break-word, since break-all has recently been super slow in Chrome.
      3) using "display: none" to hide a block.
      4) using one JS closure for all the executions, not one for each.
      5) removing the height limitation of details, so it doesn't need to be scrolled in a tiny window.
      
      ## How was this patch tested?
      
      Exists tests.
      
      ![ui](https://cloud.githubusercontent.com/assets/40902/14445712/68d7b258-0004-11e6-9b48-5d329b05d165.png)
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #12311 from davies/ui_perf.
      1ef5f8cf
    • Terence Yim's avatar
      [SPARK-14513][CORE] Fix threads left behind after stopping SparkContext · 3e53de4b
      Terence Yim authored
      ## What changes were proposed in this pull request?
      
      Shuts down the `QueuedThreadPool` used by the Jetty `Server` to avoid thread leakage after the SparkContext is stopped.

      Note: If this fix is going to be applied to `branch-1.6`, one more patch on the `NettyRpcEnv` class is needed so that `NettyRpcEnv._fileServer.shutdown` is called in the `NettyRpcEnv.cleanup` method. This is due to the removal of the `_fileServer` field from the `NettyRpcEnv` class on the master branch. Please advise if a second PR is necessary for bringing this fix back to `branch-1.6`.
      
      ## How was this patch tested?
      
      Ran the ./dev/run-tests locally
      
      Author: Terence Yim <terence@cask.co>
      
      Closes #12318 from chtyim/fixes/SPARK-14513-thread-leak.
      3e53de4b
    • bomeng's avatar
      [SPARK-14414][SQL] improve the error message class hierarchy · bcd20762
      bomeng authored
      ## What changes were proposed in this pull request?
      
      Previously we used `AnalysisException`, `ParseException`, `NoSuchFunctionException`, etc. when a parsing error was encountered. I am trying to make this consistent, with **minimum** impact on the current implementation, by changing the class hierarchy.
      1. `NoSuchItemException` is removed, since it is an abstract class and it just simply takes a message string.
      2. `NoSuchDatabaseException`, `NoSuchTableException`, `NoSuchPartitionException` and `NoSuchFunctionException` now extends `AnalysisException`, as well as `ParseException`, they are all under `AnalysisException` umbrella, but you can also determine how to use them in a granular way.
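      The resulting hierarchy can be modeled in a few lines (a Python stand-in for the Scala classes): every specific error is catchable as an `AnalysisException`, but each remains distinguishable for granular handling.

      ```python
      class AnalysisException(Exception):
          pass

      # ParseException and the NoSuch*Exceptions all sit under the
      # AnalysisException umbrella, as described above.
      class ParseException(AnalysisException):
          pass

      class NoSuchDatabaseException(AnalysisException):
          pass

      class NoSuchTableException(AnalysisException):
          pass

      class NoSuchFunctionException(AnalysisException):
          pass
      ```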
      
      ## How was this patch tested?
      The existing test cases should cover this patch.
      
      Author: bomeng <bmeng@us.ibm.com>
      
      Closes #12314 from bomeng/SPARK-14414.
      bcd20762
    • Davies Liu's avatar
      [SPARK-14562] [SQL] improve constraints propagation in Union · 85e68b4b
      Davies Liu authored
      ## What changes were proposed in this pull request?
      
      Currently, Union only takes the intersection of the constraints from its children; all others are dropped. We should try to merge them together.

      This PR tries to merge the constraints that have the same reference but come from different children; for example, `a > 10` and `a < 100` can be merged into `a > 10 || a < 100`.
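      A toy version of the merge: a constraint shared by both children survives as-is, and differing constraints on the same attribute combine with OR, while constraints known for only one child are dropped (a sketch over string constraints, not Catalyst expressions):

      ```python
      def merge_union_constraints(left, right):
          # left/right: dict mapping attribute name -> constraint string
          # for each child of the Union.
          merged = {}
          for attr in left.keys() & right.keys():
              if left[attr] == right[attr]:
                  merged[attr] = left[attr]                  # common constraint
              else:
                  # A Union row comes from either child, so OR the two.
                  merged[attr] = f"({left[attr]}) || ({right[attr]})"
          return merged
      ```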
      
      ## How was this patch tested?
      
      Added more cases in existing test.
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #12328 from davies/union_const.
      85e68b4b
    • Liwei Lin's avatar
      [SPARK-14556][SQL] Code clean-ups for package o.a.s.sql.execution.streaming.state · 852bbc6c
      Liwei Lin authored
      ## What changes were proposed in this pull request?
      
      - `StateStoreConf.**max**DeltasForSnapshot` was renamed to `StateStoreConf.**min**DeltasForSnapshot`
      - some state switch checks were added
      - improved consistency between method names and string literals
      - other comments & typo fix
      
      ## How was this patch tested?
      
      N/A
      
      Author: Liwei Lin <lwlin7@gmail.com>
      
      Closes #12323 from lw-lin/streaming-state-clean-up.
      852bbc6c
    • Yanbo Liang's avatar
      [SPARK-14147][ML][SPARKR] SparkR predict should not output feature column · 111a6247
      Yanbo Liang authored
      ## What changes were proposed in this pull request?
      SparkR does not support the vector type, which is the default type of the feature column in ML. R's predict also does not output an intermediate feature column, so SparkR ```predict``` should not output the feature column either. In this PR, I only fix this issue for ```naiveBayes``` and ```survreg```; ```kmeans``` already has the right code route, and ```glm``` will be fixed in the SparkRWrapper refactor (#12294).
      
      ## How was this patch tested?
      No new tests.
      
      cc mengxr shivaram
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #11958 from yanboliang/spark-14147.
      111a6247
    • Xiangrui Meng's avatar
      [SPARK-14563][ML] use a random table name instead of __THIS__ in SQLTransformer · 1995c2e6
      Xiangrui Meng authored
      ## What changes were proposed in this pull request?
      
      Use a random table name instead of `__THIS__` in SQLTransformer, and add a test for `transformSchema`. The problems of using `__THIS__` are:
      
      * It doesn't work under HiveContext (in Spark 1.6)
      * Race conditions
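      The fix pattern — register the input under a fresh unique name and substitute it into the statement — might look like the following (hypothetical helper, not the SQLTransformer source):

      ```python
      import uuid

      PLACEHOLDER = "__THIS__"

      def instantiate_statement(statement):
          # Replace the placeholder with a collision-resistant table name,
          # avoiding clashes when several transformers run concurrently.
          table = "sql_transformer_" + uuid.uuid4().hex[:12]
          return table, statement.replace(PLACEHOLDER, table)
      ```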
      
      ## How was this patch tested?
      
      * Manual test with HiveContext.
      * Added a unit test for `transformSchema` to improve coverage.
      
      cc: yhuai
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #12330 from mengxr/SPARK-14563.
      1995c2e6
    • Kai Jiang's avatar
      [SPARK-13597][PYSPARK][ML] Python API for GeneralizedLinearRegression · 7f024c47
      Kai Jiang authored
      ## What changes were proposed in this pull request?
      
      Python API for GeneralizedLinearRegression
      JIRA: https://issues.apache.org/jira/browse/SPARK-13597
      
      ## How was this patch tested?
      
      The patch is tested with Python doctest.
      
      Author: Kai Jiang <jiangkai@gmail.com>
      
      Closes #11468 from vectorijk/spark-13597.
      7f024c47
    • Yanbo Liang's avatar
      [SPARK-13322][ML] AFTSurvivalRegression supports feature standardization · 101663f1
      Yanbo Liang authored
      ## What changes were proposed in this pull request?
      AFTSurvivalRegression should support feature standardization; it will improve the convergence rate.
      I tested the convergence rate on the [Ovarian](https://stat.ethz.ch/R-manual/R-devel/library/survival/html/ovarian.html) data, a standard dataset that comes with the survival library in R:
      * without standardization (before this PR) -> 74 iterations.
      * with standardization (after this PR) -> 38 iterations.

      After this fix, training converges to the same solution with or without ```standardization```. That is, ```standardization = false``` will run the same code route as ```standardization = true```, because if the features are not standardized at all, convergence issues will arise when the features have very different scales. This behavior is the same as in ML [```LinearRegression``` and ```LogisticRegression```](https://issues.apache.org/jira/browse/SPARK-8522). See more discussion on this topic at #11247.
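      Feature standardization itself is the usual rescaling to zero mean and unit variance, applied per column before optimization; a minimal dependency-free sketch:

      ```python
      def standardize(column):
          # Rescale one feature column to mean 0 and (population) std 1.
          n = len(column)
          mean = sum(column) / n
          std = (sum((x - mean) ** 2 for x in column) / n) ** 0.5
          if std == 0.0:
              return [0.0] * n       # constant feature: leave at zero
          return [(x - mean) / std for x in column]
      ```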
      cc mengxr
      ## How was this patch tested?
      unit test.
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #11365 from yanboliang/spark-13322.
      101663f1
    • Yanbo Liang's avatar
      [SPARK-12566][SPARK-14324][ML] GLM model family, link function support in SparkR:::glm · 75e05a5a
      Yanbo Liang authored
      * SparkR glm supports families and link functions which match R's signature for family.
      * SparkR glm API refactor. The comparative standard of the new API is R glm, so I only expose the arguments that R glm supports: ```formula, family, data, epsilon and maxit```.
      * This PR focuses on glm() and predict(); summary statistics will be done in a separate PR after this one gets in.
      * This PR depends on #12287, which makes GLMs support link prediction on the Scala side. After that is merged, I will add more tests for predict() to this PR.
      
      Unit tests.
      
      cc mengxr jkbradley hhbyyh
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #12294 from yanboliang/spark-12566.
      75e05a5a
    • Shixiong Zhu's avatar
      [SPARK-14474][SQL] Move FileSource offset log into checkpointLocation · 6bf69214
      Shixiong Zhu authored
      ## What changes were proposed in this pull request?
      
      Now that we have a single location for storing checkpointed state, this PR just propagates the checkpoint location into FileStreamSource so that we don't have one random log off on its own.
      
      ## How was this patch tested?
      
      test("metadataPath should be in checkpointLocation")
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #12247 from zsxwing/file-source-log-location.
      6bf69214
    • Yong Tang's avatar
      [SPARK-3724][ML] RandomForest: More options for feature subset size. · da60b34d
      Yong Tang authored
      ## What changes were proposed in this pull request?
      
      This PR tries to support more options for the feature subset size in the RandomForest implementation. Previously, RandomForest only supported "auto", "all", "sqrt", "log2" and "onethird". This PR tries to support any given value, to allow model search.
      
      In this PR, `featureSubsetStrategy` could be passed with:
      a) a real number in the range `(0.0, 1.0]` that represents the fraction of the number of features in each subset, or
      b) an integer (`> 0`) that represents the number of features in each subset.
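      A sketch of how the extended strategy parsing could resolve to a feature count (hypothetical helper mirroring the options described above; the named strategies follow MLlib's documented meanings, with "auto" shown as the classification-style default):

      ```python
      import math

      def num_features_for_subset(strategy, num_features):
          named = {
              "all": num_features,
              "auto": int(math.sqrt(num_features)),   # classification default
              "sqrt": int(math.sqrt(num_features)),
              "log2": max(1, int(math.log2(num_features))),
              "onethird": max(1, int(num_features / 3)),
          }
          if strategy in named:
              return named[strategy]
          value = float(strategy)
          if 0.0 < value <= 1.0 and not strategy.lstrip("+-").isdigit():
              return max(1, int(value * num_features))   # fraction of features
          if value >= 1 and value == int(value):
              return min(num_features, int(value))       # absolute count
          raise ValueError(f"unsupported strategy: {strategy}")
      ```

      Note the ambiguity the two new forms introduce: the string "1" is treated as an integer count (one feature) while "1.0" is a fraction (all features), which is why the sketch checks for an all-digit string before the fraction branch.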
      
      ## How was this patch tested?
      
      Two tests `JavaRandomForestClassifierSuite` and `JavaRandomForestRegressorSuite` have been updated to check the additional options for params in this PR.
      An additional test has been added to `org.apache.spark.mllib.tree.RandomForestSuite` to cover the cases in this PR.
      
      Author: Yong Tang <yong.tang.github@outlook.com>
      
      Closes #11989 from yongtang/SPARK-3724.
      da60b34d
    • Cheng Lian's avatar
      [SPARK-14488][SPARK-14493][SQL] "CREATE TEMPORARY TABLE ... USING ... AS SELECT" shouldn't create persisted table · 124cbfb6
      Cheng Lian authored
      
      ## What changes were proposed in this pull request?
      
      When planning the logical plan node `CreateTableUsingAsSelect`, we neglected its `temporary` field and always generated a `CreateMetastoreDataSourceAsSelect`. This PR fixes this issue by generating a `CreateTempTableUsingAsSelect` when `temporary` is true.
      
      This PR also fixes SPARK-14493, since the root cause of SPARK-14493 is that `CreateMetastoreDataSourceAsSelect` uses the default Hive warehouse location when the `PATH` data source option is absent.
      
      ## How was this patch tested?
      
      Added a test case to create a temporary table using the target syntax and check whether it's indeed a temporary table.
      
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #12303 from liancheng/spark-14488-fix-ctas-using.
      124cbfb6