  1. Sep 22, 2016
    • Zhenhua Wang's avatar
      [SPARK-17625][SQL] set expectedOutputAttributes when converting... · de7df7de
      Zhenhua Wang authored
      [SPARK-17625][SQL] set expectedOutputAttributes when converting SimpleCatalogRelation to LogicalRelation
      
      ## What changes were proposed in this pull request?
      
      We should set `expectedOutputAttributes` when converting `SimpleCatalogRelation` to `LogicalRelation`; otherwise the outputs of `LogicalRelation` differ from the outputs of `SimpleCatalogRelation` - they have different exprIds.
      
      ## How was this patch tested?
      
      Added a test case.
      
      Author: Zhenhua Wang <wzh_zju@163.com>
      
      Closes #15182 from wzhfy/expectedAttributes.
      de7df7de
    • gatorsmile's avatar
      [SPARK-17492][SQL] Fix Reading Cataloged Data Sources without Extending SchemaRelationProvider · 3a80f92f
      gatorsmile authored
      ### What changes were proposed in this pull request?
      For data sources that do not extend `SchemaRelationProvider`, we expect users not to specify schemas when creating tables. If a schema is supplied by the user, an exception is thrown.
      
      Since Spark 2.1, for any data source, we store the schema in the metastore catalog to avoid inferring it every time. Thus, when reading a cataloged data source table, the schema can be read from the metastore catalog. In this case, we also get an exception. For example,
      
      ```Scala
      sql(
        s"""
           |CREATE TABLE relationProvierWithSchema
           |USING org.apache.spark.sql.sources.SimpleScanSource
           |OPTIONS (
           |  From '1',
           |  To '10'
           |)
         """.stripMargin)
      spark.table(tableName).show()
      ```
      ```
      org.apache.spark.sql.sources.SimpleScanSource does not allow user-specified schemas.;
      ```
      
      This PR fixes the above issue. When building a data source, we introduce a flag `isSchemaFromUsers` to indicate whether the schema was really supplied by the user. If true, we issue an exception. Otherwise, we call `createRelation` of `RelationProvider` to generate the `BaseRelation`, which contains the actual schema.
      
      ### How was this patch tested?
      Added a few cases.
      
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #15046 from gatorsmile/tempViewCases.
      3a80f92f
    • Yadong Qi's avatar
      [SPARK-17425][SQL] Override sameResult in HiveTableScanExec to make... · cb324f61
      Yadong Qi authored
      [SPARK-17425][SQL] Override sameResult in HiveTableScanExec to make ReuseExchange work in text format table
      
      ## What changes were proposed in this pull request?
      The PR will override the `sameResult` in `HiveTableScanExec` to make `ReuseExchange` work in text format table.
      
      ## How was this patch tested?
      # SQL
      ```sql
      SELECT * FROM src t1
      JOIN src t2 ON t1.key = t2.key
      JOIN src t3 ON t1.key = t3.key;
      ```
      
      # Before
      ```
      == Physical Plan ==
      *BroadcastHashJoin [key#30], [key#34], Inner, BuildRight
      :- *BroadcastHashJoin [key#30], [key#32], Inner, BuildRight
      :  :- *Filter isnotnull(key#30)
      :  :  +- HiveTableScan [key#30, value#31], MetastoreRelation default, src
      :  +- BroadcastExchange HashedRelationBroadcastMode(List(cast(input[0, int, false] as bigint)))
      :     +- *Filter isnotnull(key#32)
      :        +- HiveTableScan [key#32, value#33], MetastoreRelation default, src
      +- BroadcastExchange HashedRelationBroadcastMode(List(cast(input[0, int, false] as bigint)))
         +- *Filter isnotnull(key#34)
            +- HiveTableScan [key#34, value#35], MetastoreRelation default, src
      ```
      
      # After
      ```
      == Physical Plan ==
      *BroadcastHashJoin [key#2], [key#6], Inner, BuildRight
      :- *BroadcastHashJoin [key#2], [key#4], Inner, BuildRight
      :  :- *Filter isnotnull(key#2)
      :  :  +- HiveTableScan [key#2, value#3], MetastoreRelation default, src
      :  +- BroadcastExchange HashedRelationBroadcastMode(List(cast(input[0, int, false] as bigint)))
      :     +- *Filter isnotnull(key#4)
      :        +- HiveTableScan [key#4, value#5], MetastoreRelation default, src
      +- ReusedExchange [key#6, value#7], BroadcastExchange HashedRelationBroadcastMode(List(cast(input[0, int, false] as bigint)))
      ```
      
      cc: davies cloud-fan
      
      Author: Yadong Qi <qiyadong2010@gmail.com>
      
      Closes #14988 from watermen/SPARK-17425.
      cb324f61
  2. Sep 21, 2016
    • Wenchen Fan's avatar
      [SPARK-17609][SQL] SessionCatalog.tableExists should not check temp view · b50b34f5
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      After #15054, there is no place in Spark SQL that needs `SessionCatalog.tableExists` to check temp views, so this PR makes `SessionCatalog.tableExists` only check permanent tables/views and removes some hacks.
      
      This PR also improves `getTempViewOrPermanentTableMetadata`, introduced in #15054, to make the code simpler.
      
      ## How was this patch tested?
      
      existing tests
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #15160 from cloud-fan/exists.
      b50b34f5
    • Davies Liu's avatar
      [SPARK-17494][SQL] changePrecision() on compact decimal should respect rounding mode · 8bde03bf
      Davies Liu authored
      ## What changes were proposed in this pull request?
      
      Floor()/Ceil() of decimal is implemented using changePrecision() by passing a rounding mode, but the rounding mode is not respected when the decimal is in compact mode (could fit within a Long).
      
      This updates changePrecision() to respect the rounding mode, which could be ROUND_FLOOR, ROUND_CEIL, ROUND_HALF_UP, or ROUND_HALF_EVEN.
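
      For illustration, here is a minimal Scala sketch of why the rounding mode matters when changing scale, using plain `java.math.BigDecimal` rather than Spark's `Decimal`:
      ```scala
      import java.math.{BigDecimal => JBigDecimal, RoundingMode}

      val d = new JBigDecimal("-3.7")
      d.setScale(0, RoundingMode.FLOOR)     // -4, what Floor() expects
      d.setScale(0, RoundingMode.CEILING)   // -3, what Ceil() expects
      d.setScale(0, RoundingMode.HALF_UP)   // -4
      d.setScale(0, RoundingMode.HALF_EVEN) // -4
      ```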
      
      ## How was this patch tested?
      
      Added regression tests.
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #15154 from davies/decimal_round.
      8bde03bf
    • Michael Armbrust's avatar
      [SPARK-17627] Mark Streaming Providers Experimental · 3497ebe5
      Michael Armbrust authored
      All of structured streaming is experimental in its first release.  We missed the annotation on two of the APIs.
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #15188 from marmbrus/experimentalApi.
      3497ebe5
    • Yanbo Liang's avatar
      [SPARK-17315][FOLLOW-UP][SPARKR][ML] Fix print of Kolmogorov-Smirnov test summary · 6902edab
      Yanbo Liang authored
      ## What changes were proposed in this pull request?
      #14881 added a Kolmogorov-Smirnov test wrapper to SparkR. I found that ```print.summary.KSTest``` was implemented inappropriately and had no effect.
      Running the following code for KSTest:
      ```r
      data <- data.frame(test = c(0.1, 0.15, 0.2, 0.3, 0.25, -1, -0.5))
      df <- createDataFrame(data)
      testResult <- spark.kstest(df, "test", "norm")
      summary(testResult)
      ```
      Before this PR:
      ![image](https://cloud.githubusercontent.com/assets/1962026/18615016/b9a2823a-7d4f-11e6-934b-128beade355e.png)
      After this PR:
      ![image](https://cloud.githubusercontent.com/assets/1962026/18615014/aafe2798-7d4f-11e6-8b99-c705bb9fe8f2.png)
      The new implementation is similar to [```print.summary.GeneralizedLinearRegressionModel```](https://github.com/apache/spark/blob/master/R/pkg/R/mllib.R#L284) of SparkR and [```print.summary.glm```](https://svn.r-project.org/R/trunk/src/library/stats/R/glm.R) of native R.
      
      BTW, I removed the comparison of ```print.summary.KSTest``` in the unit test, since it only wraps the summary output, which has already been checked. Another reason is that this comparison would print summary information to the test console and clutter the test output.
      
      ## How was this patch tested?
      Existing test.
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #15139 from yanboliang/spark-17315.
      6902edab
    • Yanbo Liang's avatar
      [SPARK-17577][SPARKR][CORE] SparkR support add files to Spark job and get by executors · c133907c
      Yanbo Liang authored
      ## What changes were proposed in this pull request?
      Scala/Python users can add files to a Spark job via the submit option ```--files``` or ```SparkContext.addFile()```. Meanwhile, users can retrieve an added file with ```SparkFiles.get(filename)```.
      We should also support this functionality for SparkR users, since they also need shared dependency files. For example, SparkR users can first download third-party R packages to the driver, add these files to the Spark job as dependencies with this API, and then each executor can install the packages via ```install.packages```.
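
      For context, a minimal sketch of the existing Scala pattern that this PR brings to SparkR (the file name and app name below are illustrative):
      ```scala
      import org.apache.spark.{SparkConf, SparkContext, SparkFiles}

      val sc = new SparkContext(new SparkConf().setMaster("local[*]").setAppName("addFileDemo"))
      sc.addFile("/tmp/deps/my_package.tar.gz")   // ship a dependency with the job
      sc.parallelize(1 to 2).foreach { _ =>
        // each executor resolves its own local copy of the shipped file
        println(SparkFiles.get("my_package.tar.gz"))
      }
      ```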
      
      ## How was this patch tested?
      Add unit test.
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #15131 from yanboliang/spark-17577.
      c133907c
    • Burak Yavuz's avatar
      [SPARK-17569] Make StructuredStreaming FileStreamSource batch generation faster · 7cbe2164
      Burak Yavuz authored
      ## What changes were proposed in this pull request?
      
      While getting the batch for a `FileStreamSource` in Structured Streaming, we know exactly which files we must read. We have already verified that they exist and have committed them to a metadata log. However, when creating the FileSourceRelation for an incremental execution, the code checks the existence of every single file once again!
      
      When you have 100,000s of files in a folder, creating the first batch can take 2+ hours when working with S3! This PR disables that check.
      
      ## How was this patch tested?
      
      Added a unit test to `FileStreamSource`.
      
      Author: Burak Yavuz <brkyvz@gmail.com>
      
      Closes #15122 from brkyvz/SPARK-17569.
      7cbe2164
    • jerryshao's avatar
      [SPARK-17512][CORE] Avoid formatting to python path for yarn and mesos cluster mode · 8c3ee2bc
      jerryshao authored
      ## What changes were proposed in this pull request?
      
      YARN and Mesos cluster modes support remote Python paths (HDFS/S3 schemes) through their own mechanisms, so it is not necessary to check and format the Python path when running in these modes. This is a potential regression compared to 1.6, so this PR proposes to fix it.
      
      ## How was this patch tested?
      
      Unit test to verify SparkSubmit arguments, plus local cluster verification. Because of the lack of `MiniDFSCluster` support in Spark unit tests, no integration test was added.
      
      Author: jerryshao <sshao@hortonworks.com>
      
      Closes #15137 from jerryshao/SPARK-17512.
      8c3ee2bc
    • Imran Rashid's avatar
      [SPARK-17623][CORE] Clarify type of TaskEndReason with a failed task. · 9fcf1c51
      Imran Rashid authored
      ## What changes were proposed in this pull request?
      
      In TaskResultGetter, enqueueFailedTask currently deserializes the result
      as a TaskEndReason. But the type is actually more specific: it's a
      TaskFailedReason. This just leads to more blind casting later on; it
      would be clearer if the msg were cast to the right type immediately,
      so method parameter types could be tightened.
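
      A simplified, hypothetical sketch of the idea (the types and method below are illustrative, not Spark's exact internals): deserialize to the more specific type up front so downstream signatures can be tightened.
      ```scala
      sealed trait TaskEndReason
      trait TaskFailedReason extends TaskEndReason { def toErrorString: String }
      case class ExceptionFailure(msg: String) extends TaskFailedReason {
        def toErrorString: String = msg
      }

      // Tightened parameter type: callers no longer need a blind cast later on.
      def handleFailedTask(reason: TaskFailedReason): Unit =
        println(s"task failed: ${reason.toErrorString}")
      ```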
      
      ## How was this patch tested?
      
      Existing unit tests via jenkins.  Note that the code was already performing a blind-cast to a TaskFailedReason before in any case, just in a different spot, so there shouldn't be any behavior change.
      
      Author: Imran Rashid <irashid@cloudera.com>
      
      Closes #15181 from squito/SPARK-17623.
      9fcf1c51
    • Marcelo Vanzin's avatar
      [SPARK-4563][CORE] Allow driver to advertise a different network address. · 2cd1bfa4
      Marcelo Vanzin authored
      The goal of this feature is to allow the Spark driver to run in an
      isolated environment, such as a docker container, and be able to use
      the host's port forwarding mechanism to be able to accept connections
      from the outside world.
      
      The change is restricted to the driver: there is no support for achieving
      the same thing on executors (or the YARN AM for that matter). Those still
      need full access to the outside world so that, for example, connections
      can be made to an executor's block manager.
      
      The core of the change is simple: add a new configuration that specifies
      the address the driver should bind to, which can be different from the
      address it advertises to executors (spark.driver.host). Everything else is
      plumbing the new configuration where it's needed.
      
      To use the feature, the host starting the container needs to set up the
      driver's port range to fall into a range that is being forwarded; this
      required a special block manager port configuration just for the driver,
      which falls back to the existing spark.blockManager.port when not set.
      This way, users can modify the driver settings without affecting the
      executors; it would theoretically be nice to also have different retry
      counts for the driver and executors, but given that docker (at least)
      allows forwarding port ranges, we can probably live without that for now.
      
      Because of the nature of the feature it's kinda hard to add unit tests;
      I just added a simple one to make sure the configuration works.
      
      This was tested with a docker image running spark-shell with the following
      command:
      
       docker blah blah blah \
         -p 38000-38100:38000-38100 \
         [image] \
         spark-shell \
           --num-executors 3 \
           --conf spark.shuffle.service.enabled=false \
           --conf spark.dynamicAllocation.enabled=false \
           --conf spark.driver.host=[host's address] \
           --conf spark.driver.port=38000 \
           --conf spark.driver.blockManager.port=38020 \
           --conf spark.ui.port=38040
      
      Running on YARN; verified the driver works, executors start up and listen
      on ephemeral ports (instead of using the driver's config), and that caching
      and shuffling (without the shuffle service) works. Clicked through the UI
      to make sure all pages (including executor thread dumps) worked. Also tested
      apps without docker, and ran unit tests.
      
      Author: Marcelo Vanzin <vanzin@cloudera.com>
      
      Closes #15120 from vanzin/SPARK-4563.
      2cd1bfa4
    • Sean Owen's avatar
      [SPARK-11918][ML] Better error from WLS for cases like singular input · b4a4421b
      Sean Owen authored
      ## What changes were proposed in this pull request?
      
      Update error handling for Cholesky decomposition to provide a little more info when input is singular.
      
      ## How was this patch tested?
      
      New test case; jenkins tests.
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #15177 from srowen/SPARK-11918.
      b4a4421b
    • Josh Rosen's avatar
      [SPARK-17418] Prevent kinesis-asl-assembly artifacts from being published · d7ee1221
      Josh Rosen authored
      This patch updates the `kinesis-asl-assembly` build to prevent that module from being published as part of Maven releases and snapshot builds.
      
      The `kinesis-asl-assembly` includes classes from the Kinesis Client Library (KCL) and Kinesis Producer Library (KPL), both of which are licensed under the Amazon Software License and are therefore prohibited from being distributed in Apache releases.
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #15167 from JoshRosen/stop-publishing-kinesis-assembly.
      d7ee1221
    • Liang-Chi Hsieh's avatar
      [SPARK-17590][SQL] Analyze CTE definitions at once and allow CTE subquery to define CTE · 248922fd
      Liang-Chi Hsieh authored
      ## What changes were proposed in this pull request?
      
      We substitute CTE definitions into the logical plan in the analyzer rule CTESubstitution. A CTE definition can be used in the logical plan multiple times, and its analyzed logical plan should be the same. We should not analyze CTE definitions multiple times when they are reused in the query.
      
      By analyzing CTE definitions before substitution, we can support defining CTE in subquery.
      
      ## How was this patch tested?
      
      Jenkins tests.
      
      Author: Liang-Chi Hsieh <simonh@tw.ibm.com>
      
      Closes #15146 from viirya/cte-analysis-once.
      248922fd
    • erenavsarogullari's avatar
      [CORE][MINOR] Add minor code change to TaskState and Task · dd7561d3
      erenavsarogullari authored
      ## What changes were proposed in this pull request?
      - TaskState and ExecutorState expose isFailed and isFinished functions. It can be useful to add test coverage for the different states. Currently, other enums do not expose any functions, so this PR targets just these two enums.
      - A `private` access modifier is added for the finished task states set.
      - A minor doc change is added.
      
      ## How was this patch tested?
      New Unit tests are added and run locally.
      
      Author: erenavsarogullari <erenavsarogullari@gmail.com>
      
      Closes #15143 from erenavsarogullari/SPARK-17584.
      dd7561d3
    • hyukjinkwon's avatar
      [SPARK-17583][SQL] Remove useless rowSeparator variable and set auto-expanding... · 25a020be
      hyukjinkwon authored
      [SPARK-17583][SQL] Remove useless rowSeparator variable and set auto-expanding buffer as default for maxCharsPerColumn option in CSV
      
      ## What changes were proposed in this pull request?
      
      This PR includes the changes below:
      
      1. Upgrade Univocity library from 2.1.1 to 2.2.1
      
        This includes some performance improvements and also enables an auto-expanding buffer for the `maxCharsPerColumn` option in CSV. Please refer to the [release notes](https://github.com/uniVocity/univocity-parsers/releases).
      
      2. Remove useless `rowSeparator` variable existing in `CSVOptions`
      
        We have this unused variable in [CSVOptions.scala#L127](https://github.com/apache/spark/blob/29952ed096fd2a0a19079933ff691671d6f00835/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVOptions.scala#L127), but it seems to cause confusion because it does not actually handle `\r\n`. For example, we have an open issue about this, [SPARK-17227](https://issues.apache.org/jira/browse/SPARK-17227), describing this variable.
      
        This variable is effectively unused because we rely on `LineRecordReader` in Hadoop, which only handles `\n` and `\r\n`.
      
      3. Set the default value of `maxCharsPerColumn` to auto-expanding.

        We are currently setting 1000000 as the maximum length of each column. It'd be more sensible to allow an auto-expanding buffer rather than a fixed length by default (see the sketch below).

        To be sure, the use of `-1` for this is described in the release notes for [2.2.0](https://github.com/uniVocity/univocity-parsers/releases/tag/v2.2.0).
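
      A hedged usage sketch (the file path is illustrative) of how a caller would still cap the column length explicitly through the CSV reader option, now that the default buffer auto-expands (equivalent to `-1`):
      ```scala
      import org.apache.spark.sql.SparkSession

      val spark = SparkSession.builder().master("local[*]").appName("csvDemo").getOrCreate()

      // Only needed to impose a limit yourself; otherwise the buffer now grows as required.
      val df = spark.read
        .option("header", "true")
        .option("maxCharsPerColumn", "100000")
        .csv("/tmp/data.csv")
      ```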
      
      ## How was this patch tested?
      
      N/A
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #15138 from HyukjinKwon/SPARK-17583.
      25a020be
    • VinceShieh's avatar
      [SPARK-17219][ML] Add NaN value handling in Bucketizer · 57dc326b
      VinceShieh authored
      ## What changes were proposed in this pull request?
      This PR fixes an issue when Bucketizer is called to handle a dataset containing NaN values.
      Sometimes, such values might also be useful to users, so in these cases Bucketizer should
      reserve one extra bucket for NaN values, instead of throwing an exception.
      Before:
      ```
      Bucketizer.transform on NaN value threw an illegal exception.
      ```
      After:
      ```
      NaN values will be grouped in an extra bucket.
      ```
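
      A hedged usage sketch (the column name and splits are made up) of running Bucketizer on data that contains NaN; per this change, the NaN rows are grouped into one extra bucket instead of failing the job:
      ```scala
      import org.apache.spark.ml.feature.Bucketizer
      import org.apache.spark.sql.SparkSession

      val spark = SparkSession.builder().master("local[*]").appName("bucketizerNaN").getOrCreate()
      val df = spark.createDataFrame(Seq(0.1, 0.6, Double.NaN).map(Tuple1.apply)).toDF("value")

      val bucketizer = new Bucketizer()
        .setInputCol("value")
        .setOutputCol("bucket")
        .setSplits(Array(Double.NegativeInfinity, 0.5, Double.PositiveInfinity))

      // Previously the NaN row threw an exception; now it lands in the extra bucket.
      bucketizer.transform(df).show()
      ```
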
      ## How was this patch tested?
      New test cases added in `BucketizerSuite`.
      Signed-off-by: VinceShieh <vincent.xie@intel.com>
      
      Author: VinceShieh <vincent.xie@intel.com>
      
      Closes #14858 from VinceShieh/spark-17219.
      57dc326b
    • Peng, Meng's avatar
      [SPARK-17017][MLLIB][ML] add a chiSquare Selector based on False Positive Rate (FPR) test · b366f184
      Peng, Meng authored
      ## What changes were proposed in this pull request?
      
      Univariate feature selection works by selecting the best features based on univariate statistical tests. False Positive Rate (FPR) is a popular criterion for univariate feature selection. This PR adds a chi-square selector based on the FPR test, similar to the one implemented in scikit-learn.
      http://scikit-learn.org/stable/modules/feature_selection.html#univariate-feature-selection
      
      ## How was this patch tested?
      
      Added Scala unit tests.
      
      Author: Peng, Meng <peng.meng@intel.com>
      
      Closes #14597 from mpjlu/fprChiSquare.
      b366f184
    • Burak Yavuz's avatar
      [SPARK-17599] Prevent ListingFileCatalog from failing if path doesn't exist · 28fafa3e
      Burak Yavuz authored
      ## What changes were proposed in this pull request?
      
      The `ListingFileCatalog` lists files given a set of resolved paths. If a folder is deleted at any time between when the paths are resolved and when the file catalog checks for the folder, the Spark job fails. This may, for example, abruptly stop long-running Structured Streaming jobs.
      
      Folders may be deleted by users or automatically by retention policies. These cases should not prevent jobs from successfully completing.
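
      A hedged, simplified sketch of the defensive-listing idea (not Spark's actual `ListingFileCatalog` code): a path that disappears between resolution and listing is treated as empty instead of failing the job.
      ```scala
      import java.io.FileNotFoundException

      import org.apache.hadoop.conf.Configuration
      import org.apache.hadoop.fs.{FileStatus, Path}

      def listLeafFiles(paths: Seq[Path], conf: Configuration): Seq[FileStatus] =
        paths.flatMap { path =>
          try {
            path.getFileSystem(conf).listStatus(path).toSeq
          } catch {
            // The folder was deleted (e.g. by a retention policy) after the path was resolved.
            case _: FileNotFoundException => Nil
          }
        }
      ```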
      
      ## How was this patch tested?
      
      Unit test in `FileCatalogSuite`
      
      Author: Burak Yavuz <brkyvz@gmail.com>
      
      Closes #15153 from brkyvz/SPARK-17599.
      28fafa3e
    • Sean Zhong's avatar
      [SPARK-17617][SQL] Remainder(%) expression.eval returns incorrect result on double value · 3977223a
      Sean Zhong authored
      ## What changes were proposed in this pull request?
      
      The Remainder(%) expression's `eval()` returns an incorrect result when the dividend is a big double. The reason is that Remainder converts the double dividend to decimal to do "%", and that loses precision.
      
      This bug only affects `eval()`, which is used by constant folding; the codegen path is not impacted.
      
      ### Before change
      ```
      scala> -5083676433652386516D % 10
      res2: Double = -6.0
      
      scala> spark.sql("select -5083676433652386516D % 10 as a").show
      +---+
      |  a|
      +---+
      |0.0|
      +---+
      ```
      
      ### After change
      ```
      scala> spark.sql("select -5083676433652386516D % 10 as a").show
      +----+
      |   a|
      +----+
      |-6.0|
      +----+
      ```
      
      ## How was this patch tested?
      
      Unit test.
      
      Author: Sean Zhong <seanzhong@databricks.com>
      
      Closes #15171 from clockfly/SPARK-17617.
      3977223a
    • William Benton's avatar
      [SPARK-17595][MLLIB] Use a bounded priority queue to find synonyms in Word2VecModel · 7654385f
      William Benton authored
      ## What changes were proposed in this pull request?
      
      The code in `Word2VecModel.findSynonyms` to choose the vocabulary elements with the highest similarity to the query vector currently sorts the collection of similarities for every vocabulary element. This involves making multiple copies of the collection of similarities while doing a (relatively) expensive sort. It would be more efficient to find the best matches by maintaining a bounded priority queue and populating it with a single pass over the vocabulary, and that is exactly what this patch does.
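
      A minimal standalone sketch of the bounded-priority-queue idea (not Spark's `BoundedPriorityQueue` itself): keep at most `num` candidates in a heap keyed by similarity, so one pass over the vocabulary suffices instead of sorting every similarity.
      ```scala
      import scala.collection.mutable

      def topBySimilarity(candidates: Iterator[(String, Double)], num: Int): Seq[(String, Double)] = {
        // Reversed ordering makes the queue's head the *weakest* of the current best `num`.
        val queue = mutable.PriorityQueue.empty[(String, Double)](Ordering.by[(String, Double), Double](_._2).reverse)
        candidates.foreach { candidate =>
          if (queue.size < num) queue.enqueue(candidate)
          else if (candidate._2 > queue.head._2) { queue.dequeue(); queue.enqueue(candidate) }
        }
        queue.toList.sortBy(_._2).reverse   // only `num` elements remain, so this sort is cheap
      }
      ```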
      
      ## How was this patch tested?
      
      This patch adds no user-visible functionality and its correctness should be exercised by existing tests.  To ensure that this approach is actually faster, I made a microbenchmark for `findSynonyms`:
      
      ```
      object W2VTiming {
        import org.apache.spark.{SparkContext, SparkConf}
        import org.apache.spark.mllib.feature.Word2VecModel
        def run(modelPath: String, scOpt: Option[SparkContext] = None) {
          val sc = scOpt.getOrElse(new SparkContext(new SparkConf(true).setMaster("local[*]").setAppName("test")))
          val model = Word2VecModel.load(sc, modelPath)
          val keys = model.getVectors.keys
          val start = System.currentTimeMillis
          for(key <- keys) {
            model.findSynonyms(key, 5)
            model.findSynonyms(key, 10)
            model.findSynonyms(key, 25)
            model.findSynonyms(key, 50)
          }
          val finish = System.currentTimeMillis
          println("run completed in " + (finish - start) + "ms")
        }
      }
      ```
      
      I ran this test on a model generated from the complete works of Jane Austen and found that the new approach was over 3x faster than the old approach.  (If the `num` argument to `findSynonyms` is very close to the vocabulary size, the new approach will have less of an advantage over the old one.)
      
      Author: William Benton <willb@redhat.com>
      
      Closes #15150 from willb/SPARK-17595.
      7654385f
    • Yanbo Liang's avatar
      [SPARK-17585][PYSPARK][CORE] PySpark SparkContext.addFile supports adding files recursively · d3b88697
      Yanbo Liang authored
      ## What changes were proposed in this pull request?
      Users would sometimes like to add a directory as a dependency; in Scala, they can use ```SparkContext.addFile``` with the argument ```recursive=true``` to recursively add all files under the directory. But Python users can only add files, not directories, so we should support this as well.
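
      For reference, a short sketch of the existing Scala behavior being ported (the directory path is illustrative):
      ```scala
      import org.apache.spark.{SparkConf, SparkContext}

      val sc = new SparkContext(new SparkConf().setMaster("local[*]").setAppName("addDirDemo"))
      // recursive = true ships every file under the directory to the executors;
      // this PR exposes the same capability through pyspark's SparkContext.addFile.
      sc.addFile("/tmp/shared-deps", recursive = true)
      ```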
      
      ## How was this patch tested?
      Unit test.
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #15140 from yanboliang/spark-17585.
      d3b88697
    • wm624@hotmail.com's avatar
      [CORE][DOC] Fix errors in comments · 61876a42
      wm624@hotmail.com authored
      ## What changes were proposed in this pull request?
      While reading the source code of CORE and SQL core, I found some minor errors in comments, such as extra spaces, missing blank lines, and grammar errors.
      
      I fixed these minor errors and might find more during my source code study.
      
      ## How was this patch tested?
      Manually build
      
      Author: wm624@hotmail.com <wm624@hotmail.com>
      
      Closes #15151 from wangmiao1981/mem.
      61876a42
    • jerryshao's avatar
      [SPARK-15698][SQL][STREAMING][FOLLW-UP] Fix FileStream source and sink log get configuration issue · e48ebc4e
      jerryshao authored
      ## What changes were proposed in this pull request?
      
      This issue was introduced in the previous commit for SPARK-15698, which mistakenly changed the way the configuration is obtained. This follow-up PR reverts it back to the original.
      
      ## How was this patch tested?
      
      N/A
      
      Ping zsxwing, please review again; sorry for the inconvenience. Thanks a lot.
      
      Author: jerryshao <sshao@hortonworks.com>
      
      Closes #15173 from jerryshao/SPARK-15698-follow.
      e48ebc4e
  3. Sep 20, 2016
    • Weiqing Yang's avatar
      [MINOR][BUILD] Fix CheckStyle Error · 1ea49916
      Weiqing Yang authored
      ## What changes were proposed in this pull request?
      This PR fixes code style errors before the 2.0.1 release.
      
      ## How was this patch tested?
      Manual.
      
      Before:
      ```
      ./dev/lint-java
      Using `mvn` from path: /usr/local/bin/mvn
      Checkstyle checks failed at following occurrences:
      [ERROR] src/main/java/org/apache/spark/network/client/TransportClient.java:[153] (sizes) LineLength: Line is longer than 100 characters (found 107).
      [ERROR] src/main/java/org/apache/spark/network/client/TransportClient.java:[196] (sizes) LineLength: Line is longer than 100 characters (found 108).
      [ERROR] src/main/java/org/apache/spark/network/client/TransportClient.java:[239] (sizes) LineLength: Line is longer than 100 characters (found 115).
      [ERROR] src/main/java/org/apache/spark/network/server/TransportRequestHandler.java:[119] (sizes) LineLength: Line is longer than 100 characters (found 107).
      [ERROR] src/main/java/org/apache/spark/network/server/TransportRequestHandler.java:[129] (sizes) LineLength: Line is longer than 100 characters (found 104).
      [ERROR] src/main/java/org/apache/spark/network/util/LevelDBProvider.java:[124,11] (modifier) ModifierOrder: 'static' modifier out of order with the JLS suggestions.
      [ERROR] src/main/java/org/apache/spark/network/util/TransportConf.java:[26] (regexp) RegexpSingleline: No trailing whitespace allowed.
      [ERROR] src/main/java/org/apache/spark/util/collection/unsafe/sort/PrefixComparators.java:[33] (sizes) LineLength: Line is longer than 100 characters (found 110).
      [ERROR] src/main/java/org/apache/spark/util/collection/unsafe/sort/PrefixComparators.java:[38] (sizes) LineLength: Line is longer than 100 characters (found 110).
      [ERROR] src/main/java/org/apache/spark/util/collection/unsafe/sort/PrefixComparators.java:[43] (sizes) LineLength: Line is longer than 100 characters (found 106).
      [ERROR] src/main/java/org/apache/spark/util/collection/unsafe/sort/PrefixComparators.java:[48] (sizes) LineLength: Line is longer than 100 characters (found 110).
      [ERROR] src/main/java/org/apache/spark/util/collection/unsafe/sort/UnsafeInMemorySorter.java:[0] (misc) NewlineAtEndOfFile: File does not end with a newline.
      [ERROR] src/main/java/org/apache/spark/util/collection/unsafe/sort/UnsafeSorterSpillReader.java:[67] (sizes) LineLength: Line is longer than 100 characters (found 106).
      [ERROR] src/main/java/org/apache/spark/network/yarn/YarnShuffleService.java:[200] (regexp) RegexpSingleline: No trailing whitespace allowed.
      [ERROR] src/main/java/org/apache/spark/network/yarn/YarnShuffleService.java:[309] (regexp) RegexpSingleline: No trailing whitespace allowed.
      [ERROR] src/main/java/org/apache/spark/network/yarn/YarnShuffleService.java:[332] (regexp) RegexpSingleline: No trailing whitespace allowed.
      [ERROR] src/main/java/org/apache/spark/network/yarn/YarnShuffleService.java:[348] (regexp) RegexpSingleline: No trailing whitespace allowed.
       ```
      After:
      ```
      ./dev/lint-java
      Using `mvn` from path: /usr/local/bin/mvn
      Checkstyle checks passed.
      ```
      
      Author: Weiqing Yang <yangweiqing001@gmail.com>
      
      Closes #15170 from Sherry302/fixjavastyle.
      1ea49916
    • petermaxlee's avatar
      [SPARK-17513][SQL] Make StreamExecution garbage-collect its metadata · 976f3b12
      petermaxlee authored
      ## What changes were proposed in this pull request?
      This PR modifies StreamExecution such that it discards metadata for batches that have already been fully processed. I used the purge method that was added as part of SPARK-17235.
      
      This is a resubmission of #15126, which was based on work by frreiss in #15067, but fixed the test case along with some typos.
      
      ## How was this patch tested?
      A new test case in StreamingQuerySuite. The test case would fail without the changes in this pull request.
      
      Author: petermaxlee <petermaxlee@gmail.com>
      
      Closes #15166 from petermaxlee/SPARK-17513-2.
      976f3b12
    • Marcelo Vanzin's avatar
      [SPARK-17611][YARN][TEST] Make shuffle service test really test auth. · 7e418e99
      Marcelo Vanzin authored
      Currently, the code is just swallowing exceptions, and not really checking
      whether the auth information was being recorded properly. Fix both problems,
      and also avoid tests inadvertently affecting other tests by modifying the
      shared config variable (by making it not shared).
      
      Author: Marcelo Vanzin <vanzin@cloudera.com>
      
      Closes #15161 from vanzin/SPARK-17611.
      7e418e99
    • Yin Huai's avatar
      [SPARK-17549][SQL] Revert "[] Only collect table size stat in driver for cached relation." · 9ac68dbc
      Yin Huai authored
      This reverts commit 39e2bad6 because of the problem mentioned at https://issues.apache.org/jira/browse/SPARK-17549?focusedCommentId=15505060&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15505060
      
      Author: Yin Huai <yhuai@databricks.com>
      
      Closes #15157 from yhuai/revert-SPARK-17549.
      9ac68dbc
    • jerryshao's avatar
      [SPARK-15698][SQL][STREAMING] Add the ability to remove the old MetadataLog in FileStreamSource · a6aade00
      jerryshao authored
      ## What changes were proposed in this pull request?
      
      Currently, the `metadataLog` in `FileStreamSource` adds a checkpoint file for each batch but has no ability to remove/compact them, which leads to a large number of small files when running for a long time. So here we propose to compact the old logs into one file. This method is quite similar to `FileStreamSinkLog` but simpler.
      
      ## How was this patch tested?
      
      Unit test added.
      
      Author: jerryshao <sshao@hortonworks.com>
      
      Closes #13513 from jerryshao/SPARK-15698.
      a6aade00
    • Wenchen Fan's avatar
      [SPARK-17051][SQL] we should use hadoopConf in InsertIntoHiveTable · eb004c66
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      Hive confs in hive-site.xml will be loaded in `hadoopConf`, so we should use `hadoopConf` in `InsertIntoHiveTable` instead of `SessionState.conf`
      
      ## How was this patch tested?
      
      N/A
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #14634 from cloud-fan/bug.
      eb004c66
    • gatorsmile's avatar
      [SPARK-17502][SQL] Fix Multiple Bugs in DDL Statements on Temporary Views · d5ec5dbb
      gatorsmile authored
      ### What changes were proposed in this pull request?
      - When the permanent tables/views do not exist but the temporary view exists, the expected error should be `NoSuchTableException` for partition-related ALTER TABLE commands. However, it always reports a confusing error message. For example,
      ```
      Partition spec is invalid. The spec (a, b) must match the partition spec () defined in table '`testview`';
      ```
      - When the permanent tables/views do not exist but the temporary view exists, the expected error should be `NoSuchTableException` for `ALTER TABLE ... UNSET TBLPROPERTIES`. However, it reports a missing table property. For example,
      ```
      Attempted to unset non-existent property 'p' in table '`testView`';
      ```
      - When `ANALYZE TABLE` is called on a view or a temporary view, we should issue an error message. However, it reports a strange error:
      ```
      ANALYZE TABLE is not supported for Project
      ```
      
      - When inserting into a temporary view that is generated from `Range`, we will get the following error message:
      ```
      assertion failed: No plan for 'InsertIntoTable Range (0, 10, step=1, splits=Some(1)), false, false
      +- Project [1 AS 1#20]
         +- OneRowRelation$
      ```
      
      This PR is to fix the above four issues.
      
      ### How was this patch tested?
      Added multiple test cases
      
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #15054 from gatorsmile/tempViewDDL.
      d5ec5dbb
    • Adrian Petrescu's avatar
      [SPARK-17437] Add uiWebUrl to JavaSparkContext and pyspark.SparkContext · 4a426ff8
      Adrian Petrescu authored
      ## What changes were proposed in this pull request?
      
      The Scala version of `SparkContext` has a handy field called `uiWebUrl` that tells you which URL the SparkUI spawned by that instance lives at. This is often very useful because the value for `spark.ui.port` in the config is only a suggestion; if that port number is taken by another Spark instance on the same machine, Spark will just keep incrementing the port until it finds a free one. So, on a machine with a lot of running PySpark instances, you often have to start trying all of them one-by-one until you find your application name.
      
      Scala users have a way around this with `uiWebUrl` but Java and Python users do not. This pull request fixes this in the most straightforward way possible, simply propagating this field through the `JavaSparkContext` and into pyspark through the Java gateway.
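
      For reference, the existing Scala usage that this PR mirrors for Java and PySpark (the master and app name are illustrative):
      ```scala
      import org.apache.spark.{SparkConf, SparkContext}

      val sc = new SparkContext(new SparkConf().setMaster("local[*]").setAppName("uiDemo"))
      // Option[String] holding whatever URL the UI actually ended up on, e.g. http://10.0.0.5:4041
      println(sc.uiWebUrl.getOrElse("UI disabled"))
      ```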
      
      Please let me know if any additional documentation/testing is needed.
      
      ## How was this patch tested?
      
      Existing tests were run to make sure there were no regressions, and a binary distribution was created and tested manually for the correct value of `sc.uiWebUrl` in a variety of circumstances.
      
      Author: Adrian Petrescu <apetresc@gmail.com>
      
      Closes #15000 from apetresc/pyspark-uiweburl.
      4a426ff8
    • Wenchen Fan's avatar
      f039d964
    • petermaxlee's avatar
      [SPARK-17513][SQL] Make StreamExecution garbage-collect its metadata · be9d57fc
      petermaxlee authored
      ## What changes were proposed in this pull request?
      This PR modifies StreamExecution such that it discards metadata for batches that have already been fully processed. I used the purge method that was added as part of SPARK-17235.
      
      This is based on work by frreiss in #15067, but fixed the test case along with some typos.
      
      ## How was this patch tested?
      A new test case in StreamingQuerySuite. The test case would fail without the changes in this pull request.
      
      Author: petermaxlee <petermaxlee@gmail.com>
      Author: frreiss <frreiss@us.ibm.com>
      
      Closes #15126 from petermaxlee/SPARK-17513.
      be9d57fc
  4. Sep 19, 2016
    • sethah's avatar
      [SPARK-17163][ML] Unified LogisticRegression interface · 26145a5a
      sethah authored
      ## What changes were proposed in this pull request?
      
      Merge `MultinomialLogisticRegression` into `LogisticRegression` and remove `MultinomialLogisticRegression`.
      
      Marked as WIP because we should discuss the coefficients API in the model. See discussion below.
      
      JIRA: [SPARK-17163](https://issues.apache.org/jira/browse/SPARK-17163)
      
      ## How was this patch tested?
      
      Merged test suites and added some new unit tests.
      
      ## Design
      
      ### Switching between binomial and multinomial
      
      We default to automatically detecting whether we should run binomial or multinomial lor. We expose a new parameter called `family` which defaults to auto. When "auto" is used, we run normal binomial lor with pivoting if there are 1 or 2 label classes. Otherwise, we run multinomial. If the user explicitly sets the family, then we abide by that setting. In the case where "binomial" is set but multiclass lor is detected, we throw an error.
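
      For example, a minimal sketch of setting the new parameter (assuming the merged API; other columns and params use their defaults):
      ```scala
      import org.apache.spark.ml.classification.LogisticRegression

      // "auto" (the default) picks binomial vs. multinomial from the number of label classes;
      // setting the family explicitly forces it and errors out if the data doesn't match.
      val lr = new LogisticRegression()
        .setFamily("multinomial")
        .setMaxIter(50)
        .setRegParam(0.01)
      ```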
      
      ### coefficients/intercept model API (TODO)
      
      This is the biggest design point remaining, IMO. We need to decide how to store the coefficients and intercepts in the model, and in turn how to expose them via the API. Two important points:
      
      * We must maintain compatibility with the old API, i.e. we must expose `def coefficients: Vector` and `def intercept: Double`
      * There are two separate cases: binomial lr where we have a single set of coefficients and a single intercept and multinomial lr where we have `numClasses` sets of coefficients and `numClasses` intercepts.
      
      Some options:
      
      1. **Store the binomial coefficients as a `2 x numFeatures` matrix.** This means that we would center the model coefficients before storing them in the model. The BLOR algorithm gives `1 * numFeatures` coefficients, but we would convert them to `2 x numFeatures` coefficients before storing them, effectively doubling the storage in the model. This has the advantage that we can make the code cleaner (i.e. less `if (isMultinomial) ... else ...`) and we don't have to reason about the different cases as much. It has the disadvantage that we double the storage space and we could see small regressions at prediction time since there are 2x the number of operations in the prediction algorithms. Additionally, we still have to produce the uncentered coefficients/intercept via the API, so we will have to either ALSO store the uncentered version, or compute it in `def coefficients: Vector` every time.
      
      2. **Store the binomial coefficients as a `1 x numFeatures` matrix.** We still store the coefficients as a matrix and the intercepts as a vector. When users call `coefficients` we return them a `Vector` that is backed by the same underlying array as the `coefficientMatrix`, so we don't duplicate any data. At prediction time, we use the old prediction methods that are specialized for binary LOR. The benefits here are that we don't store extra data, and we won't see any regressions in performance. The cost of this is that we have separate implementations for predict methods in the binary vs multiclass case. The duplicated code is really not very high, but it's still a bit messy.
      
      If we do decide to store the 2x coefficients, we would likely want to see some performance tests to understand the potential regressions.
      
      **Update:** We have chosen option 2
      
      ### Threshold/thresholds (TODO)
      
      Currently, when `threshold` is set we clear whatever value is in `thresholds` and when `thresholds` is set we clear whatever value is in `threshold`. [SPARK-11543](https://issues.apache.org/jira/browse/SPARK-11543) was created to prefer thresholds over threshold. We should decide if we should implement this behavior now or if we want to do it in a separate JIRA.
      
      **Update:** Let's leave it for a follow up PR
      
      ## Follow up
      
      * Summary model for multiclass logistic regression [SPARK-17139](https://issues.apache.org/jira/browse/SPARK-17139)
      * Thresholds vs threshold [SPARK-11543](https://issues.apache.org/jira/browse/SPARK-11543)
      
      Author: sethah <seth.hendrickson16@gmail.com>
      
      Closes #14834 from sethah/SPARK-17163.
      26145a5a
    • Josh Rosen's avatar
      [SPARK-17160] Properly escape field names in code-generated error messages · e719b1c0
      Josh Rosen authored
      This patch addresses a corner-case escaping bug where field names which contain special characters were unsafely interpolated into error message string literals in generated Java code, leading to compilation errors.
      
      This patch addresses these issues by using `addReferenceObj` to store the error messages as string fields rather than inline string constants.
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #15156 from JoshRosen/SPARK-17160.
      e719b1c0
    • Davies Liu's avatar
      [SPARK-17100] [SQL] fix Python udf in filter on top of outer join · d8104158
      Davies Liu authored
      ## What changes were proposed in this pull request?
      
      In the optimizer, we try to evaluate the condition to see whether it's nullable or not, but some expressions are not evaluable; we should check that before evaluating them.
      
      ## How was this patch tested?
      
      Added regression tests.
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #15103 from davies/udf_join.
      d8104158
    • Davies Liu's avatar
      [SPARK-16439] [SQL] bring back the separator in SQL UI · e0632062
      Davies Liu authored
      ## What changes were proposed in this pull request?
      
      Currently, the SQL metrics look like `number of rows: 111111111111`, and it's very hard to tell how large the number is. A separator was added by #12425 but removed by #14142 because it looked odd in some locales (for example, pl_PL). This PR adds it back, but always uses "," as the separator, since the SQL UI is all in English.
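
      A minimal sketch of the locale-pinned grouping described above (not necessarily the exact Spark code):
      ```scala
      import java.text.NumberFormat
      import java.util.Locale

      // Pinning the locale guarantees "," as the grouping separator regardless of the JVM default.
      val formatter = NumberFormat.getIntegerInstance(Locale.US)
      formatter.format(111111111111L)   // "111,111,111,111"
      ```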
      
      ## How was this patch tested?
      
      Existing tests.
      ![metrics](https://cloud.githubusercontent.com/assets/40902/14573908/21ad2f00-030d-11e6-9e2c-c544f30039ea.png)
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #15106 from davies/metric_sep.
      e0632062
    • Shixiong Zhu's avatar
      [SPARK-17438][WEBUI] Show Application.executorLimit in the application page · 80d66559
      Shixiong Zhu authored
      ## What changes were proposed in this pull request?
      
      This PR adds `Application.executorLimit` to the application page.
      
      ## How was this patch tested?
      
      Checked the UI manually.
      
      Screenshots:
      
      1. Dynamic allocation is disabled
      
      <img width="484" alt="screen shot 2016-09-07 at 4 21 49 pm" src="https://cloud.githubusercontent.com/assets/1000778/18332029/210056ea-7518-11e6-9f52-76d96046c1c0.png">
      
      2. Dynamic allocation is enabled.
      
      <img width="466" alt="screen shot 2016-09-07 at 4 25 30 pm" src="https://cloud.githubusercontent.com/assets/1000778/18332034/2c07700a-7518-11e6-8fce-aebe25014902.png">
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #15001 from zsxwing/fix-core-info.
      80d66559