  1. Jun 23, 2017
    • Takeshi Yamamuro's avatar
      [SPARK-21144][SQL] Print a warning if the data schema and partition schema... · f3dea607
      Takeshi Yamamuro authored
      [SPARK-21144][SQL] Print a warning if the data schema and partition schema have duplicate columns
      
      ## What changes were proposed in this pull request?
      The current master outputs unexpected results when the data schema and partition schema have duplicate columns:
      ```
      withTempPath { dir =>
        val basePath = dir.getCanonicalPath
        spark.range(0, 3).toDF("foo").write.parquet(new Path(basePath, "foo=1").toString)
        spark.range(0, 3).toDF("foo").write.parquet(new Path(basePath, "foo=a").toString)
        spark.read.parquet(basePath).show()
      }
      
      +---+
      |foo|
      +---+
      |  1|
      |  1|
      |  a|
      |  a|
      |  1|
      |  a|
      +---+
      ```
      This patch adds code to print a warning when such duplication is found.
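
      For illustration, a toy sketch (not the actual Spark code) of the kind of duplicate-column check described above, assuming case-insensitive column matching:

      ```python
      # Toy sketch of the check described above (not Spark's implementation):
      # warn when the data schema and the partition schema share column names.
      def warn_on_duplicate_columns(data_columns, partition_columns):
          overlap = {c.lower() for c in data_columns} & {c.lower() for c in partition_columns}
          if overlap:
              print("WARN: data schema and partition schema have duplicate columns: %s"
                    % ", ".join(sorted(overlap)))

      warn_on_duplicate_columns(["foo", "bar"], ["foo"])  # warns about 'foo'
      ```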
      
      ## How was this patch tested?
      Manually checked.
      
      Author: Takeshi Yamamuro <yamamuro@apache.org>
      
      Closes #18375 from maropu/SPARK-21144-3.
      f3dea607
    • hyukjinkwon's avatar
      [SPARK-21193][PYTHON] Specify Pandas version in setup.py · 5dca10b8
      hyukjinkwon authored
      ## What changes were proposed in this pull request?
      
      It looks like we missed specifying the minimum Pandas version. Given my tests, the current minimum should be Pandas 0.13.0, so this PR proposes to specify it as 0.13.0.
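
      As a rough illustration of the packaging side, a hedged sketch of declaring a minimum Pandas version as an optional dependency via `extras_require` in a `setup.py`; the package and extra names here are made up and may not match Spark's actual setup.py:

      ```python
      # Hypothetical setup.py sketch: declare Pandas >= 0.13.0 as an optional extra.
      from setuptools import setup

      setup(
          name="example-pyspark-packaging",   # hypothetical package name
          version="0.0.1",
          extras_require={
              "sql": ["pandas>=0.13.0"],      # minimum version established by the tests below
          },
      )
      ```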
      
      Running the codes below:
      
      ```python
      from pyspark.sql.types import *
      
      schema = StructType().add("a", IntegerType()).add("b", StringType())\
                           .add("c", BooleanType()).add("d", FloatType())
      data = [
          (1, "foo", True, 3.0,), (2, "foo", True, 5.0),
          (3, "bar", False, -1.0), (4, "bar", False, 6.0),
      ]
      spark.createDataFrame(data, schema).toPandas().dtypes
      ```
      
      prints ...
      
      **With Pandas 0.13.0** - released, 2014-01
      
      ```
      a      int32
      b     object
      c       bool
      d    float32
      dtype: object
      ```
      
      **With Pandas 0.12.0** - released, 2013-06
      
      ```
      Traceback (most recent call last):
        File "<stdin>", line 1, in <module>
        File ".../spark/python/pyspark/sql/dataframe.py", line 1734, in toPandas
          pdf[f] = pdf[f].astype(t, copy=False)
      TypeError: astype() got an unexpected keyword argument 'copy'
      ```
      
      without `copy`
      
      ```
      a      int32
      b     object
      c       bool
      d    float32
      dtype: object
      ```
      
      **With Pandas 0.11.0** - released, 2013-03
      
      ```
      Traceback (most recent call last):
        File "<stdin>", line 1, in <module>
        File ".../spark/python/pyspark/sql/dataframe.py", line 1734, in toPandas
          pdf[f] = pdf[f].astype(t, copy=False)
      TypeError: astype() got an unexpected keyword argument 'copy'
      ```
      
      without `copy`
      
      ```
      a      int32
      b     object
      c       bool
      d    float32
      dtype: object
      ```
      
      **With Pandas 0.10.0** - released, 2012-12
      
      ```
      Traceback (most recent call last):
        File "<stdin>", line 1, in <module>
        File ".../spark/python/pyspark/sql/dataframe.py", line 1734, in toPandas
          pdf[f] = pdf[f].astype(t, copy=False)
      TypeError: astype() got an unexpected keyword argument 'copy'
      ```
      
      without `copy`
      
      ```
      a      int64  # <- this should be 'int32'
      b     object
      c       bool
      d    float64  # <- this should be 'float32'
      ```
      
      ## How was this patch tested?
      
      Manually tested with Pandas from 0.10.0 to 0.13.0.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #18403 from HyukjinKwon/SPARK-21193.
      5dca10b8
    • 10129659's avatar
      [SPARK-21115][CORE] If the cores left is less than the coresPerExecutor,the... · acd208ee
      10129659 authored
      [SPARK-21115][CORE] If the cores left is less than coresPerExecutor, the cores left will not be allocated, so it should not be checked in every schedule
      
      ## What changes were proposed in this pull request?
      If we start an app with the param --total-executor-cores=4 and spark.executor.cores=3, the cores left is always 1, so it will try to allocate executors in the function org.apache.spark.deploy.master.startExecutorsOnWorkers on every schedule cycle.
      A separate question is whether it would be better to allocate another executor with 1 core for the cores left.
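
      A toy sketch of the scheduling condition discussed above (not Spark's code, just the arithmetic):

      ```python
      # Skip the allocation attempt when the leftover cores can't fit one more executor.
      def should_try_allocate(cores_left, cores_per_executor):
          return cores_left >= cores_per_executor

      print(should_try_allocate(cores_left=1, cores_per_executor=3))  # False: the 1 leftover core is never allocated
      ```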
      
      ## How was this patch tested?
      unit test
      
      Author: 10129659 <chen.yanshan@zte.com.cn>
      
      Closes #18322 from eatoncys/leftcores.
      acd208ee
    • jinxing's avatar
      [SPARK-21047] Add test suites for complicated cases in ColumnarBatchSuite · 153dd49b
      jinxing authored
      ## What changes were proposed in this pull request?
      The current ColumnarBatchSuite has very simple test cases for `Array` and `Struct`. This PR adds test cases for more complicated cases in ColumnVector.
      
      Author: jinxing <jinxing6042@126.com>
      
      Closes #18327 from jinxing64/SPARK-21047.
      153dd49b
    • Tathagata Das's avatar
      [SPARK-21145][SS] Added StateStoreProviderId with queryRunId to reload... · fe24634d
      Tathagata Das authored
      [SPARK-21145][SS] Added StateStoreProviderId with queryRunId to reload StateStoreProviders when query is restarted
      
      ## What changes were proposed in this pull request?
      StateStoreProvider instances are loaded on-demand in an executor when a query is started. When a query is restarted, the loaded provider instance will get reused. Now, there is a non-trivial chance that a task of the previous query run is still running while the tasks of the restarted run have started. So for a stateful partition, there may be two concurrent tasks related to the same stateful partition, and therefore using the same provider instance. This can lead to inconsistent results and possibly random failures, as state store implementations are not designed to be thread-safe.
      
      To fix this, I have introduced a `StateStoreProviderId` that uniquely identifies a provider loaded in an executor. It has the query run id in it, thus making sure that restarted queries will force the executor to load a new provider instance, avoiding two concurrent tasks (from two different runs) reusing the same provider instance.
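
      As a toy illustration of the idea (not Spark's implementation), keying the loaded-provider cache by the query run id means a restarted query can never pick up the instance loaded by the previous run; the names below are made up:

      ```python
      # Toy sketch: a provider cache keyed by an id that includes the query run id.
      from collections import namedtuple

      StateStoreProviderId = namedtuple(
          "StateStoreProviderId", ["operator_id", "partition_id", "store_name", "query_run_id"])

      _loaded_providers = {}

      def get_or_load_provider(provider_id, load):
          if provider_id not in _loaded_providers:
              # A restarted query carries a new run id, so it misses the cache entry
              # created by the previous run and loads a fresh provider instance.
              _loaded_providers[provider_id] = load()
          return _loaded_providers[provider_id]
      ```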
      
      Additional minor bug fixes
      - All state stores related to a query run are marked as deactivated in the `StateStoreCoordinator` so that the executors can unload them and clear resources.
      - Moved the code that determines the checkpoint directory of a state store from implementation-specific code (`HDFSBackedStateStoreProvider`) to non-specific code (`StateStoreId`), so that implementations do not accidentally get it wrong.
        - Also added the store name to the path, to support multiple stores per SQL operator partition.
      
      *Note:* This change does not address the scenario where two tasks of the same run (e.g. speculative tasks) are concurrently running in the same executor. The chance of this is very small, because ideally speculative tasks should never run in the same executor.
      
      ## How was this patch tested?
      Existing unit tests + new unit test.
      
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      
      Closes #18355 from tdas/SPARK-21145.
      fe24634d
  2. Jun 22, 2017
    • Wang Gengliang's avatar
      [SPARK-21174][SQL] Validate sampling fraction in logical operator level · b8a743b6
      Wang Gengliang authored
      ## What changes were proposed in this pull request?
      
      Currently the validation of the sampling fraction in Dataset is incomplete.
      As an improvement, validate the sampling fraction at the logical operator level (sketched below):
      1) if with replacement: fraction should be nonnegative
      2) else: fraction should be on interval [0, 1]
      Also add test cases for the validation.
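
      A minimal sketch of the validation rules above (not the actual Spark code):

      ```python
      # Validate a sampling fraction following the two rules listed above.
      def validate_fraction(fraction, with_replacement):
          if with_replacement:
              if fraction < 0:
                  raise ValueError("Sampling fraction must be nonnegative, got %s" % fraction)
          elif not 0.0 <= fraction <= 1.0:
              raise ValueError("Sampling fraction without replacement must be in [0, 1], got %s" % fraction)
      ```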
      
      ## How was this patch tested?
      integration tests
      
      gatorsmile cloud-fan
      Please review http://spark.apache.org/contributing.html before opening a pull request.
      
      Author: Wang Gengliang <ltnwgl@gmail.com>
      
      Closes #18387 from gengliangwang/sample_ratio_validate.
      b8a743b6
    • Thomas Graves's avatar
      [SPARK-20923] turn tracking of TaskMetrics._updatedBlockStatuses off · 5b5a69be
      Thomas Graves authored
      ## What changes were proposed in this pull request?
      Turn tracking of TaskMetrics._updatedBlockStatuses off by default. As far as I can see it's not used by anything, and it uses a lot of memory when caching and processing a lot of blocks. In my case it was taking 5GB of a 10GB heap, and even when I went up to a 50GB heap the job still ran out of memory. With this change in place the same job easily runs in less than 10GB of heap.
      
      We leave the API there, as well as a config to turn it back on, just in case anyone is using it. TaskMetrics is exposed via SparkListenerTaskEnd, so if users are relying on it they can turn it back on.
      
      ## How was this patch tested?
      
      Ran unit tests that were modified and manually tested on a couple of jobs (with and without caching). Clicked through the UI and didn't see anything missing.
      Ran my very large Hive query job with 200,000 small tasks and 1000 executors, caching 6+TB of data; this runs fine now, whereas without this change it would go into full GCs and eventually die.
      
      Author: Thomas Graves <tgraves@thirteenroutine.corp.gq1.yahoo.com>
      Author: Tom Graves <tgraves@yahoo-inc.com>
      
      Closes #18162 from tgravescs/SPARK-20923.
      5b5a69be
    • Bryan Cutler's avatar
      [SPARK-13534][PYSPARK] Using Apache Arrow to increase performance of DataFrame.toPandas · e4469760
      Bryan Cutler authored
      ## What changes were proposed in this pull request?
      Integrate Apache Arrow with Spark to increase performance of `DataFrame.toPandas`. This has been done by using Arrow to convert data partitions on the executor JVM to Arrow payload byte arrays, which are then served to the Python process. The Python side can then collect the Arrow payloads, where they are combined and converted to a Pandas DataFrame. All non-complex data types are currently supported; otherwise an `UnsupportedOperation` exception is thrown.
      
      Additions to Spark include a Scala package-private method `Dataset.toArrowPayloadBytes` that converts data partitions in the executor JVM to `ArrowPayload`s as byte arrays so they can be easily served, and a package-private class/object `ArrowConverters` that provides data type mappings and conversion routines. In Python, a public method `DataFrame.collectAsArrow` is added to collect Arrow payloads, along with an optional flag in `toPandas(useArrow=False)` to enable using Arrow (the old conversion is used by default).
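
      For orientation, a usage sketch based on the flag described in this PR text; the final public API may differ, and a running `spark` session with pyarrow installed is assumed:

      ```python
      # Sketch based on the flag described above; the final public API may differ.
      df = spark.range(1 << 20).selectExpr("id", "cast(id as double) as val")

      pdf_default = df.toPandas()             # existing row-based conversion (default)
      pdf_arrow = df.toPandas(useArrow=True)  # Arrow-based conversion proposed here

      print(len(pdf_default), len(pdf_arrow))  # both paths should return the same rows
      ```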
      
      ## How was this patch tested?
      Added a new test suite `ArrowConvertersSuite` that runs tests on conversion of Datasets to Arrow payloads for supported types. The suite generates a Dataset and matching Arrow JSON data, then the Dataset is converted to an Arrow payload and finally validated against the JSON data. This ensures that the schema and data have been converted correctly.
      
      Added PySpark tests to verify that the `toPandas` method produces equal DataFrames with and without pyarrow, and a roundtrip test to ensure the Pandas DataFrame produced by PySpark is equal to one made directly with Pandas.
      
      Author: Bryan Cutler <cutlerb@gmail.com>
      Author: Li Jin <ice.xelloss@gmail.com>
      Author: Li Jin <li.jin@twosigma.com>
      Author: Wes McKinney <wes.mckinney@twosigma.com>
      
      Closes #15821 from BryanCutler/wip-toPandas_with_arrow-SPARK-13534.
      e4469760
    • jinxing's avatar
      [SPARK-19937] Collect metrics for remote bytes read to disk during shuffle. · 58434acd
      jinxing authored
      In the current code (https://github.com/apache/spark/pull/16989), big blocks are shuffled to disk.
      This PR proposes to collect metrics for remote bytes fetched to disk.
      
      Author: jinxing <jinxing6042@126.com>
      
      Closes #18249 from jinxing64/SPARK-19937.
      58434acd
    • Lubo Zhang's avatar
      [SPARK-20599][SS] ConsoleSink should work with (batch) · e55a105a
      Lubo Zhang authored
      ## What changes were proposed in this pull request?
      
      Currently, if we read a batch and want to display it on the console sink, it leads to a runtime exception.
      
      Changes:
      
      - In this PR, we add a match rule to check whether the sink provider is a ConsoleSinkProvider, and display the Dataset when the console format is used.
      
      ## How was this patch tested?
      
      `spark.read.schema().json(path).write.format("console").save`
      
      Author: Lubo Zhang <lubo.zhang@intel.com>
      Author: lubozhan <lubo.zhang@intel.com>
      
      Closes #18347 from lubozhan/dev.
      e55a105a
    • actuaryzhang's avatar
      [SPARK-20889][SPARKR] Grouped documentation for DATETIME column methods · 19331b8e
      actuaryzhang authored
      ## What changes were proposed in this pull request?
      Grouped documentation for datetime column methods.
      
      Author: actuaryzhang <actuaryzhang10@gmail.com>
      
      Closes #18114 from actuaryzhang/sparkRDocDate.
      19331b8e
    • Xingbo Jiang's avatar
      [SPARK-20832][CORE] Standalone master should explicitly inform drivers of... · 2dadea95
      Xingbo Jiang authored
      [SPARK-20832][CORE] Standalone master should explicitly inform drivers of worker deaths and invalidate external shuffle service outputs
      
      ## What changes were proposed in this pull request?
      
      In standalone mode, master should explicitly inform each active driver of any worker deaths, so the invalid external shuffle service outputs on the lost host would be removed from the shuffle mapStatus, thus we can avoid future `FetchFailure`s.
      
      ## How was this patch tested?
      Manually tested by the following steps:
      1. Start a standalone Spark cluster with one driver node and two worker nodes;
      2. Run a Job with ShuffleMapStage, ensure the outputs distribute on each worker;
      3. Run another Job to make all executors exit, but the workers are all alive;
      4. Kill one of the workers;
      5. Run rdd.collect(), before this change, we should see `FetchFailure`s and failed Stages, while after the change, the job should complete without failure.
      
      Before the change:
      ![image](https://user-images.githubusercontent.com/4784782/27335366-c251c3d6-55fe-11e7-99dd-d1fdcb429210.png)
      
      After the change:
      ![image](https://user-images.githubusercontent.com/4784782/27335393-d1c71640-55fe-11e7-89ed-bd760f1f39af.png)
      
      Author: Xingbo Jiang <xingbo.jiang@databricks.com>
      
      Closes #18362 from jiangxb1987/removeWorker.
      2dadea95
    • actuaryzhang's avatar
      [SQL][DOC] Fix documentation of lpad · 97b307c8
      actuaryzhang authored
      ## What changes were proposed in this pull request?
      Fix incomplete documentation for `lpad`.
      
      Author: actuaryzhang <actuaryzhang10@gmail.com>
      
      Closes #18367 from actuaryzhang/SQLDoc.
      97b307c8
    • hyukjinkwon's avatar
      [SPARK-21163][SQL] DataFrame.toPandas should respect the data type · 67c75021
      hyukjinkwon authored
      ## What changes were proposed in this pull request?
      
      Currently we convert a Spark DataFrame to a Pandas DataFrame via `pd.DataFrame.from_records`. It infers the data types from the data and doesn't respect the Spark DataFrame schema. This PR fixes it.
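
      A minimal check sketch, assuming a running `spark` session: after this change, the Pandas dtypes should follow the Spark schema (e.g. IntegerType becomes int32) rather than being re-inferred from the data:

      ```python
      from pyspark.sql.types import StructType, IntegerType, FloatType

      schema = StructType().add("a", IntegerType()).add("d", FloatType())
      pdf = spark.createDataFrame([(1, 3.0), (2, 5.0)], schema).toPandas()
      print(pdf.dtypes)  # expected after this change: a -> int32, d -> float32
      ```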
      
      ## How was this patch tested?
      
      a new regression test
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      Author: Wenchen Fan <wenchen@databricks.com>
      Author: Wenchen Fan <cloud0fan@gmail.com>
      
      Closes #18378 from cloud-fan/to_pandas.
      67c75021
    • Shixiong Zhu's avatar
      [SPARK-21167][SS] Decode the path generated by File sink to handle special characters · d66b143e
      Shixiong Zhu authored
      ## What changes were proposed in this pull request?
      
      Decode the path generated by File sink to handle special characters.
      
      ## How was this patch tested?
      
      The added unit test.
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #18381 from zsxwing/SPARK-21167.
      d66b143e
  3. Jun 21, 2017
    • wangmiao1981's avatar
      [SPARK-20906][SPARKR] Constrained Logistic Regression for SparkR · 53543374
      wangmiao1981 authored
      ## What changes were proposed in this pull request?
      
      PR https://github.com/apache/spark/pull/17715 added Constrained Logistic Regression for ML. We should add it to SparkR.
      
      ## How was this patch tested?
      
      Add new unit tests.
      
      Author: wangmiao1981 <wm624@hotmail.com>
      
      Closes #18128 from wangmiao1981/test.
      53543374
    • zero323's avatar
      [SPARK-20830][PYSPARK][SQL] Add posexplode and posexplode_outer · 215281d8
      zero323 authored
      ## What changes were proposed in this pull request?
      
      Add Python wrappers for `o.a.s.sql.functions.explode_outer` and `o.a.s.sql.functions.posexplode_outer`.
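
      A brief usage sketch of the new `posexplode_outer` wrapper; the data here is illustrative:

      ```python
      from pyspark.sql import SparkSession
      from pyspark.sql.functions import posexplode_outer
      from pyspark.sql.types import ArrayType, IntegerType, StructField, StructType

      spark = SparkSession.builder.getOrCreate()
      schema = StructType([
          StructField("id", IntegerType()),
          StructField("xs", ArrayType(IntegerType())),
      ])
      df = spark.createDataFrame([(1, [10, 20]), (2, None)], schema)
      # posexplode_outer keeps rows whose array is null or empty, emitting null pos/col.
      df.select("id", posexplode_outer("xs")).show()
      ```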
      
      ## How was this patch tested?
      
      Unit tests, doctests.
      
      Author: zero323 <zero323@users.noreply.github.com>
      
      Closes #18049 from zero323/SPARK-20830.
      215281d8
    • sjarvie's avatar
      [SPARK-21125][PYTHON] Extend setJobDescription to PySpark and JavaSpark APIs · ba78514d
      sjarvie authored
      ## What changes were proposed in this pull request?
      
      Extend setJobDescription to PySpark and JavaSpark APIs
      
      SPARK-21125
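
      A brief usage sketch of the new PySpark API, assuming an existing SparkContext `sc`:

      ```python
      sc.setJobDescription("Nightly aggregation job")  # appears as the job description in the web UI
      total = sc.parallelize(range(100)).sum()
      ```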
      
      ## How was this patch tested?
      
      Testing was done by running a local Spark shell and checking the built UI. I originally added a unit test, but the PySpark context cannot easily access the Scala SparkContext's private variable holding the job description key, so I omitted the test due to the simplicity of this addition.
      
      Also ran the existing tests.
      
      # Misc
      
      This contribution is my original work and I license the work to the project under the project's open source license.
      
      Author: sjarvie <sjarvie@uber.com>
      
      Closes #18332 from sjarvie/add_python_set_job_description.
      ba78514d
    • hyukjinkwon's avatar
      [SPARK-21147][SS] Throws an analysis exception when a user-specified schema is... · 7a00c658
      hyukjinkwon authored
      [SPARK-21147][SS] Throws an analysis exception when a user-specified schema is given in socket/rate sources
      
      ## What changes were proposed in this pull request?
      
      This PR proposes to throw an exception if a schema is provided by the user to the socket or rate source, as below:
      
      **socket source**
      
      ```scala
      import org.apache.spark.sql.types._
      
      val userSpecifiedSchema = StructType(
        StructField("name", StringType) ::
        StructField("area", StringType) :: Nil)
      val df = spark.readStream.format("socket").option("host", "localhost").option("port", 9999).schema(userSpecifiedSchema).load
      df.printSchema
      ```
      
      Before
      
      ```
      root
       |-- value: string (nullable = true)
      ```
      
      After
      
      ```
      org.apache.spark.sql.AnalysisException: The socket source does not support a user-specified schema.;
        at org.apache.spark.sql.execution.streaming.TextSocketSourceProvider.sourceSchema(socket.scala:199)
        at org.apache.spark.sql.execution.datasources.DataSource.sourceSchema(DataSource.scala:192)
        at org.apache.spark.sql.execution.datasources.DataSource.sourceInfo$lzycompute(DataSource.scala:87)
        at org.apache.spark.sql.execution.datasources.DataSource.sourceInfo(DataSource.scala:87)
        at org.apache.spark.sql.execution.streaming.StreamingRelation$.apply(StreamingRelation.scala:30)
        at org.apache.spark.sql.streaming.DataStreamReader.load(DataStreamReader.scala:150)
        ... 50 elided
      ```
      
      **rate source**
      
      ```scala
      spark.readStream.format("rate").schema(spark.range(1).schema).load().printSchema()
      ```
      
      Before
      
      ```
      root
       |-- timestamp: timestamp (nullable = true)
        |-- value: long (nullable = true)
      ```
      
      After
      
      ```
      org.apache.spark.sql.AnalysisException: The rate source does not support a user-specified schema.;
        at org.apache.spark.sql.execution.streaming.RateSourceProvider.sourceSchema(RateSourceProvider.scala:57)
        at org.apache.spark.sql.execution.datasources.DataSource.sourceSchema(DataSource.scala:192)
        at org.apache.spark.sql.execution.datasources.DataSource.sourceInfo$lzycompute(DataSource.scala:87)
        at org.apache.spark.sql.execution.datasources.DataSource.sourceInfo(DataSource.scala:87)
        at org.apache.spark.sql.execution.streaming.StreamingRelation$.apply(StreamingRelation.scala:30)
        at org.apache.spark.sql.streaming.DataStreamReader.load(DataStreamReader.scala:150)
        ... 48 elided
      ```
      
      ## How was this patch tested?
      
      Unit test in `TextSocketStreamSuite` and `RateSourceSuite`.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #18365 from HyukjinKwon/SPARK-21147.
      7a00c658
    • actuaryzhang's avatar
      [SPARK-20917][ML][SPARKR] SparkR supports string encoding consistent with R · ad459cfb
      actuaryzhang authored
      ## What changes were proposed in this pull request?
      
      Add `stringIndexerOrderType` to `spark.glm` and `spark.survreg` to support string encoding that is consistent with default R.
      
      ## How was this patch tested?
      new tests
      
      Author: actuaryzhang <actuaryzhang10@gmail.com>
      
      Closes #18140 from actuaryzhang/sparkRFormula.
      ad459cfb
    • Xingbo Jiang's avatar
      [SPARK-17851][SQL][TESTS] Make sure all test sqls in catalyst pass checkAnalysis · cad88f17
      Xingbo Jiang authored
      ## What changes were proposed in this pull request?
      
      Currently several tens of test SQL queries in catalyst fail at `SimpleAnalyzer.checkAnalysis`; we should make sure they are valid.
      
      This PR makes the following changes:
      1. Apply `checkAnalysis` on plans that test `Optimizer` rules, but don't require the test cases for `Parser`/`Analyzer` to pass `checkAnalysis`;
      2. Fix test cases for `Optimizer` that would have failed.
      ## How was this patch tested?
      
      Apply `SimpleAnalyzer.checkAnalysis` on plans in `PlanTest.comparePlans`, update invalid test cases.
      
      Author: Xingbo Jiang <xingbo.jiang@databricks.com>
      Author: jiangxingbo <jiangxb1987@gmail.com>
      
      Closes #15417 from jiangxb1987/cptest.
      cad88f17
    • Marcos P's avatar
      [MINOR][DOC] modified issue link and updated status · e92befcb
      Marcos P authored
      ## What changes were proposed in this pull request?
      
      This PR aims to clarify some outdated comments that I found in the **spark-catalyst** and **spark-sql** POM files. The Maven bug is still happening, and in order to track it I have updated the issue link and the status of the issue.
      
      Author: Marcos P <mpenate@stratio.com>
      
      Closes #18374 from mpenate/fix/mng-3559-comment.
      e92befcb
    • Yuming Wang's avatar
      [MINOR][DOCS] Add lost <tr> tag for configuration.md · 987eb8fa
      Yuming Wang authored
      ## What changes were proposed in this pull request?
      
      Add lost `<tr>` tag for `configuration.md`.
      
      ## How was this patch tested?
      N/A
      
      Author: Yuming Wang <wgyumg@gmail.com>
      
      Closes #18372 from wangyum/docs-missing-tr.
      987eb8fa
    • Li Yichao's avatar
      [SPARK-20640][CORE] Make rpc timeout and retry for shuffle registration configurable. · d107b3b9
      Li Yichao authored
      ## What changes were proposed in this pull request?
      
      Currently the shuffle service registration timeout and retry count are hardcoded. This works well for small workloads, but under heavy workload, when the shuffle service is busy transferring large amounts of data, we see significant delay in responding to the registration request; as a result we often see the executors fail to register with the shuffle service, eventually failing the job. We need to make these two parameters configurable.
      
      ## How was this patch tested?
      
      * Updated `BlockManagerSuite` to test that the registration timeout and max attempts configuration actually works.
      
      cc sitalkedia
      
      Author: Li Yichao <lyc@zhihu.com>
      
      Closes #18092 from liyichao/SPARK-20640.
      d107b3b9
    • sureshthalamati's avatar
      [SPARK-10655][SQL] Adding additional data type mappings to jdbc DB2dialect. · 9ce714dc
      sureshthalamati authored
      This patch adds DB2-specific data type mappings for decfloat, real, xml, and timestamp with time zone (a DB2Z-specific type) on read, and for byte and short data types on write, to the JDBC data source DB2 dialect. The default mapping does not work for these types when reading from/writing to a DB2 database.
      
      Added a Docker test and a JDBC unit test case.
      
      Author: sureshthalamati <suresh.thalamati@gmail.com>
      
      Closes #9162 from sureshthalamati/db2dialect_enhancements-spark-10655.
      9ce714dc
  4. Jun 20, 2017
    • Reynold Xin's avatar
      [SPARK-21103][SQL] QueryPlanConstraints should be part of LogicalPlan · b6b10882
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      QueryPlanConstraints should be part of LogicalPlan, rather than QueryPlan, since the constraint framework is only used for query plan rewriting and not for physical planning.
      
      ## How was this patch tested?
      Should be covered by existing tests, since it is a simple refactoring.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #18310 from rxin/SPARK-21103.
      b6b10882
    • Wenchen Fan's avatar
      [SPARK-21150][SQL] Persistent view stored in Hive metastore should be case preserving · e862dc90
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      This is a regression in Spark 2.2. In Spark 2.2, we introduced a new way to resolve persisted views: https://issues.apache.org/jira/browse/SPARK-18209 , but this makes persisted views non-case-preserving because we store the schema in the Hive metastore directly. We should follow data source tables and store the schema in table properties.
      
      ## How was this patch tested?
      
      new regression test
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #18360 from cloud-fan/view.
      e862dc90
    • Xingbo Jiang's avatar
      [SPARK-20989][CORE] Fail to start multiple workers on one host if external... · ef162289
      Xingbo Jiang authored
      [SPARK-20989][CORE] Fail to start multiple workers on one host if external shuffle service is enabled in standalone mode
      
      ## What changes were proposed in this pull request?
      
      In standalone mode, if we enable the external shuffle service by setting `spark.shuffle.service.enabled` to true, and then try to start multiple workers on one host (by setting `SPARK_WORKER_INSTANCES=3` in spark-env.sh and then running `sbin/start-slaves.sh`), only one worker launches successfully on each host and the rest of the workers fail to launch.
      The reason is that the port of the external shuffle service is configured by `spark.shuffle.service.port`, so currently we can start no more than one external shuffle service on each host. In our case, each worker tries to start an external shuffle service, and only one of them succeeds in doing this.
      
      We should give an explicit reason for the failure instead of failing silently.
      
      ## How was this patch tested?
      Manually tested by the following steps:
      1. SET `SPARK_WORKER_INSTANCES=1` in `conf/spark-env.sh`;
      2. SET `spark.shuffle.service.enabled` to `true` in `conf/spark-defaults.conf`;
      3. Run `sbin/start-all.sh`.
      
      Before the change, you will see no error on the command line, as follows:
      ```
      starting org.apache.spark.deploy.master.Master, logging to /Users/xxx/workspace/spark/logs/spark-xxx-org.apache.spark.deploy.master.Master-1-xxx.local.out
      localhost: starting org.apache.spark.deploy.worker.Worker, logging to /Users/xxx/workspace/spark/logs/spark-xxx-org.apache.spark.deploy.worker.Worker-1-xxx.local.out
      localhost: starting org.apache.spark.deploy.worker.Worker, logging to /Users/xxx/workspace/spark/logs/spark-xxx-org.apache.spark.deploy.worker.Worker-2-xxx.local.out
      localhost: starting org.apache.spark.deploy.worker.Worker, logging to /Users/xxx/workspace/spark/logs/spark-xxx-org.apache.spark.deploy.worker.Worker-3-xxx.local.out
      ```
      And you can see in the webUI that only one worker is running.
      
      After the change, you get explicit error messages on the command line:
      ```
      starting org.apache.spark.deploy.master.Master, logging to /Users/xxx/workspace/spark/logs/spark-xxx-org.apache.spark.deploy.master.Master-1-xxx.local.out
      localhost: starting org.apache.spark.deploy.worker.Worker, logging to /Users/xxx/workspace/spark/logs/spark-xxx-org.apache.spark.deploy.worker.Worker-1-xxx.local.out
      localhost: failed to launch: nice -n 0 /Users/xxx/workspace/spark/bin/spark-class org.apache.spark.deploy.worker.Worker --webui-port 8081 spark://xxx.local:7077
      localhost:   17/06/13 23:24:53 INFO SecurityManager: Changing view acls to: xxx
      localhost:   17/06/13 23:24:53 INFO SecurityManager: Changing modify acls to: xxx
      localhost:   17/06/13 23:24:53 INFO SecurityManager: Changing view acls groups to:
      localhost:   17/06/13 23:24:53 INFO SecurityManager: Changing modify acls groups to:
      localhost:   17/06/13 23:24:53 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(xxx); groups with view permissions: Set(); users  with modify permissions: Set(xxx); groups with modify permissions: Set()
      localhost:   17/06/13 23:24:54 INFO Utils: Successfully started service 'sparkWorker' on port 63354.
      localhost:   Exception in thread "main" java.lang.IllegalArgumentException: requirement failed: Start multiple worker on one host failed because we may launch no more than one external shuffle service on each host, please set spark.shuffle.service.enabled to false or set SPARK_WORKER_INSTANCES to 1 to resolve the conflict.
      localhost:   	at scala.Predef$.require(Predef.scala:224)
      localhost:   	at org.apache.spark.deploy.worker.Worker$.main(Worker.scala:752)
      localhost:   	at org.apache.spark.deploy.worker.Worker.main(Worker.scala)
      localhost: full log in /Users/xxx/workspace/spark/logs/spark-xxx-org.apache.spark.deploy.worker.Worker-1-xxx.local.out
      localhost: starting org.apache.spark.deploy.worker.Worker, logging to /Users/xxx/workspace/spark/logs/spark-xxx-org.apache.spark.deploy.worker.Worker-2-xxx.local.out
      localhost: failed to launch: nice -n 0 /Users/xxx/workspace/spark/bin/spark-class org.apache.spark.deploy.worker.Worker --webui-port 8082 spark://xxx.local:7077
      localhost:   17/06/13 23:24:56 INFO SecurityManager: Changing view acls to: xxx
      localhost:   17/06/13 23:24:56 INFO SecurityManager: Changing modify acls to: xxx
      localhost:   17/06/13 23:24:56 INFO SecurityManager: Changing view acls groups to:
      localhost:   17/06/13 23:24:56 INFO SecurityManager: Changing modify acls groups to:
      localhost:   17/06/13 23:24:56 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(xxx); groups with view permissions: Set(); users  with modify permissions: Set(xxx); groups with modify permissions: Set()
      localhost:   17/06/13 23:24:56 INFO Utils: Successfully started service 'sparkWorker' on port 63359.
      localhost:   Exception in thread "main" java.lang.IllegalArgumentException: requirement failed: Start multiple worker on one host failed because we may launch no more than one external shuffle service on each host, please set spark.shuffle.service.enabled to false or set SPARK_WORKER_INSTANCES to 1 to resolve the conflict.
      localhost:   	at scala.Predef$.require(Predef.scala:224)
      localhost:   	at org.apache.spark.deploy.worker.Worker$.main(Worker.scala:752)
      localhost:   	at org.apache.spark.deploy.worker.Worker.main(Worker.scala)
      localhost: full log in /Users/xxx/workspace/spark/logs/spark-xxx-org.apache.spark.deploy.worker.Worker-2-xxx.local.out
      localhost: starting org.apache.spark.deploy.worker.Worker, logging to /Users/xxx/workspace/spark/logs/spark-xxx-org.apache.spark.deploy.worker.Worker-3-xxx.local.out
      localhost: failed to launch: nice -n 0 /Users/xxx/workspace/spark/bin/spark-class org.apache.spark.deploy.worker.Worker --webui-port 8083 spark://xxx.local:7077
      localhost:   17/06/13 23:24:59 INFO SecurityManager: Changing view acls to: xxx
      localhost:   17/06/13 23:24:59 INFO SecurityManager: Changing modify acls to: xxx
      localhost:   17/06/13 23:24:59 INFO SecurityManager: Changing view acls groups to:
      localhost:   17/06/13 23:24:59 INFO SecurityManager: Changing modify acls groups to:
      localhost:   17/06/13 23:24:59 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(xxx); groups with view permissions: Set(); users  with modify permissions: Set(xxx); groups with modify permissions: Set()
      localhost:   17/06/13 23:24:59 INFO Utils: Successfully started service 'sparkWorker' on port 63360.
      localhost:   Exception in thread "main" java.lang.IllegalArgumentException: requirement failed: Start multiple worker on one host failed because we may launch no more than one external shuffle service on each host, please set spark.shuffle.service.enabled to false or set SPARK_WORKER_INSTANCES to 1 to resolve the conflict.
      localhost:   	at scala.Predef$.require(Predef.scala:224)
      localhost:   	at org.apache.spark.deploy.worker.Worker$.main(Worker.scala:752)
      localhost:   	at org.apache.spark.deploy.worker.Worker.main(Worker.scala)
      localhost: full log in /Users/xxx/workspace/spark/logs/spark-xxx-org.apache.spark.deploy.worker.Worker-3-xxx.local.out
      ```
      
      Author: Xingbo Jiang <xingbo.jiang@databricks.com>
      
      Closes #18290 from jiangxb1987/start-slave.
      ef162289
    • Joseph K. Bradley's avatar
      [SPARK-20929][ML] LinearSVC should use its own threshold param · cc67bd57
      Joseph K. Bradley authored
      ## What changes were proposed in this pull request?
      
      LinearSVC should use its own threshold param, rather than the shared one, since it applies to rawPrediction instead of probability.  This PR changes the param in the Scala, Python and R APIs.
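
      A brief PySpark usage sketch of the change described above; parameter values are illustrative and a running Spark session is assumed:

      ```python
      from pyspark.ml.classification import LinearSVC
      from pyspark.sql import SparkSession

      spark = SparkSession.builder.getOrCreate()
      # LinearSVC's own `threshold` is applied to rawPrediction and can be any double,
      # unlike the shared probability threshold, which is restricted to [0, 1].
      svc = LinearSVC(maxIter=10, regParam=0.01, threshold=-0.5)
      print(svc.getThreshold())
      ```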
      
      ## How was this patch tested?
      
      New unit test to make sure the threshold can be set to any Double value.
      
      Author: Joseph K. Bradley <joseph@databricks.com>
      
      Closes #18151 from jkbradley/ml-2.2-linearsvc-cleanup.
      cc67bd57
  5. Jun 19, 2017
    • actuaryzhang's avatar
      [SPARK-20889][SPARKR] Grouped documentation for AGGREGATE column methods · 8965fe76
      actuaryzhang authored
      ## What changes were proposed in this pull request?
      Grouped documentation for the aggregate functions for Column.
      
      Author: actuaryzhang <actuaryzhang10@gmail.com>
      
      Closes #18025 from actuaryzhang/sparkRDoc4.
      8965fe76
    • Yuming Wang's avatar
      [SPARK-21133][CORE] Fix HighlyCompressedMapStatus#writeExternal throws NPE · 9b57cd8d
      Yuming Wang authored
      ## What changes were proposed in this pull request?
      
      Fix HighlyCompressedMapStatus#writeExternal NPE:
      ```
      17/06/18 15:00:27 ERROR Utils: Exception encountered
      java.lang.NullPointerException
              at org.apache.spark.scheduler.HighlyCompressedMapStatus$$anonfun$writeExternal$2.apply$mcV$sp(MapStatus.scala:171)
              at org.apache.spark.scheduler.HighlyCompressedMapStatus$$anonfun$writeExternal$2.apply(MapStatus.scala:167)
              at org.apache.spark.scheduler.HighlyCompressedMapStatus$$anonfun$writeExternal$2.apply(MapStatus.scala:167)
              at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1303)
              at org.apache.spark.scheduler.HighlyCompressedMapStatus.writeExternal(MapStatus.scala:167)
              at java.io.ObjectOutputStream.writeExternalData(ObjectOutputStream.java:1459)
              at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1430)
              at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
              at java.io.ObjectOutputStream.writeArray(ObjectOutputStream.java:1378)
              at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1174)
              at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
              at org.apache.spark.MapOutputTracker$$anonfun$serializeMapStatuses$1.apply$mcV$sp(MapOutputTracker.scala:617)
              at org.apache.spark.MapOutputTracker$$anonfun$serializeMapStatuses$1.apply(MapOutputTracker.scala:616)
              at org.apache.spark.MapOutputTracker$$anonfun$serializeMapStatuses$1.apply(MapOutputTracker.scala:616)
              at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1337)
              at org.apache.spark.MapOutputTracker$.serializeMapStatuses(MapOutputTracker.scala:619)
              at org.apache.spark.MapOutputTrackerMaster.getSerializedMapOutputStatuses(MapOutputTracker.scala:562)
              at org.apache.spark.MapOutputTrackerMaster$MessageLoop.run(MapOutputTracker.scala:351)
              at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
              at java.lang.Thread.run(Thread.java:745)
      17/06/18 15:00:27 ERROR MapOutputTrackerMaster: java.lang.NullPointerException
      java.io.IOException: java.lang.NullPointerException
              at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1310)
              at org.apache.spark.scheduler.HighlyCompressedMapStatus.writeExternal(MapStatus.scala:167)
              at java.io.ObjectOutputStream.writeExternalData(ObjectOutputStream.java:1459)
              at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1430)
              at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
              at java.io.ObjectOutputStream.writeArray(ObjectOutputStream.java:1378)
              at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1174)
              at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
              at org.apache.spark.MapOutputTracker$$anonfun$serializeMapStatuses$1.apply$mcV$sp(MapOutputTracker.scala:617)
              at org.apache.spark.MapOutputTracker$$anonfun$serializeMapStatuses$1.apply(MapOutputTracker.scala:616)
              at org.apache.spark.MapOutputTracker$$anonfun$serializeMapStatuses$1.apply(MapOutputTracker.scala:616)
              at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1337)
              at org.apache.spark.MapOutputTracker$.serializeMapStatuses(MapOutputTracker.scala:619)
              at org.apache.spark.MapOutputTrackerMaster.getSerializedMapOutputStatuses(MapOutputTracker.scala:562)
              at org.apache.spark.MapOutputTrackerMaster$MessageLoop.run(MapOutputTracker.scala:351)
              at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
              at java.lang.Thread.run(Thread.java:745)
      Caused by: java.lang.NullPointerException
              at org.apache.spark.scheduler.HighlyCompressedMapStatus$$anonfun$writeExternal$2.apply$mcV$sp(MapStatus.scala:171)
              at org.apache.spark.scheduler.HighlyCompressedMapStatus$$anonfun$writeExternal$2.apply(MapStatus.scala:167)
              at org.apache.spark.scheduler.HighlyCompressedMapStatus$$anonfun$writeExternal$2.apply(MapStatus.scala:167)
              at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1303)
              ... 17 more
      17/06/18 15:00:27 INFO MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 0 to 10.17.47.20:50188
      17/06/18 15:00:27 ERROR Utils: Exception encountered
      java.lang.NullPointerException
              at org.apache.spark.scheduler.HighlyCompressedMapStatus$$anonfun$writeExternal$2.apply$mcV$sp(MapStatus.scala:171)
              at org.apache.spark.scheduler.HighlyCompressedMapStatus$$anonfun$writeExternal$2.apply(MapStatus.scala:167)
              at org.apache.spark.scheduler.HighlyCompressedMapStatus$$anonfun$writeExternal$2.apply(MapStatus.scala:167)
              at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1303)
              at org.apache.spark.scheduler.HighlyCompressedMapStatus.writeExternal(MapStatus.scala:167)
              at java.io.ObjectOutputStream.writeExternalData(ObjectOutputStream.java:1459)
              at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1430)
              at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
              at java.io.ObjectOutputStream.writeArray(ObjectOutputStream.java:1378)
              at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1174)
              at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
              at org.apache.spark.MapOutputTracker$$anonfun$serializeMapStatuses$1.apply$mcV$sp(MapOutputTracker.scala:617)
              at org.apache.spark.MapOutputTracker$$anonfun$serializeMapStatuses$1.apply(MapOutputTracker.scala:616)
              at org.apache.spark.MapOutputTracker$$anonfun$serializeMapStatuses$1.apply(MapOutputTracker.scala:616)
              at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1337)
              at org.apache.spark.MapOutputTracker$.serializeMapStatuses(MapOutputTracker.scala:619)
              at org.apache.spark.MapOutputTrackerMaster.getSerializedMapOutputStatuses(MapOutputTracker.scala:562)
              at org.apache.spark.MapOutputTrackerMaster$MessageLoop.run(MapOutputTracker.scala:351)
              at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
              at java.lang.Thread.run(Thread.java:745)
      ```
      
      ## How was this patch tested?
      
      manual tests
      
      Author: Yuming Wang <wgyumg@gmail.com>
      
      Closes #18343 from wangyum/SPARK-21133.
      9b57cd8d
    • Marcelo Vanzin's avatar
      [INFRA] Close stale PRs. · 9eacc5e4
      Marcelo Vanzin authored
      Closes #18311
      Closes #18278
      9eacc5e4
    • sharkdtu's avatar
      [SPARK-21138][YARN] Cannot delete staging dir when the clusters of... · 3d4d11a8
      sharkdtu authored
      [SPARK-21138][YARN] Cannot delete staging dir when the clusters of "spark.yarn.stagingDir" and "spark.hadoop.fs.defaultFS" are different
      
      ## What changes were proposed in this pull request?
      
      When I set different clusters for "spark.hadoop.fs.defaultFS" and "spark.yarn.stagingDir" as follows:
      ```
      spark.hadoop.fs.defaultFS  hdfs://tl-nn-tdw.tencent-distribute.com:54310
      spark.yarn.stagingDir hdfs://ss-teg-2-v2/tmp/spark
      ```
      The staging dir cannot be deleted; it will prompt the following message:
      ```
      java.lang.IllegalArgumentException: Wrong FS: hdfs://ss-teg-2-v2/tmp/spark/.sparkStaging/application_1496819138021_77618, expected: hdfs://tl-nn-tdw.tencent-distribute.com:54310
      ```
      
      ## How was this patch tested?
      
      Existing tests
      
      Author: sharkdtu <sharkdtu@tencent.com>
      
      Closes #18352 from sharkdtu/master.
      3d4d11a8
    • Marcelo Vanzin's avatar
      [SPARK-21124][UI] Show correct application user in UI. · 581565dd
      Marcelo Vanzin authored
      The jobs page currently shows the application user, but it assumes
      the OS user is the same as the user running the application, which
      may not be true in all scenarios (e.g., kerberos). While it might be
      useful to show both in the UI, this change just chooses the application
      user over the OS user, since the latter can be found in the environment
      page if needed.
      
      Tested in live application and in history server.
      
      Author: Marcelo Vanzin <vanzin@cloudera.com>
      
      Closes #18331 from vanzin/SPARK-21124.
      581565dd
    • Xianyang Liu's avatar
      [MINOR] Fix some typo of the document · 0a4b7e4f
      Xianyang Liu authored
      ## What changes were proposed in this pull request?
      
      Fix some typo of the document.
      
      ## How was this patch tested?
      
      Existing tests.
      
      Please review http://spark.apache.org/contributing.html before opening a pull request.
      
      Author: Xianyang Liu <xianyang.liu@intel.com>
      
      Closes #18350 from ConeyLiu/fixtypo.
      0a4b7e4f
    • Dongjoon Hyun's avatar
      [MINOR][BUILD] Fix Java linter errors · ecc56313
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      This PR cleans up a few Java linter errors for Apache Spark 2.2 release.
      
      ## How was this patch tested?
      
      ```bash
      $ dev/lint-java
      Using `mvn` from path: /usr/local/bin/mvn
      Checkstyle checks passed.
      ```
      
      We can check the result at Travis CI, [here](https://travis-ci.org/dongjoon-hyun/spark/builds/244297894).
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #18345 from dongjoon-hyun/fix_lint_java_2.
      ecc56313
    • Yong Tang's avatar
      [SPARK-19975][PYTHON][SQL] Add map_keys and map_values functions to Python · e5387018
      Yong Tang authored
      ## What changes were proposed in this pull request?
      
      This fix addresses the issue in SPARK-19975, where we have `map_keys` and `map_values` functions in SQL yet there are no equivalent Python functions.
      
      This fix adds `map_keys` and `map_values` functions to Python.
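
      A brief usage sketch of the new Python functions, assuming a running Spark session; the data is illustrative:

      ```python
      from pyspark.sql import SparkSession
      from pyspark.sql.functions import map_keys, map_values

      spark = SparkSession.builder.getOrCreate()
      df = spark.createDataFrame([({"a": 1, "b": 2},)], ["data"])
      df.select(map_keys("data").alias("keys"), map_values("data").alias("values")).show()
      ```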
      
      ## How was this patch tested?
      
      This fix is tested manually (See Python docs for examples).
      
      Author: Yong Tang <yong.tang.github@outlook.com>
      
      Closes #17328 from yongtang/SPARK-19975.
      e5387018
    • assafmendelson's avatar
      [SPARK-21123][DOCS][STRUCTURED STREAMING] Options for file stream source are in a wrong table · 66a792cd
      assafmendelson authored
      ## What changes were proposed in this pull request?
      
      The description for several options of the File Source for Structured Streaming appeared under the File Sink description instead.
      
      This pull request has two commits: the first includes changes to the version as it appeared in Spark 2.1, and the second handles an additional option added for Spark 2.2.
      
      ## How was this patch tested?
      
      Built the documentation with `SKIP_API=1 jekyll build` and visually inspected the Structured Streaming programming guide.
      
      The original documentation was written by tdas and lw-lin
      
      Author: assafmendelson <assaf.mendelson@gmail.com>
      
      Closes #18342 from assafmendelson/spark-21123.
      66a792cd
    • saturday_s's avatar
      [SPARK-19688][STREAMING] Not to read `spark.yarn.credentials.file` from checkpoint. · e92ffe6f
      saturday_s authored
      ## What changes were proposed in this pull request?
      
      Reload the `spark.yarn.credentials.file` property when restarting a streaming application from checkpoint.
      
      ## How was this patch tested?
      
      Manual tested with 1.6.3 and 2.1.1.
      I didn't test this with master because of some compile problems, but I think the result will be the same.
      
      ## Notice
      
      This should be merged into maintenance branches too.
      
      jira: [SPARK-21008](https://issues.apache.org/jira/browse/SPARK-21008)
      
      Author: saturday_s <shi.indetail@gmail.com>
      
      Closes #18230 from saturday-shi/SPARK-21008.
      e92ffe6f
    • hyukjinkwon's avatar
      [MINOR] Bump SparkR and PySpark version to 2.3.0. · 9a145fd7
      hyukjinkwon authored
      ## What changes were proposed in this pull request?
      
      #17753 bumps the master branch version to 2.3.0-SNAPSHOT, but it seems the SparkR and PySpark versions were omitted.
      
      ditto of https://github.com/apache/spark/pull/16488 / https://github.com/apache/spark/pull/17523
      
      ## How was this patch tested?
      
      N/A
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #18341 from HyukjinKwon/r-version.
      9a145fd7