  1. May 23, 2017
    • [SPARK-20727] Skip tests that use Hadoop utils on CRAN Windows · d06610f9
      Shivaram Venkataraman authored
      ## What changes were proposed in this pull request?
      
      This change skips tests that use the Hadoop libraries while running
      on CRAN check with Windows as the operating system. This is to handle
      cases where the Hadoop winutils binaries are missing on the target
      system. The skipped tests consist of
      1. Tests that save, load a model in MLlib
      2. Tests that save, load CSV, JSON and Parquet files in SQL
      3. Hive tests
      
      ## How was this patch tested?
      
      Tested by running on a local Windows VM with HADOOP_HOME unset. Also tested with https://win-builder.r-project.org
      
      Author: Shivaram Venkataraman <shivaram@cs.berkeley.edu>
      
      Closes #17966 from shivaram/sparkr-windows-cran.
  2. May 22, 2017
    • [SPARK-20815][SPARKR] NullPointerException in RPackageUtils#checkManifestForR · 4dbb63f0
      James Shuster authored
      ## What changes were proposed in this pull request?
      
      - Add a null check to RPackageUtils#checkManifestForR so that jars w/o manifests don't NPE.
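
      A minimal sketch of the kind of guard involved (hedged: `JarFile.getManifest` returns null for jars built without a `META-INF/MANIFEST.MF`; the attribute name here is illustrative, not necessarily the exact patch):

      ```scala
      import java.util.jar.JarFile

      // Sketch only: jars built without a manifest make getManifest return
      // null, which would otherwise NPE when reading attributes below.
      def checkManifestForR(jar: JarFile): Boolean = {
        val manifest = jar.getManifest
        manifest != null &&
          "true" == manifest.getMainAttributes.getValue("Spark-HasRPackage")
      }
      ```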
      
      ## How was this patch tested?
      
      - Unit tests and manual tests.
      
      Author: James Shuster <jshuster@palantir.com>
      
      Closes #18040 from jrshust/feature/r-package-utils.
    • [SPARK-17410][SPARK-17284] Move Hive-generated Stats Info to HiveClientImpl · a2460be9
      Xiao Li authored
      ### What changes were proposed in this pull request?
      
      After adding the new field `stats` to `CatalogTable`, we should not expose Hive-specific stats metadata to `MetastoreRelation`. It complicates all the related code. It also introduces a bug in `SHOW CREATE TABLE`: the statistics-related table properties should be skipped by `SHOW CREATE TABLE`, since they could be incorrect in the newly created table. See the Hive JIRA: https://issues.apache.org/jira/browse/HIVE-13792
      
      Also fixes the issue of filling Hive-generated row counts into our stats.
      
      This PR handles Hive-specific stats metadata in `HiveClientImpl`.
      
      ### How was this patch tested?
      
      Added a few test cases.
      
      Author: Xiao Li <gatorsmile@gmail.com>
      
      Closes #14971 from gatorsmile/showCreateTableNew.
    • [SPARK-20751][SQL][FOLLOWUP] Add cot test in MathExpressionsSuite · 9b091019
      Yuming Wang authored
      ## What changes were proposed in this pull request?
      
      Add a cot test in MathExpressionsSuite, as suggested in https://github.com/apache/spark/pull/17999#issuecomment-302832794.
      
      ## How was this patch tested?
      
      unit tests
      
      Author: Yuming Wang <wgyumg@gmail.com>
      
      Closes #18039 from wangyum/SPARK-20751-test.
    • [SPARK-20814][MESOS] Restore support for spark.executor.extraClassPath. · df64fa79
      Marcelo Vanzin authored
      Restore code that was removed as part of SPARK-17979, but instead of
      using the deprecated env variable name to propagate the class path, use
      a new one.
      
      Verified by running "./bin/spark-class o.a.s.executor.CoarseGrainedExecutorBackend"
      manually.
      
      Author: Marcelo Vanzin <vanzin@cloudera.com>
      
      Closes #18037 from vanzin/SPARK-20814.
    • [SPARK-15767][ML][SPARKR] Decision Tree wrapper in SparkR · 4be33758
      Zheng RuiFeng authored
      ## What changes were proposed in this pull request?
      Support decision trees in R.
      
      ## How was this patch tested?
      added tests
      
      Author: Zheng RuiFeng <ruifengz@foxmail.com>
      
      Closes #17981 from zhengruifeng/dt_r.
    • [SPARK-20756][YARN] yarn-shuffle jar references unshaded guava · 36309110
      Mark Grover authored
      and contains scala classes
      
      ## What changes were proposed in this pull request?
      This change ensures that all references to guava from within the yarn shuffle jar point to the shaded guava classes already provided in the jar.
      
      Also, it explicitly excludes scala classes from being added to the jar.
      
      ## How was this patch tested?
      Ran unit tests on the module and they passed.
      javap now returns the expected result: a reference to the shaded guava under `org/spark_project` (previously this referred to `com.google...`).
      ```
      javap -cp common/network-yarn/target/scala-2.11/spark-2.3.0-SNAPSHOT-yarn-shuffle.jar -c org/apache/spark/network/yarn/YarnShuffleService | grep Lists
            57: invokestatic  #138                // Method org/spark_project/guava/collect/Lists.newArrayList:()Ljava/util/ArrayList;
      ```
      
      Guava is still shaded in the jar:
      ```
      jar -tf common/network-yarn/target/scala-2.11/spark-2.3.0-SNAPSHOT-yarn-shuffle.jar | grep guava | head
      META-INF/maven/com.google.guava/
      META-INF/maven/com.google.guava/guava/
      META-INF/maven/com.google.guava/guava/pom.properties
      META-INF/maven/com.google.guava/guava/pom.xml
      org/spark_project/guava/
      org/spark_project/guava/annotations/
      org/spark_project/guava/annotations/Beta.class
      org/spark_project/guava/annotations/GwtCompatible.class
      org/spark_project/guava/annotations/GwtIncompatible.class
      org/spark_project/guava/annotations/VisibleForTesting.class
      ```
      (not sure if the above META-INF/* is a problem or not)
      
      I took this jar, deployed it on a yarn cluster with shuffle service enabled, and made sure the YARN node managers came up. An application with a shuffle was run and it succeeded.
      
      Author: Mark Grover <mark@apache.org>
      
      Closes #17990 from markgrover/spark-20756.
    • [SPARK-20764][ML][PYSPARK] Fix visibility discrepancy with numInstances and... · cfca0113
      Peng authored
      [SPARK-20764][ML][PYSPARK] Fix visibility discrepancy with numInstances and degreesOfFreedom in LR and GLR - Python version
      
      ## What changes were proposed in this pull request?
      
      SPARK-20097 exposed degreesOfFreedom in LinearRegressionSummary and numInstances in GeneralizedLinearRegressionSummary. Python API should be updated to reflect these changes.
      
      ## How was this patch tested?
      The existing UT
      
      Author: Peng <peng.meng@intel.com>
      
      Closes #18062 from mpjlu/spark-20764.
    • [SPARK-20831][SQL] Fix INSERT OVERWRITE data source tables with IF NOT EXISTS · f3ed62a3
      gatorsmile authored
      ### What changes were proposed in this pull request?
      Currently, we have a bug when we specify `IF NOT EXISTS` in `INSERT OVERWRITE` data source tables. For example, given a query:
      ```SQL
      INSERT OVERWRITE TABLE $tableName partition (b=2, c=3) IF NOT EXISTS SELECT 9, 10
      ```
      we will get the following error:
      ```
      unresolved operator 'InsertIntoTable Relation[a#425,d#426,b#427,c#428] parquet, Map(b -> Some(2), c -> Some(3)), true, true;;
      'InsertIntoTable Relation[a#425,d#426,b#427,c#428] parquet, Map(b -> Some(2), c -> Some(3)), true, true
      +- Project [cast(9#423 as int) AS a#429, cast(10#424 as int) AS d#430]
         +- Project [9 AS 9#423, 10 AS 10#424]
            +- OneRowRelation$
      ```
      
      This PR fixes the issue so that the behavior follows that of Hive serde tables:
      > INSERT OVERWRITE will overwrite any existing data in the table or partition unless IF NOT EXISTS is provided for a partition
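
      A hedged usage sketch of the intended semantics (the table, columns, and values are illustrative; assumes a SparkSession named `spark`):

      ```scala
      // Partition columns must be declared in the schema for data source tables.
      spark.sql("CREATE TABLE t (a INT, d INT, b INT, c INT) USING parquet PARTITIONED BY (b, c)")
      spark.sql("INSERT OVERWRITE TABLE t PARTITION (b = 2, c = 3) SELECT 1, 2")
      // Partition (b=2, c=3) now exists, so with IF NOT EXISTS this insert is
      // skipped, instead of failing analysis as it did before the fix.
      spark.sql("INSERT OVERWRITE TABLE t PARTITION (b = 2, c = 3) IF NOT EXISTS SELECT 9, 10")
      ```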
      
      ### How was this patch tested?
      Modified an existing test case
      
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #18050 from gatorsmile/insertPartitionIfNotExists.
    • [SPARK-20801] Record accurate size of blocks in MapStatus when it's above threshold. · 2597674b
      jinxing authored
      ## What changes were proposed in this pull request?
      
      Currently, when the number of reducers is above 2000, HighlyCompressedMapStatus is used to store the sizes of blocks. In HighlyCompressedMapStatus, only the average size is stored for non-empty blocks, which is not good for memory control when we shuffle blocks. It makes sense to store the accurate size of a block when it is above a threshold.
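
      A minimal sketch of the idea (names are hypothetical, not Spark's actual MapStatus code):

      ```scala
      // Keep exact sizes only for blocks above the threshold; summarize the
      // rest by their average, trading a little accuracy for less memory.
      def summarizeSizes(sizes: Array[Long], threshold: Long): (Long, Map[Int, Long]) = {
        val huge = sizes.zipWithIndex.collect { case (s, i) if s > threshold => i -> s }.toMap
        val rest = sizes.filter(s => s > 0 && s <= threshold)
        val avgSize = if (rest.nonEmpty) rest.sum / rest.length else 0L
        (avgSize, huge)  // estimated size of block i: huge.getOrElse(i, avgSize)
      }
      ```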
      
      ## How was this patch tested?
      
      Added test in MapStatusSuite.
      
      Author: jinxing <jinxing6042@126.com>
      
      Closes #18031 from jinxing64/SPARK-20801.
    • [SPARK-20813][WEB UI] Fixed Web UI executor page tab search by status not working · aea73be1
      John Lee authored
      ## What changes were proposed in this pull request?
      On the status column of the table, I removed the condition that forced only the display value to take on the values Active, Blacklisted, and Dead.
      
      Before the removal, the values used for sorting and filtering that column were True and False.
      
      ## How was this patch tested?
      
      Tested with Active, Blacklisted and Dead present as current status.
      
      Author: John Lee <jlee2@yahoo-inc.com>
      
      Closes #18036 from yoonlee95/SPARK-20813.
    • [SPARK-20609][CORE] Run the SortShuffleSuite unit tests have residual spark_* system directory · f1ffc6e7
      caoxuewen authored
      ## What changes were proposed in this pull request?
      This PR fixes the problem that running the SortShuffleSuite unit tests leaves residual spark_* directories on the system.
      For example, on Windows 7, after running the SortShuffleSuite unit tests,
      the system TMP directory still contains '..\AppData\Local\Temp\spark-f64121f9-11b4-4ffd-a4f0-cfca66643503', which is never deleted.
      
      ## How was this patch tested?
      Run SortShuffleSuite unit test.
      
      Author: caoxuewen <cao.xuewen@zte.com.cn>
      
      Closes #17869 from heary-cao/SortShuffleSuite.
    • [SPARK-20591][WEB UI] Succeeded tasks num not equal in all jobs page and job... · 190d8b0b
      fjh100456 authored
      [SPARK-20591][WEB UI] Succeeded tasks num not equal in all jobs page and job detail page on spark web ui when speculative task(s) exist.
      
      ## What changes were proposed in this pull request?
      
      Modified the succeeded-task count in the job detail page from "completed = stageData.completedIndices.size" to "completed = stageData.numCompleteTasks", which makes the succeeded-task counts on the all-jobs page and the job detail page consistent, and makes it easier to find which stages the speculative task(s) were in.
      
      ## How was this patch tested?
      
      manual tests
      
      Author: fjh100456 <fu.jinhua6@zte.com.cn>
      
      Closes #17923 from fjh100456/master.
    • [SPARK-20506][DOCS] Add HTML links to highlight list in MLlib guide for 2.2 · be846db4
      Nick Pentreath authored
      Quick follow up to #17996 - forgot to add the HTML links to the relevant sections of the guide in the highlights list.
      
      ## How was this patch tested?
      
      Built docs locally and tested links.
      
      Author: Nick Pentreath <nickp@za.ibm.com>
      
      Closes #18043 from MLnick/SPARK-20506-2.2-migration-guide-2.
    • [SPARK-20687][MLLIB] mllib.Matrices.fromBreeze may crash when converting from Breeze sparse matrix · 06dda1d5
      Ignacio Bermudez authored
      ## What changes were proposed in this pull request?
      
      When two Breeze SparseMatrices are combined in an operation, the result matrix may contain extra provisional 0 values in its rowIndices and data arrays. This is inconsistent with the colPtrs data, but Breeze gets away with the inconsistency by keeping a counter of the valid data.
      
      In Spark, when these matrices are converted to SparseMatrices, Spark relies solely on rowIndices, data, and colPtrs, which might be incorrect because of Breeze's internal hacks. Therefore, we need to slice both rowIndices and data using the counter of active data.
      
      This method is called at least by BlockMatrix when performing distributed block operations, causing exceptions on valid operations.
      
      See http://stackoverflow.com/questions/33528555/error-thrown-when-using-blockmatrix-add
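
      A hedged sketch of the slicing fix, assuming Breeze's CSCMatrix fields (`data`, `rowIndices`, `colPtrs`, `activeSize`) and mllib's SparseMatrix constructor:

      ```scala
      import breeze.linalg.{CSCMatrix => BSM}
      import org.apache.spark.mllib.linalg.SparseMatrix

      // Keep only the first activeSize entries, dropping Breeze's provisional
      // zero entries so the arrays agree with colPtrs again.
      def fromBreezeSketch(sm: BSM[Double]): SparseMatrix = {
        val data = sm.data.slice(0, sm.activeSize)
        val rowIndices = sm.rowIndices.slice(0, sm.activeSize)
        new SparseMatrix(sm.rows, sm.cols, sm.colPtrs, rowIndices, data)
      }
      ```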
      
      ## How was this patch tested?
      
      Added a test to MatricesSuite that verifies that the conversions are valid and that code doesn't crash. Originally the same code would crash on Spark.
      
      Bugfix for https://issues.apache.org/jira/browse/SPARK-20687
      
      Author: Ignacio Bermudez <ignaciobermudez@gmail.com>
      Author: Ignacio Bermudez Corrales <icorrales@splunk.com>
      
      Closes #17940 from ghoto/bug-fix/SPARK-20687.
    • [SPARK-19089][SQL] Add support for nested sequences · a2b3b676
      Michal Senkyr authored
      ## What changes were proposed in this pull request?
      
      Replaced specific sequence encoders with generic sequence encoder to enable nesting of sequences.
      
      Does not add support for nested arrays as that cannot be solved in this way.
      
      ## How was this patch tested?
      
      ```bash
      build/mvn -DskipTests clean package && dev/run-tests
      ```
      
      Additionally in Spark shell:
      
      ```
      scala> Seq(Seq(Seq(1))).toDS.collect()
      res0: Array[Seq[Seq[Int]]] = Array(List(List(1)))
      ```
      
      Author: Michal Senkyr <mike.senkyr@gmail.com>
      
      Closes #18011 from michalsenkyr/dataset-seq-nested.
    • [SPARK-20770][SQL] Improve ColumnStats · 833c8d41
      Kazuaki Ishizaki authored
      ## What changes were proposed in this pull request?
      
      This PR improves the implementation of `ColumnStats` by using the following approaches.
      
      1. Declare subclasses of `ColumnStats` as `final`
      2. Remove unnecessary call of `row.isNullAt(ordinal)`
      3. Remove the dependency on `GenericInternalRow`
      
      For 1., this declaration encourages method inlining and other JIT compiler optimizations.
      For 2., in `gatherStats()`, the previous code in subclasses of `ColumnStats` always called `row.isNullAt()` twice; the PR calls `row.isNullAt()` only once.
      For 3., `collectedStatistics()` returns `Array[Any]` instead of `GenericInternalRow`. This removes the dependency on an unnecessary package and reduces the number of `GenericInternalRow` allocations.
      
      In addition to that, in the future, `gatherValueStats()`, which is specialized for each data type, can be effectively called from the generated code without using generic data structure `InternalRow`.
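
      A hedged sketch of the resulting pattern (a stand-in `RowLike` trait keeps it self-contained; all names here are illustrative, not Spark's actual classes):

      ```scala
      // Minimal stand-in for Catalyst's InternalRow, for illustration only.
      trait RowLike { def isNullAt(i: Int): Boolean; def getInt(i: Int): Int }

      // `final` encourages JIT inlining; isNullAt is called exactly once per
      // row; stats come back as Array[Any] rather than a GenericInternalRow.
      final class IntColumnStatsSketch {
        private var lower = Int.MaxValue
        private var upper = Int.MinValue
        private var nullCount = 0

        def gatherStats(row: RowLike, ordinal: Int): Unit = {
          if (row.isNullAt(ordinal)) {
            nullCount += 1
          } else {
            val v = row.getInt(ordinal)
            if (v < lower) lower = v
            if (v > upper) upper = v
          }
        }

        def collectedStatistics: Array[Any] = Array(lower, upper, nullCount)
      }
      ```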
      
      ## How was this patch tested?
      
      Tested by existing test suite
      
      Author: Kazuaki Ishizaki <ishizaki@jp.ibm.com>
      
      Closes #18002 from kiszk/SPARK-20770.
    • [SPARK-20786][SQL] Improve ceil and floor handle the value which is not expected · 3c9eef35
      caoxuewen authored
      ## What changes were proposed in this pull request?
      
      spark-sql>SELECT ceil(1234567890123456);
      1234567890123456
      
      spark-sql>SELECT ceil(12345678901234567);
      12345678901234568
      
      spark-sql>SELECT ceil(123456789012345678);
      123456789012345680
      
      When the length of the literal text (`getText`) is greater than 16 digits, converting long to double loses precision.
      
      But MySQL handles these values correctly:
      
      mysql> SELECT ceil(1234567890123456);
      +------------------------+
      | ceil(1234567890123456) |
      +------------------------+
      |       1234567890123456 |
      +------------------------+
      1 row in set (0.00 sec)
      
      mysql> SELECT ceil(12345678901234567);
      +-------------------------+
      | ceil(12345678901234567) |
      +-------------------------+
      |       12345678901234567 |
      +-------------------------+
      1 row in set (0.00 sec)
      
      mysql> SELECT ceil(123456789012345678);
      +--------------------------+
      | ceil(123456789012345678) |
      +--------------------------+
      |       123456789012345678 |
      +--------------------------+
      1 row in set (0.00 sec)
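
      The underlying precision loss is easy to reproduce on the JVM; a small illustration using the values above:

      ```scala
      // A Double has a 53-bit mantissa (~15-16 significant decimal digits),
      // so longs with 17+ digits cannot round-trip through Double.
      val viaDouble = 12345678901234567L.toDouble.toLong
      println(viaDouble)                               // 12345678901234568
      println(BigDecimal("12345678901234567").toLong)  // 12345678901234567 (exact)
      ```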
      
      ## How was this patch tested?
      
      Supplement the unit test.
      
      Author: caoxuewen <cao.xuewen@zte.com.cn>
      
      Closes #18016 from heary-cao/ceil_long.
  3. May 21, 2017
    • [SPARK-20736][PYTHON] PySpark StringIndexer supports StringOrderType · 0f2f56c3
      Wayne Zhang authored
      ## What changes were proposed in this pull request?
      PySpark StringIndexer supports StringOrderType added in #17879.
      
      Author: Wayne Zhang <actuaryzhang@uber.com>
      
      Closes #17978 from actuaryzhang/PythonStringIndexer.
    • [SPARK-20792][SS] Support same timeout operations in mapGroupsWithState... · 9d6661c8
      Tathagata Das authored
      [SPARK-20792][SS] Support same timeout operations in mapGroupsWithState function in batch queries as in streaming queries
      
      ## What changes were proposed in this pull request?
      
      Currently, in the batch queries, timeout is disabled (i.e. GroupStateTimeout.NoTimeout) which means any GroupState.setTimeout*** operation would throw UnsupportedOperationException. This makes it weird when converting a streaming query into a batch query by changing the input DF from streaming to a batch DF. If the timeout was enabled and used, then the batch query will start throwing UnsupportedOperationException.
      
      This PR creates the dummy state in batch queries with the provided timeoutConf so that it behaves in the same way. The code has been refactored to make it obvious when the state is being created for a batch query or a streaming query.
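
      A hedged usage sketch (the `Event` case class is hypothetical; assumes `events: Dataset[Event]` and `import spark.implicits._`):

      ```scala
      import org.apache.spark.sql.streaming.{GroupState, GroupStateTimeout}

      case class Event(key: String, value: Long)  // hypothetical input type

      // The same code now runs whether `events` is a streaming or a batch
      // Dataset; in batch, state starts empty and the timeout never fires.
      val totals = events
        .groupByKey(_.key)
        .mapGroupsWithState(GroupStateTimeout.ProcessingTimeTimeout) {
          (key: String, values: Iterator[Event], state: GroupState[Long]) =>
            val total = state.getOption.getOrElse(0L) + values.map(_.value).sum
            state.update(total)
            state.setTimeoutDuration("10 minutes")  // no longer throws in batch
            (key, total)
        }
      ```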
      
      ## How was this patch tested?
      Additional tests
      
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      
      Closes #18024 from tdas/SPARK-20792.
  4. May 20, 2017
  5. May 19, 2017
    • [SPARK-20781] the location of Dockerfile in docker.properties.template is wrong · 749418d2
      liuzhaokun authored
      [https://issues.apache.org/jira/browse/SPARK-20781](https://issues.apache.org/jira/browse/SPARK-20781)
      the location of Dockerfile in docker.properties.template should be "../external/docker/spark-mesos/Dockerfile"
      
      Author: liuzhaokun <liu.zhaokun@zte.com.cn>
      
      Closes #18013 from liu-zhaokun/dockerfile_location.
    • [SPARK-20506][DOCS] 2.2 migration guide · b5d8d9ba
      Nick Pentreath authored
      Update the ML guide for the `2.1` -> `2.2` migration, and update the previous version's migration guide section.
      
      ## How was this patch tested?
      
      Build doc locally.
      
      Author: Nick Pentreath <nickp@za.ibm.com>
      
      Closes #17996 from MLnick/SPARK-20506-2.2-migration-guide.
    • [SPARKR] Fix bad examples in DataFrame methods and style issues · 7f203a24
      Wayne Zhang authored
      ## What changes were proposed in this pull request?
      Some examples in the DataFrame methods are syntactically wrong, even though they are pseudocode. Fix these and some style issues.
      
      Author: Wayne Zhang <actuaryzhang@uber.com>
      
      Closes #18003 from actuaryzhang/sparkRDoc3.
    • [SPARKR][DOCS][MINOR] Use consistent names in rollup and cube examples · 2d90c04f
      zero323 authored
      ## What changes were proposed in this pull request?
      
      Rename `carsDF` to `df` in SparkR `rollup` and `cube` examples.
      
      ## How was this patch tested?
      
      Manual tests.
      
      Author: zero323 <zero323@users.noreply.github.com>
      
      Closes #17988 from zero323/cube-docs.
    • [SPARK-20763][SQL] The function of `month` and `day` return the value which is not we expected. · ea3b1e35
      liuxian authored
      ## What changes were proposed in this pull request?
      spark-sql> select month("1582-09-28");
      10
      
      For this case, the expected result is 9, but it is 10.
      
      spark-sql> select day("1582-04-18");
      28
      
      For this case, the expected result is 18, but it is 28.
      
      For dates before "1582-10-04", the `month` and `day` functions return values that are not what we expect.
      
      ## How was this patch tested?
      unit tests
      
      Author: liuxian <liu.xian3@zte.com.cn>
      
      Closes #17997 from 10110346/wip_lx_0516.
    • [SPARK-20751][SQL] Add built-in SQL Function - COT · bff021df
      Yuming Wang authored
      ## What changes were proposed in this pull request?
      
      Add built-in SQL Function - COT.
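
      A hedged usage sketch (cot x = cos x / sin x; assumes a SparkSession named `spark`):

      ```scala
      val expected = math.cos(1.0) / math.sin(1.0)  // ≈ 0.6420926159343306
      spark.sql("SELECT COT(1)").show()             // should print the same value
      ```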
      
      ## How was this patch tested?
      
      unit tests
      
      Author: Yuming Wang <wgyumg@gmail.com>
      
      Closes #17999 from wangyum/SPARK-20751.
    • [SPARK-20759] SCALA_VERSION in _config.yml should be consistent with pom.xml · dba2ca2c
      liuzhaokun authored
      [https://issues.apache.org/jira/browse/SPARK-20759](https://issues.apache.org/jira/browse/SPARK-20759)
      SCALA_VERSION in _config.yml is 2.11.7, but 2.11.8 in pom.xml. So I think SCALA_VERSION in _config.yml should be consistent with pom.xml.
      
      Author: liuzhaokun <liu.zhaokun@zte.com.cn>
      
      Closes #17992 from liu-zhaokun/new.
    • [SPARK-20607][CORE] Add new unit tests to ShuffleSuite · f398640d
      caoxuewen authored
      ## What changes were proposed in this pull request?
      
      This PR makes two updates:
      1. Adds new unit tests verifying that when there is no shuffle stage,
         the shuffle does not generate the data file and the index files.
      2. Modifies the '[SPARK-4085] rerun map stage if reduce stage cannot find its local shuffle file' unit test:
         uses a parallelism of 1 instead of 2, and checks and deletes the index file.
      
      ## How was this patch tested?
      The new unit test.
      
      Author: caoxuewen <cao.xuewen@zte.com.cn>
      
      Closes #17868 from heary-cao/ShuffleSuite.
    • [SPARK-20773][SQL] ParquetWriteSupport.writeFields is quadratic in number of fields · 3f2cd51e
      tpoterba authored
      Fix quadratic List indexing in ParquetWriteSupport.
      
      I noticed this function while profiling some code today. It showed up as a significant factor in a table with twenty columns; with hundreds of columns, it could dominate any other function call.
      
      ## What changes were proposed in this pull request?
      
      The writeFields method iterates from 0 until number of fields, indexing into rootFieldWriters for each element. rootFieldWriters is a List, so indexing is a linear operation. The complexity of the writeFields method is thus quadratic in the number of fields.
      
      Solution: explicitly convert rootFieldWriters to Array (implicitly converted to WrappedArray) for constant-time indexing.
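
      A schematic illustration of the complexity difference (hypothetical writer type, not the actual ParquetWriteSupport code):

      ```scala
      // writers(i) on a List walks i nodes, so this loop is O(n^2) overall.
      def writeFieldsQuadratic(writers: List[() => Unit]): Unit = {
        val n = writers.length
        var i = 0
        while (i < n) { writers(i)(); i += 1 }
      }

      // Converting once to an Array makes each lookup O(1): O(n) overall.
      def writeFieldsLinear(writers: List[() => Unit]): Unit = {
        val arr = writers.toArray
        var i = 0
        while (i < arr.length) { arr(i)(); i += 1 }
      }
      ```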
      
      ## How was this patch tested?
      
      This is a one-line change for performance reasons.
      
      Author: tpoterba <tpoterba@broadinstitute.org>
      Author: Tim Poterba <tpoterba@gmail.com>
      
      Closes #18005 from tpoterba/tpoterba-patch-1.
    • [SPARK-20798] GenerateUnsafeProjection should check if a value is null before calling the getter · ce8edb8b
      Ala Luszczak authored
      ## What changes were proposed in this pull request?
      
      GenerateUnsafeProjection.writeStructToBuffer() did not honor the assumption that the caller must make sure that a value is not null before using the getter. This could lead to various errors. This change fixes that behavior.
      
      Example of code generated before:
      ```scala
      /* 059 */         final UTF8String fieldName = value.getUTF8String(0);
      /* 060 */         if (value.isNullAt(0)) {
      /* 061 */           rowWriter1.setNullAt(0);
      /* 062 */         } else {
      /* 063 */           rowWriter1.write(0, fieldName);
      /* 064 */         }
      ```
      
      Example of code generated now:
      ```scala
      /* 060 */         boolean isNull1 = value.isNullAt(0);
      /* 061 */         UTF8String value1 = isNull1 ? null : value.getUTF8String(0);
      /* 062 */         if (isNull1) {
      /* 063 */           rowWriter1.setNullAt(0);
      /* 064 */         } else {
      /* 065 */           rowWriter1.write(0, value1);
      /* 066 */         }
      ```
      
      ## How was this patch tested?
      
      Adds GenerateUnsafeProjectionSuite.
      
      Author: Ala Luszczak <ala@databricks.com>
      
      Closes #18030 from ala/fix-generate-unsafe-projection.
  6. May 18, 2017
    • [DSTREAM][DOC] Add documentation for kinesis retry configurations · 92580bd0
      Yash Sharma authored
      ## What changes were proposed in this pull request?
      
      The changes were merged as part of https://github.com/apache/spark/pull/17467, but the documentation was missed somewhere in the review iterations. This adds the documentation where it belongs.
      
      ## How was this patch tested?
      Docs. Not tested.
      
      cc budde, brkyvz
      
      Author: Yash Sharma <ysharma@atlassian.com>
      
      Closes #18028 from yssharma/ysharma/kinesis_retry_docs.
    • [SPARK-20364][SQL] Disable Parquet predicate pushdown for fields having dots in the names · 8fb3d5c6
      hyukjinkwon authored
      ## What changes were proposed in this pull request?
      
      This is an alternative workaround that simply avoids predicate pushdown for columns that have dots in their names. It is a different approach from https://github.com/apache/spark/pull/17680.
      
      The downside of this PR is that it literally does not push down filters on columns having dots in Parquet files at all (neither at the record level nor at the row-group level), whereas the downside of the approach in that PR is that it uses Parquet's API not properly but in a hacky way to support this case.
      
      I assume we prefer the safe way here, using the Parquet API properly, but this does close that PR, as here we are basically just avoiding the problem.
      
      This looks like a simple workaround, and it is probably fine given that the problem is arguably a rather corner case (although it might end up reading whole row groups under the hood; neither option looks ideal).
      
      Currently, if there are dots in the column name, predicate pushdown seems to fail in Parquet.
      
      **With dots**
      
      ```scala
      val path = "/tmp/abcde"
      Seq(Some(1), None).toDF("col.dots").write.parquet(path)
      spark.read.parquet(path).where("`col.dots` IS NOT NULL").show()
      ```
      
      ```
      +--------+
      |col.dots|
      +--------+
      +--------+
      ```
      
      **Without dots**
      
      ```scala
      val path = "/tmp/abcde"
      Seq(Some(1), None).toDF("coldots").write.parquet(path)
      spark.read.parquet(path).where("`coldots` IS NOT NULL").show()
      ```
      
      ```
      +-------+
      |coldots|
      +-------+
      |      1|
      +-------+
      ```
      
      **After**
      
      ```scala
      val path = "/tmp/abcde"
      Seq(Some(1), None).toDF("col.dots").write.parquet(path)
      spark.read.parquet(path).where("`col.dots` IS NOT NULL").show()
      ```
      
      ```
      +--------+
      |col.dots|
      +--------+
      |       1|
      +--------+
      ```
      
      ## How was this patch tested?
      
      Unit tests added in `ParquetFilterSuite`.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #18000 from HyukjinKwon/SPARK-20364-workaround.
    • [SPARK-20796] the location of start-master.sh in spark-standalone.md is wrong · 99452df4
      liuzhaokun authored
      [https://issues.apache.org/jira/browse/SPARK-20796](https://issues.apache.org/jira/browse/SPARK-20796)
      the location of start-master.sh in spark-standalone.md should be "sbin/start-master.sh" rather than "bin/start-master.sh".
      
      Author: liuzhaokun <liu.zhaokun@zte.com.cn>
      
      Closes #18027 from liu-zhaokun/sbin.
    • [SPARK-20779][EXAMPLES] The ASF header placed in an incorrect location in some files. · 4779b86b
      zuotingbing authored
      ## What changes were proposed in this pull request?
      
      The ASF license header is not at the top of some files, and it would be best to update these files so the header placement is consistent with other files.
      
      ## How was this patch tested?
      
      manual tests
      
      Author: zuotingbing <zuo.tingbing9@zte.com.cn>
      
      Closes #18012 from zuotingbing/spark-license.
    • [INFRA] Close stale PRs · 5d2750aa
      hyukjinkwon authored
      ## What changes were proposed in this pull request?
      
      This PR proposes to close PRs ...
      
        - inactive to the review comments more than a month
        - WIP and inactive more than a month
        - with Jenkins build failure but inactive more than a month
        - suggested to be closed and no comment against that
        - obviously looking inappropriate (e.g., Branch 0.5)
      
      To make sure, I left a comment for each PR about a week ago and I could not have a response back from the author in these PRs below:
      
      Closes #11129
      Closes #12085
      Closes #12162
      Closes #12419
      Closes #12420
      Closes #12491
      Closes #13762
      Closes #13837
      Closes #13851
      Closes #13881
      Closes #13891
      Closes #13959
      Closes #14091
      Closes #14481
      Closes #14547
      Closes #14557
      Closes #14686
      Closes #15594
      Closes #15652
      Closes #15850
      Closes #15914
      Closes #15918
      Closes #16285
      Closes #16389
      Closes #16652
      Closes #16743
      Closes #16893
      Closes #16975
      Closes #17001
      Closes #17088
      Closes #17119
      Closes #17272
      Closes #17971
      
      Added:
      Closes #17778
      Closes #17303
      Closes #17872
      
      ## How was this patch tested?
      
      N/A
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #18017 from HyukjinKwon/close-inactive-prs.
    • [SPARK-20700][SQL] InferFiltersFromConstraints stackoverflows for query (v2) · b7aac15d
      Xingbo Jiang authored
      ## What changes were proposed in this pull request?
      
      In the previous approach we used `aliasMap` to link an `Attribute` to an expression of, potentially, the form `f(a, b)`, but we only searched `expressions` and `children.expressions` for this, which is not enough when an `Alias` may lie deep in the logical plan. In that case, we can't generate the valid equivalent constraint classes, and thus we fail to prevent the recursive deductions.
      
      We fix this problem by collecting all `Alias`es from the logical plan.
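
      A hedged sketch of the collection step using Catalyst's TreeNode API (simplified relative to the actual patch):

      ```scala
      import org.apache.spark.sql.catalyst.expressions.{Alias, Attribute, Expression}
      import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan

      // Walk every node of the plan (not just the root) and gather all the
      // aliases, so `a -> f(x, y)` is found even when the Alias lies deep.
      def collectAliases(plan: LogicalPlan): Map[Attribute, Expression] =
        plan.flatMap(_.expressions.flatMap(_.collect {
          case a: Alias => a.toAttribute -> a.child
        })).toMap
      ```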
      
      ## How was this patch tested?
      
      No additional test case is added, but one existing test case is modified to cover this situation.
      
      Author: Xingbo Jiang <xingbo.jiang@databricks.com>
      
      Closes #18020 from jiangxb1987/inferConstrants.
  7. May 17, 2017