  1. Dec 12, 2016
    • Marcelo Vanzin's avatar
      [SPARK-18773][CORE] Make commons-crypto config translation consistent. · bc59951b
      Marcelo Vanzin authored
      This change moves the logic that translates Spark configuration to
      commons-crypto configuration to the network-common module. It also
      extends TransportConf and ConfigProvider to provide the necessary
      interfaces for the translation to work.
      
      As part of the change, I removed SystemPropertyConfigProvider, which
      was mostly used as an "empty config" in unit tests, and adjusted the
      very few tests that required a specific config.
      
      I also changed the config keys for AES encryption to live under the
      "spark.network." namespace, which is more correct than their previous
      names under "spark.authenticate.".
      
      Tested via existing unit test.
      
      Author: Marcelo Vanzin <vanzin@cloudera.com>
      
      Closes #16200 from vanzin/SPARK-18773.
      bc59951b
    • Felix Cheung's avatar
      [SPARK-18810][SPARKR] SparkR install.spark does not work for RCs, snapshots · 8a51cfdc
      Felix Cheung authored
      ## What changes were proposed in this pull request?
      
      Support overriding the download url (including the version directory) via an environment variable, `SPARKR_RELEASE_DOWNLOAD_URL`
      
      ## How was this patch tested?
      
      unit test, manually testing
      - snapshot build url
        - download when spark jar not cached
        - when spark jar is cached
      - RC build url
        - download when spark jar not cached
        - when spark jar is cached
      - multiple cached spark versions
      - starting with sparkR shell
      
      To use this,
      ```
      SPARKR_RELEASE_DOWNLOAD_URL=http://this_is_the_url_to_spark_release_tgz R
      ```
      then in R,
      ```
      library(SparkR) # or specify lib.loc
      sparkR.session()
      ```
      
      Author: Felix Cheung <felixcheung_m@hotmail.com>
      
      Closes #16248 from felixcheung/rinstallurl.
      8a51cfdc
    • Yuming Wang's avatar
      [SPARK-18681][SQL] Fix filtering to be compatible with partition keys of type int · 90abfd15
      Yuming Wang authored
      ## What changes were proposed in this pull request?
      
      Cloudera puts `/var/run/cloudera-scm-agent/process/15000-hive-HIVEMETASTORE/hive-site.xml` as the configuration file for the Hive Metastore Server, where `hive.metastore.try.direct.sql=false`. But Spark isn't reading this configuration file and gets the default value `hive.metastore.try.direct.sql=true`. As mallman said, we should use the `getMetaConf` method to obtain the original configuration from the Hive Metastore Server. I have tested this method a few times and the return value is always consistent with the Hive Metastore Server.
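
      A minimal sketch of the idea using a plain Hive metastore client (an illustration only; the actual Spark code path goes through its Hive client shims):

      ```scala
      // Ask the metastore server for its effective setting instead of trusting
      // whatever hive-site.xml happens to be visible on the Spark side.
      import org.apache.hadoop.hive.conf.HiveConf
      import org.apache.hadoop.hive.metastore.HiveMetaStoreClient

      val client = new HiveMetaStoreClient(new HiveConf())
      // Returns the value configured on the Hive Metastore Server, e.g. "false".
      val directSql = client.getMetaConf("hive.metastore.try.direct.sql")
      client.close()
      ```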
      
      ## How was this patch tested?
      
      The existing tests.
      
      Author: Yuming Wang <wgyumg@gmail.com>
      
      Closes #16122 from wangyum/SPARK-18681.
      90abfd15
    • Marcelo Vanzin's avatar
      [SPARK-18752][HIVE] "isSrcLocal" value should be set from user query. · 476b34c2
      Marcelo Vanzin authored
      The value of the "isSrcLocal" parameter passed to Hive's loadTable and
      loadPartition methods needs to be set according to the user query (e.g.
      "LOAD DATA LOCAL"), and not the current code that tries to guess what
      it should be.
      
      For existing versions of Hive the current behavior is probably OK, but
      some recent changes in the Hive code changed the semantics slightly,
      causing code that incorrectly sets "isSrcLocal" to "true" to do the
      wrong thing. It would end up moving the parent directory of the files
      into the final location, instead of the files themselves, resulting
      in a table that cannot be read.
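
      For context, these are the two query forms whose semantics differ (a sketch run from the Scala shell; table and path names are made up):

      ```scala
      // "isSrcLocal" should be true only for the LOCAL form, where the path is
      // on the client machine and files are copied; the non-LOCAL form moves
      // files that already live on the cluster's file system.
      spark.sql("LOAD DATA LOCAL INPATH '/tmp/data/records.txt' INTO TABLE t")   // isSrcLocal = true
      spark.sql("LOAD DATA INPATH 'hdfs:///tmp/data/records.txt' INTO TABLE t")  // isSrcLocal = false
      ```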
      
      I modified HiveCommandSuite so that existing "LOAD DATA" tests are run
      both in local and non-local mode, since the semantics are slightly different.
      The tests include a few new checks to make sure the semantics follow
      what Hive describes in its documentation.
      
      Tested with existing unit tests and also ran some Hive integration tests
      with a version of Hive containing the changes that surfaced the problem.
      
      Author: Marcelo Vanzin <vanzin@cloudera.com>
      
      Closes #16179 from vanzin/SPARK-18752.
      476b34c2
    • meknio's avatar
      [SPARK-16297][SQL] Fix mapping Microsoft SQLServer dialect · bf42c2db
      meknio authored
      Without this fix, running against SQL Server throws an exception with the following error:

        "Cannot specify a column width on data type bit."

      The problem stems from the fact that the "java.sql.types.BIT" type is mapped as BIT[n], when it really must be mapped as BIT.
      This concerns the Boolean type.

      As for the String type with maximum length of characters, it must be mapped as VARCHAR(MAX) instead of TEXT, which is a deprecated type in SQL Server.
      
      Here is the list of mappings for SQL Server:
      https://msdn.microsoft.com/en-us/library/ms378878(v=sql.110).aspx
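
      A minimal sketch of a dialect with these mappings (an illustration, not the exact MsSqlServerDialect code in Spark), which could be registered via `JdbcDialects.registerDialect`:

      ```scala
      import java.sql.Types
      import org.apache.spark.sql.jdbc.{JdbcDialect, JdbcType}
      import org.apache.spark.sql.types.{BooleanType, DataType, StringType}

      object SqlServerDialectSketch extends JdbcDialect {
        override def canHandle(url: String): Boolean = url.startsWith("jdbc:sqlserver")

        override def getJDBCType(dt: DataType): Option[JdbcType] = dt match {
          case BooleanType => Some(JdbcType("BIT", Types.BIT))               // not BIT(n)
          case StringType  => Some(JdbcType("VARCHAR(MAX)", Types.VARCHAR))  // TEXT is deprecated
          case _           => None
        }
      }
      ```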
      
      Closes #13944 from meknio/master.
      bf42c2db
    • Steve Loughran's avatar
      [SPARK-15844][CORE] HistoryServer doesn't come up if spark.authenticate = true · 586d1982
      Steve Loughran authored
      ## What changes were proposed in this pull request?
      
      During history server startup, the Spark configuration is examined. If `spark.authenticate` is
      set, log at debug level and set the value to false, so that the `SecurityManager` can be created.
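
      A rough sketch of that startup adjustment (an assumption about the shape of the code; `SecurityManager` is package-private, so the real method lives inside the org.apache.spark package):

      ```scala
      import org.apache.spark.{SecurityManager, SparkConf}

      // Clear the flag for the (read-only) history UI, then build the manager.
      def createSecurityManagerSketch(conf: SparkConf): SecurityManager = {
        if (conf.getBoolean("spark.authenticate", false)) {
          // the real code logs this at debug level
          conf.set("spark.authenticate", "false")
        }
        new SecurityManager(conf)
      }
      ```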
      
      ## How was this patch tested?
      
      A new test in `HistoryServerSuite` sets the `spark.authenticate` property to true and tries to create a security manager via a new package-private method `HistoryServer.createSecurityManager(SparkConf)`. This is the method used in `HistoryServer.main`. All other instantiations of a security manager in `HistoryServerSuite` have been switched to the new method, for consistency with the production code.
      
      Author: Steve Loughran <stevel@apache.org>
      
      Closes #13579 from steveloughran/history/SPARK-15844-security.
      586d1982
    • Bill Chambers's avatar
      [DOCS][MINOR] Clarify Where AccumulatorV2s are Displayed · 70ffff21
      Bill Chambers authored
      ## What changes were proposed in this pull request?
      
      This PR clarifies where accumulators will be displayed.
      
      ## How was this patch tested?
      
      No testing.
      
      Please review http://spark.apache.org/contributing.html before opening a pull request.
      
      Author: Bill Chambers <bill@databricks.com>
      Author: anabranch <wac.chambers@gmail.com>
      Author: Bill Chambers <wchambers@ischool.berkeley.edu>
      
      Closes #16180 from anabranch/improve-acc-docs.
      70ffff21
    • Tyson Condie's avatar
      [SPARK-18790][SS] Keep a general offset history of stream batches · 83a42897
      Tyson Condie authored
      ## What changes were proposed in this pull request?
      
      Instead of only keeping the minimum number of offsets around, we should keep enough information to allow us to roll back n batches and re-execute the stream starting from a given point. In particular, we should create a config in SQLConf, spark.sql.streaming.retainedBatches, that defaults to 100 and ensure that we keep enough log files in the following places to roll back the specified number of batches (see the sketch after this list):
      - the offsets that are present in each batch
      - versions of the state store
      - the file lists stored for the FileStreamSource
      - the metadata log stored by the FileStreamSink
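
      A minimal sketch of the retention rule applied to any of these logs (names here are illustrative):

      ```scala
      // Keep only the entries needed to roll back `minBatchesToRetain` batches
      // from the current batch id; anything older can be purged.
      def retain(batches: Map[Long, String], currentBatchId: Long, minBatchesToRetain: Int): Map[Long, String] = {
        val threshold = currentBatchId - minBatchesToRetain + 1
        batches.filter { case (batchId, _) => batchId >= threshold }
      }
      ```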
      
      marmbrus zsxwing
      
      ## How was this patch tested?
      
      The following tests were added.
      
      ### StreamExecution offset metadata
      Test added to StreamingQuerySuite that ensures offset metadata is garbage collected according to minBatchesToRetain
      
      ### CompactibleFileStreamLog
      Tests added in CompactibleFileStreamLogSuite to ensure that logs are purged starting before the first compaction file that precedes the current batch id - minBatchesToRetain.
      
      Please review http://spark.apache.org/contributing.html before opening a pull request.
      
      Author: Tyson Condie <tcondie@gmail.com>
      
      Closes #16219 from tcondie/offset_hist.
      83a42897
  2. Dec 11, 2016
  3. Dec 10, 2016
    • wangzhenhua's avatar
      [SPARK-18815][SQL] Fix NPE when collecting column stats for string/binary... · a29ee55a
      wangzhenhua authored
      [SPARK-18815][SQL] Fix NPE when collecting column stats for string/binary column having only null values
      
      ## What changes were proposed in this pull request?
      
      During column stats collection, the average and max length will be null if a column of string/binary type has only null values. To fix this, I use the type's default size when the avg/max length is null.
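
      A minimal sketch of the fallback (an illustration of the idea, not the exact column-stats code):

      ```scala
      import org.apache.spark.sql.types.{DataType, StringType}

      // When the aggregated avg/max length is null because every value in the
      // string/binary column is null, fall back to the type's default size.
      def lengthOrDefault(aggregatedLength: java.lang.Long, dataType: DataType): Long =
        if (aggregatedLength != null) aggregatedLength.longValue() else dataType.defaultSize.toLong

      // lengthOrDefault(null, StringType) == StringType.defaultSize
      ```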
      
      ## How was this patch tested?
      
      Add a test for handling null columns
      
      Author: wangzhenhua <wangzhenhua@huawei.com>
      
      Closes #16243 from wzhfy/nullStats.
      a29ee55a
    • hyukjinkwon's avatar
      [SPARK-18803][TESTS] Fix JarEntry-related & path-related test failures and... · e094d011
      hyukjinkwon authored
      [SPARK-18803][TESTS] Fix JarEntry-related & path-related test failures and skip some tests by path length limitation on Windows
      
      ## What changes were proposed in this pull request?
      
      This PR proposes to fix several tests that fail on Windows, as shown below, due to a few different problems.
      
      ### Incorrect path handling
      
      - FileSuite
        ```
        [info] - binary file input as byte array *** FAILED *** (500 milliseconds)
        [info]   "file:/C:/projects/spark/target/tmp/spark-e7c3a3b8-0a4b-4a7f-9ebe-7c4883e48624/record-bytestream-00000.bin" did not contain "C:\projects\spark\target\tmp\spark-e7c3a3b8-0a4b-4a7f-9ebe-7c4883e48624\record-bytestream-00000.bin" (FileSuite.scala:258)
        [info]   org.scalatest.exceptions.TestFailedException:
        [info]   at org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:500)
        ...
        ```
        ```
        [info] - Get input files via old Hadoop API *** FAILED *** (1 second, 94 milliseconds)
        [info]   Set("/C:/projects/spark/target/tmp/spark-cf5b1f8b-c5ed-43e0-8d17-546ebbfa8200/output/part-00000", "/C:/projects/spark/target/tmp/spark-cf5b1f8b-c5ed-43e0-8d17-546ebbfa8200/output/part-00001") did not equal Set("C:\projects\spark\target\tmp\spark-cf5b1f8b-c5ed-43e0-8d17-546ebbfa8200\output/part-00000", "C:\projects\spark\target\tmp\spark-cf5b1f8b-c5ed-43e0-8d17-546ebbfa8200\output/part-00001") (FileSuite.scala:535)
        [info]   org.scalatest.exceptions.TestFailedException:
        [info]   at org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:500)
        ...
        ```
      
        ```
        [info] - Get input files via new Hadoop API *** FAILED *** (313 milliseconds)
        [info]   Set("/C:/projects/spark/target/tmp/spark-12bc1540-1111-4df6-9c4d-79e0e614407c/output/part-00000", "/C:/projects/spark/target/tmp/spark-12bc1540-1111-4df6-9c4d-79e0e614407c/output/part-00001") did not equal Set("C:\projects\spark\target\tmp\spark-12bc1540-1111-4df6-9c4d-79e0e614407c\output/part-00000", "C:\projects\spark\target\tmp\spark-12bc1540-1111-4df6-9c4d-79e0e614407c\output/part-00001") (FileSuite.scala:549)
        [info]   org.scalatest.exceptions.TestFailedException:
        ...
        ```
      
      - TaskResultGetterSuite
      
        ```
        [info] - handling results larger than max RPC message size *** FAILED *** (1 second, 579 milliseconds)
        [info]   1 did not equal 0 Expect result to be removed from the block manager. (TaskResultGetterSuite.scala:129)
        [info]   org.scalatest.exceptions.TestFailedException:
        [info]   ...
        [info]   Cause: java.net.URISyntaxException: Illegal character in path at index 12: string:///C:\projects\spark\target\tmp\spark-93c485af-68da-440f-a907-aac7acd5fc25\repro\MyException.java
        [info]   at java.net.URI$Parser.fail(URI.java:2848)
        [info]   at java.net.URI$Parser.checkChars(URI.java:3021)
        ...
        ```
        ```
        [info] - failed task deserialized with the correct classloader (SPARK-11195) *** FAILED *** (0 milliseconds)
        [info]   java.lang.IllegalArgumentException: Illegal character in path at index 12: string:///C:\projects\spark\target\tmp\spark-93c485af-68da-440f-a907-aac7acd5fc25\repro\MyException.java
        [info]   at java.net.URI.create(URI.java:852)
        ...
        ```
      
      - SparkSubmitSuite
      
        ```
        [info]   java.lang.IllegalArgumentException: Illegal character in path at index 12: string:///C:\projects\spark\target\tmp\1481210831381-0\870903339\MyLib.java
        [info]   at java.net.URI.create(URI.java:852)
        [info]   at org.apache.spark.TestUtils$.org$apache$spark$TestUtils$$createURI(TestUtils.scala:112)
        ...
        ```
      
      ### Incorrect separator for JarEntry
      
      After the path fix above, `TaskResultGetterSuite` throws another exception as below:
      
      ```
      [info] - failed task deserialized with the correct classloader (SPARK-11195) *** FAILED *** (907 milliseconds)
      [info]   java.lang.ClassNotFoundException: repro.MyException
      [info]   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
      ...
      ```
      
      This is because `Paths.get` concatenates the given paths using an OS-specific separator (`\` on Windows and `/` on Linux). However, for `JarEntry` we should comply with the ZIP specification, which requires the separator to always be `/`.
      
      See `4.4.17 file name: (Variable)` in https://pkware.cachefly.net/webdocs/casestudies/APPNOTE.TXT
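
      A small sketch of the fix (the entry name here is illustrative):

      ```scala
      import java.util.jar.JarEntry

      // Build the entry name with '/' explicitly; Paths.get would produce '\'
      // on Windows, which violates the ZIP spec and breaks class lookup in the jar.
      val entryName = Seq("repro", "MyException.class").mkString("/")
      val entry = new JarEntry(entryName)
      ```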
      
      ### Long path problem on Windows
      
      Some tests in `ShuffleSuite` via `ShuffleNettySuite` were skipped for the same reason as SPARK-18718
      
      ## How was this patch tested?
      
      Manually via AppVeyor.
      
      **Before**
      
      - `FileSuite`, `TaskResultGetterSuite`,`SparkSubmitSuite`
        https://ci.appveyor.com/project/spark-test/spark/build/164-tmp-windows-base (please grep for each suite to check)
      - `ShuffleSuite`
        https://ci.appveyor.com/project/spark-test/spark/build/157-tmp-windows-base
      
      **After**
      
      - `FileSuite`
        https://ci.appveyor.com/project/spark-test/spark/build/166-FileSuite
      - `TaskResultGetterSuite`
        https://ci.appveyor.com/project/spark-test/spark/build/173-TaskResultGetterSuite
      - `SparkSubmitSuite`
        https://ci.appveyor.com/project/spark-test/spark/build/167-SparkSubmitSuite
      - `ShuffleSuite`
        https://ci.appveyor.com/project/spark-test/spark/build/176-ShuffleSuite
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #16234 from HyukjinKwon/test-errors-windows.
      e094d011
    • Michal Senkyr's avatar
      [SPARK-3359][DOCS] Fix greater-than symbols in Javadoc to allow building with Java 8 · 11432483
      Michal Senkyr authored
      ## What changes were proposed in this pull request?
      
      The API documentation build was failing when using Java 8 due to the incorrect character `>` in Javadoc.
      
      Replace `>` with literals in Javadoc to allow the build to pass.
      
      ## How was this patch tested?
      
      Documentation was built and inspected manually to ensure it still displays correctly in the browser
      
      ```
      cd docs && jekyll serve
      ```
      
      Author: Michal Senkyr <mike.senkyr@gmail.com>
      
      Closes #16201 from michalsenkyr/javadoc8-gt-fix.
      11432483
    • gatorsmile's avatar
      [SPARK-18766][SQL] Push Down Filter Through BatchEvalPython (Python UDF) · 422a45cf
      gatorsmile authored
      ### What changes were proposed in this pull request?
      Currently, when users use a Python UDF in a Filter, BatchEvalPython is always generated below FilterExec. However, not all the predicates need to be evaluated after Python UDF execution. Thus, this PR pushes the deterministic predicates down through `BatchEvalPython`.
      ```Python
      >>> df = spark.createDataFrame([(1, "1"), (2, "2"), (1, "2"), (1, "2")], ["key", "value"])
      >>> from pyspark.sql.functions import udf, col
      >>> from pyspark.sql.types import BooleanType
      >>> my_filter = udf(lambda a: a < 2, BooleanType())
      >>> sel = df.select(col("key"), col("value")).filter((my_filter(col("key"))) & (df.value < "2"))
      >>> sel.explain(True)
      ```
      Before the fix, the plan looks like
      ```
      == Optimized Logical Plan ==
      Filter ((isnotnull(value#1) && <lambda>(key#0L)) && (value#1 < 2))
      +- LogicalRDD [key#0L, value#1]
      
      == Physical Plan ==
      *Project [key#0L, value#1]
      +- *Filter ((isnotnull(value#1) && pythonUDF0#9) && (value#1 < 2))
         +- BatchEvalPython [<lambda>(key#0L)], [key#0L, value#1, pythonUDF0#9]
            +- Scan ExistingRDD[key#0L,value#1]
      ```
      
      After the fix, the plan looks like
      ```
      == Optimized Logical Plan ==
      Filter ((isnotnull(value#1) && <lambda>(key#0L)) && (value#1 < 2))
      +- LogicalRDD [key#0L, value#1]
      
      == Physical Plan ==
      *Project [key#0L, value#1]
      +- *Filter pythonUDF0#9: boolean
         +- BatchEvalPython [<lambda>(key#0L)], [key#0L, value#1, pythonUDF0#9]
            +- *Filter (isnotnull(value#1) && (value#1 < 2))
               +- Scan ExistingRDD[key#0L,value#1]
      ```
      
      ### How was this patch tested?
      Added both unit test cases for `BatchEvalPythonExec` and also add an end-to-end test case in Python test suite.
      
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #16193 from gatorsmile/pythonUDFPredicatePushDown.
      422a45cf
    • WangTaoTheTonic's avatar
      [SPARK-18606][HISTORYSERVER] remove useless elements while searching · 3a3e65ad
      WangTaoTheTonic authored
      ## What changes were proposed in this pull request?
      
      When we search applications in HistoryServer, the results include all contents between <td> tags, including useless elements like "<span title...", "a href", which makes the results confusing.
      We should remove those to make it clear.
      
      ## How was this patch tested?
      
      manual tests.
      
      Before:
      ![before](https://cloud.githubusercontent.com/assets/5276001/20662840/28bcc874-b590-11e6-9115-12fb64e49898.jpg)
      
      After:
      ![after](https://cloud.githubusercontent.com/assets/5276001/20662844/2f717af2-b590-11e6-97dc-a48b08a54247.jpg)
      
      Author: WangTaoTheTonic <wangtao111@huawei.com>
      
      Closes #16031 from WangTaoTheTonic/span.
      3a3e65ad
    • Dongjoon Hyun's avatar
      [MINOR][DOCS] Remove Apache Spark Wiki address · f3a3fed7
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      According to the notice on the Wiki front page quoted below, we can safely remove the obsolete wiki pointers in `README.md` and `docs/index.md`, too. These two lines are the last occurrences of those links.
      
      ```
      All current wiki content has been merged into pages at http://spark.apache.org as of November 2016.
      Each page links to the new location of its information on the Spark web site.
      Obsolete wiki content is still hosted here, but carries a notice that it is no longer current.
      ```
      
      ## How was this patch tested?
      
      Manual.
      
      - `README.md`: https://github.com/dongjoon-hyun/spark/tree/remove_wiki_from_readme
      - `docs/index.md`:
      ```
      cd docs
      SKIP_API=1 jekyll build
      ```
      ![screen shot 2016-12-09 at 2 53 29 pm](https://cloud.githubusercontent.com/assets/9700541/21067323/517252e2-be1f-11e6-85b1-2a4471131c5d.png)
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #16239 from dongjoon-hyun/remove_wiki_from_readme.
      f3a3fed7
    • Huaxin Gao's avatar
      [SPARK-17460][SQL] Make sure sizeInBytes in Statistics will not overflow · c5172568
      Huaxin Gao authored
      ## What changes were proposed in this pull request?
      
      1. In SparkStrategies.canBroadcast, add the check `plan.statistics.sizeInBytes >= 0`.
      2. In LocalRelation.statistics, when calculating the statistics, compute the size as BigInt so it won't overflow (see the sketch below).
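
      A minimal sketch of the two checks together (the numbers are made up):

      ```scala
      // Compute the estimate as BigInt so the product cannot overflow Long, and
      // only broadcast when the estimate is non-negative and under the threshold.
      val rowCount = 1000000L
      val rowSizeInBytes = 1024L
      val sizeInBytes: BigInt = BigInt(rowCount) * BigInt(rowSizeInBytes)

      val autoBroadcastJoinThreshold = 10L * 1024 * 1024
      val canBroadcast = sizeInBytes >= 0 && sizeInBytes <= autoBroadcastJoinThreshold
      ```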
      
      ## How was this patch tested?
      
      Added a test case to make sure statistics.sizeInBytes won't overflow.
      
      Author: Huaxin Gao <huaxing@us.ibm.com>
      
      Closes #16175 from huaxingao/spark-17460.
      c5172568
    • Burak Yavuz's avatar
      [SPARK-18811] StreamSource resolution should happen in stream execution thread · 63c91598
      Burak Yavuz authored
      ## What changes were proposed in this pull request?
      
      When you start a stream and we need to resolve the source of the stream, for example to resolve partition columns, this could take a long time. This long execution time should not block the main thread that `query.start()` was called on. It should happen in the stream execution thread, possibly before starting any triggers.
      
      ## How was this patch tested?
      
      Unit test added. Made sure test fails with no code changes.
      
      Author: Burak Yavuz <brkyvz@gmail.com>
      
      Closes #16238 from brkyvz/SPARK-18811.
      63c91598
  4. Dec 09, 2016
    • Felix Cheung's avatar
      [SPARK-18807][SPARKR] Should suppress output print for calls to JVM methods with void return values · 3e11d5bf
      Felix Cheung authored
      ## What changes were proposed in this pull request?
      
      Several SparkR APIs that call into JVM methods with void return values are getting their results printed out, especially when running in a REPL or IDE.
      example:
      ```
      > setLogLevel("WARN")
      NULL
      ```
      We should fix this to make the result more clear.
      
      Also found a small change to the return value of dropTempView in 2.1 - added doc and a test for it.
      
      ## How was this patch tested?
      
      manually - I didn't find an expect_*() method in testthat for this
      
      Author: Felix Cheung <felixcheung_m@hotmail.com>
      
      Closes #16237 from felixcheung/rinvis.
      3e11d5bf
    • Xiangrui Meng's avatar
      [SPARK-18812][MLLIB] explain "Spark ML" · d2493a20
      Xiangrui Meng authored
      ## What changes were proposed in this pull request?
      
      There has been some confusion around "Spark ML" vs. "MLlib". This PR adds some FAQ-like entries to the MLlib user guide to explain "Spark ML" and reduce the confusion.
      
      I checked the [Spark FAQ page](http://spark.apache.org/faq.html), which seems too high-level for the content here. So I added it to the MLlib user guide instead.
      
      cc: mateiz
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #16241 from mengxr/SPARK-18812.
      d2493a20
    • Davies Liu's avatar
      [SPARK-4105] retry the fetch or stage if shuffle block is corrupt · cf33a862
      Davies Liu authored
      ## What changes were proposed in this pull request?
      
      There is an outstanding issue that has existed for a long time: sometimes shuffle blocks are corrupt and can't be decompressed. We recently hit this in three different workloads; sometimes we can reproduce it on every try, sometimes we can't. I also found that when the corruption happened, the beginning and end of the blocks were correct and the corruption was in the middle. There was one case in which the block id string was corrupted by a single character. It seems very likely that the corruption is introduced by some weird machine/hardware, and the 16-bit checksum in TCP is not strong enough to catch all of the corruption.

      Unfortunately, Spark does not have checksums for shuffle blocks or broadcast, so the job will fail if any corruption happens in a shuffle block read from disk, or in broadcast blocks during network transfer. This PR tries to detect the corruption after fetching shuffle blocks by decompressing them, because most of the compression codecs already have checksums in them. It will retry the block, or fail with FetchFailure, so the previous stage can be retried on different (still random) machines.
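
      A minimal sketch of the detection idea (the decompression function is passed in to keep the sketch generic; the real code wires in the shuffle block's compression codec):

      ```scala
      import java.io.{IOException, InputStream}

      // Force a full decompression of the fetched block; most codecs carry
      // internal checksums, so corruption surfaces as an IOException, which the
      // caller can turn into a retry of the block or a FetchFailure.
      def isCorrupt(decompress: InputStream => InputStream, block: InputStream): Boolean = {
        try {
          val in = decompress(block)
          val buffer = new Array[Byte](8192)
          while (in.read(buffer) != -1) {}  // read to the end to trigger checksum checks
          false
        } catch {
          case _: IOException => true
        }
      }
      ```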
      
      Checksum for broadcast will be added by another PR.
      
      ## How was this patch tested?
      
      Added unit tests
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #15923 from davies/detect_corrupt.
      cf33a862
    • Kazuaki Ishizaki's avatar
      [SPARK-18745][SQL] Fix signed integer overflow due to toInt cast · d60ab5fd
      Kazuaki Ishizaki authored
      ## What changes were proposed in this pull request?
      
      This PR avoids a `toInt` cast producing a negative result due to signed integer overflow (e.g. 0x0000_0000_1???????L.toInt < 0). This PR performs casts only after we can ensure the value is within the range of a signed integer (the result of `max(array.length, ???)` is always an integer).
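
      A minimal sketch of the guarded cast (names are illustrative):

      ```scala
      // Validate that the Long fits in an Int before casting, so a value such as
      // 0x100000001L can never silently become a negative Int via .toInt.
      def toIntExact(length: Long): Int = {
        require(length >= Int.MinValue && length <= Int.MaxValue, s"$length does not fit in an Int")
        length.toInt
      }
      ```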
      
      ## How was this patch tested?
      
      Manually executed query68 of TPC-DS with 100TB
      
      Author: Kazuaki Ishizaki <ishizaki@jp.ibm.com>
      
      Closes #16235 from kiszk/SPARK-18745.
      d60ab5fd
    • Takeshi YAMAMURO's avatar
      [SPARK-18620][STREAMING][KINESIS] Flatten input rates in timeline for streaming + kinesis · b08b5004
      Takeshi YAMAMURO authored
      ## What changes were proposed in this pull request?
      This PR makes input rates in the timeline flatter for Spark Streaming + Kinesis.
      Since Kinesis workers fetch records and push them into block generators in bulk, the timeline in the web UI has many spikes when `maxRates` is applied (see Figure 1 below). This fix splits fetched input records into multiple `addRecords` calls.
      
      Figure.1 Apply `maxRates=500` in vanilla Spark
      <img width="1084" alt="apply_limit in_vanilla_spark" src="https://cloud.githubusercontent.com/assets/692303/20823861/4602f300-b89b-11e6-95f3-164a37061305.png">
      
      Figure.2 Apply `maxRates=500` in Spark with my patch
      <img width="1056" alt="apply_limit in_spark_with_my_patch" src="https://cloud.githubusercontent.com/assets/692303/20823882/6c46352c-b89b-11e6-81ab-afd8abfe0cfe.png">
      
      ## How was this patch tested?
      Added tests to check that input records are split into multiple `addRecords` calls.
      
      Author: Takeshi YAMAMURO <linguin.m.s@gmail.com>
      
      Closes #16114 from maropu/SPARK-18620.
      b08b5004
    • Shivaram Venkataraman's avatar
      [MINOR][SPARKR] Fix SparkR regex in copy command · be5fc6ef
      Shivaram Venkataraman authored
      Fix SparkR package copy regex. The existing code leads to
      ```
      Copying release tarballs to /home/****/public_html/spark-nightly/spark-branch-2.1-bin/spark-2.1.1-SNAPSHOT-2016_12_08_22_38-e8f351f9-bin
      mput: SparkR-*: no files found
      ```
      
      Author: Shivaram Venkataraman <shivaram@cs.berkeley.edu>
      
      Closes #16231 from shivaram/typo-sparkr-build.
      be5fc6ef
    • Xiangrui Meng's avatar
      [SPARK-17822][R] Make JVMObjectTracker a member variable of RBackend · fd48d80a
      Xiangrui Meng authored
      ## What changes were proposed in this pull request?
      
      * This PR changes `JVMObjectTracker` from an `object` to a `class` and associates an instance with each RBackend, so we can manage the lifecycle of JVM objects when there are multiple `RBackend` sessions. `RBackend.close` will clear the object tracker explicitly (see the sketch after this list).
      * I assume that `SQLUtils` and `RRunner` do not need to track JVM instances, which could be wrong.
      * Small refactor of `SerDe.sqlSerDe` to increase readability.
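
      A minimal sketch of a per-backend tracker along these lines (an illustration, not the actual class):

      ```scala
      import java.util.concurrent.ConcurrentHashMap
      import java.util.concurrent.atomic.AtomicLong

      // One instance per RBackend; closing the backend clears only its own objects.
      class JVMObjectTrackerSketch {
        private val objects = new ConcurrentHashMap[String, Object]()
        private val counter = new AtomicLong()

        def put(obj: Object): String = {
          val id = counter.incrementAndGet().toString
          objects.put(id, obj)
          id
        }
        def get(id: String): Option[Object] = Option(objects.get(id))
        def remove(id: String): Option[Object] = Option(objects.remove(id))
        def clear(): Unit = objects.clear()
      }
      ```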
      
      ## How was this patch tested?
      
      * Added unit tests for `JVMObjectTracker`.
      * Wait for Jenkins to run full tests.
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #16154 from mengxr/SPARK-17822.
      fd48d80a
    • Jacek Laskowski's avatar
      [MINOR][CORE][SQL][DOCS] Typo fixes · b162cc0c
      Jacek Laskowski authored
      ## What changes were proposed in this pull request?
      
      Typo fixes
      
      ## How was this patch tested?
      
      Local build. Awaiting the official build.
      
      Author: Jacek Laskowski <jacek@japila.pl>
      
      Closes #16144 from jaceklaskowski/typo-fixes.
      b162cc0c
    • Zhan Zhang's avatar
      [SPARK-18637][SQL] Stateful UDF should be considered as nondeterministic · 67587d96
      Zhan Zhang authored
      ## What changes were proposed in this pull request?
      
      Mark stateful UDFs as nondeterministic.
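
      A minimal sketch of how statefulness can be detected from Hive's annotation (an illustration of the idea):

      ```scala
      import org.apache.hadoop.hive.ql.udf.UDFType

      // A UDF annotated with @UDFType(stateful = true) must not be treated as
      // deterministic, or the optimizer may reorder or collapse its evaluations.
      def isUdfDeterministic(udfClass: Class[_]): Boolean = {
        val udfType = udfClass.getAnnotation(classOf[UDFType])
        // No annotation means the Hive defaults: deterministic and stateless.
        udfType == null || (udfType.deterministic() && !udfType.stateful())
      }
      ```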
      
      ## How was this patch tested?
      Added new test cases with both stateful and stateless UDFs.
      Without the patch, the test cases throw an exception:
      
      1 did not equal 10
      ScalaTestFailureLocation: org.apache.spark.sql.hive.execution.HiveUDFSuite$$anonfun$21 at (HiveUDFSuite.scala:501)
      org.scalatest.exceptions.TestFailedException: 1 did not equal 10
              at org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:500)
              at org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1555)
              ...
      
      Author: Zhan Zhang <zhanzhang@fb.com>
      
      Closes #16068 from zhzhan/state.
      67587d96
    • Felix Cheung's avatar
      Copy pyspark and SparkR packages to latest release dir too · c074c96d
      Felix Cheung authored
      ## What changes were proposed in this pull request?
      
      Copy pyspark and SparkR packages to latest release dir, as per comment [here](https://github.com/apache/spark/pull/16226#discussion_r91664822)
      
      Author: Felix Cheung <felixcheung_m@hotmail.com>
      
      Closes #16227 from felixcheung/pyrftp.
      c074c96d
    • Shivaram Venkataraman's avatar
      Copy the SparkR source package with LFTP · 934035ae
      Shivaram Venkataraman authored
      This PR adds a line in release-build.sh to copy the SparkR source archive using LFTP
      
      Author: Shivaram Venkataraman <shivaram@cs.berkeley.edu>
      
      Closes #16226 from shivaram/fix-sparkr-copy-build.
      934035ae
    • Weiqing Yang's avatar
      [SPARK-18697][BUILD] Upgrade sbt plugins · 9338aa4f
      Weiqing Yang authored
      ## What changes were proposed in this pull request?
      
      This PR is to upgrade sbt plugins. The following sbt plugins will be upgraded:
      ```
      sbteclipse-plugin: 4.0.0 -> 5.0.1
      sbt-mima-plugin: 0.1.11 -> 0.1.12
      org.ow2.asm/asm: 5.0.3 -> 5.1
      org.ow2.asm/asm-commons: 5.0.3 -> 5.1
      ```
      ## How was this patch tested?
      Pass the Jenkins build.
      
      Author: Weiqing Yang <yangweiqing001@gmail.com>
      
      Closes #16223 from weiqingy/SPARK_18697.
      9338aa4f
    • wm624@hotmail.com's avatar
      [SPARK-18349][SPARKR] Update R API documentation on ml model summary · 86a96034
      wm624@hotmail.com authored
      ## What changes were proposed in this pull request?
      In this PR, the documentation of the `summary` method is improved to the format:

      returns summary information of the fitted model, which is a list. The list includes .......

      Since `summary` in R is mainly about the model, which is not the same as the `summary` object on the Scala side (if there is one), the Scala API doc is not linked here.

      In the current documentation, some `return` entries end with `.` and some don't. `.` is added to the ones that were missing it.

      Since spark.logit's `summary` is undergoing a big refactoring, this PR doesn't include it. It will be changed when the `spark.logit` PR is merged.
      
      ## How was this patch tested?
      
      Manual build.
      
      Author: wm624@hotmail.com <wm624@hotmail.com>
      
      Closes #16150 from wangmiao1981/audit2.
      86a96034
  5. Dec 08, 2016
    • Shivaram Venkataraman's avatar
      [SPARKR][PYSPARK] Fix R source package name to match Spark version. Remove pip... · 4ac8b20b
      Shivaram Venkataraman authored
      [SPARKR][PYSPARK] Fix R source package name to match Spark version. Remove pip tar.gz from distribution
      
      ## What changes were proposed in this pull request?
      
      Fixes name of R source package so that the `cp` in release-build.sh works correctly.
      
      Issue discussed in https://github.com/apache/spark/pull/16014#issuecomment-265867125
      
      Author: Shivaram Venkataraman <shivaram@cs.berkeley.edu>
      
      Closes #16221 from shivaram/fix-sparkr-release-build-name.
      4ac8b20b
    • Tathagata Das's avatar
      [SPARK-18776][SS] Make Offset for FileStreamSource correctly formatted in JSON · 458fa332
      Tathagata Das authored
      ## What changes were proposed in this pull request?
      
      - Changed FileStreamSource to use the new FileStreamSourceOffset rather than LongOffset. The field is named `logOffset` to make it clearer that this is an offset in the file stream log.
      - Fixed a bug in FileStreamSourceLog: the endId field in FileStreamSourceLog.get(startId, endId) was not being used at all. No test caught it earlier; only my updated tests caught it.

      Other minor changes
      - Don't use batchId in the FileStreamSource, as calling it a batch id is extremely misleading. With multiple sources, it may happen that a new batch has no new data from a file source, so the offset of FileStreamSource != batchId after that batch.
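
      A minimal sketch of such an offset type (illustrative; not the exact class in the file stream source code):

      ```scala
      // A dedicated offset whose JSON form names the field, instead of a bare
      // number from LongOffset that is easy to confuse with a batch id.
      case class FileStreamSourceOffsetSketch(logOffset: Long) {
        def json: String = s"""{"logOffset":$logOffset}"""
      }

      // FileStreamSourceOffsetSketch(42).json == """{"logOffset":42}"""
      ```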
      
      ## How was this patch tested?
      
      Updated unit test.
      
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      
      Closes #16205 from tdas/SPARK-18776.
      458fa332
    • Shivaram Venkataraman's avatar
      [SPARK-18590][SPARKR] Change the R source build to Hadoop 2.6 · 202fcd21
      Shivaram Venkataraman authored
      This PR changes the SparkR source release tarball to be built using the Hadoop 2.6 profile. Previously it was using the "without hadoop" profile, which leads to an error as discussed in https://github.com/apache/spark/pull/16014#issuecomment-265843991
      
      Author: Shivaram Venkataraman <shivaram@cs.berkeley.edu>
      
      Closes #16218 from shivaram/fix-sparkr-release-build.
      202fcd21
    • Reynold Xin's avatar
      Close stale PRs. · 3261e25d
      Reynold Xin authored
      Closes #16191
      Closes #16198
      Closes #14561
      Closes #14223
      Closes #7739
      Closes #13026
      Closes #16217
      3261e25d
    • Reynold Xin's avatar
      [SPARK-18760][SQL] Consistent format specification for FileFormats · 5f894d23
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      This patch fixes the format specification in explain for file sources (Parquet and Text formats are the only two that are different from the rest):
      
      Before:
      ```
      scala> spark.read.text("test.text").explain()
      == Physical Plan ==
      *FileScan text [value#15] Batched: false, Format: org.apache.spark.sql.execution.datasources.text.TextFileFormatxyz, Location: InMemoryFileIndex[file:/scratch/rxin/spark/test.text], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<value:string>
      ```
      
      After:
      ```
      scala> spark.read.text("test.text").explain()
      == Physical Plan ==
      *FileScan text [value#15] Batched: false, Format: Text, Location: InMemoryFileIndex[file:/scratch/rxin/spark/test.text], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<value:string>
      ```
      
      Also closes #14680.
      
      ## How was this patch tested?
      Verified in spark-shell.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #16187 from rxin/SPARK-18760.
      5f894d23
    • Shixiong Zhu's avatar
      [SPARK-18751][CORE] Fix deadlock when SparkContext.stop is called in Utils.tryOrStopSparkContext · 26432df9
      Shixiong Zhu authored
      ## What changes were proposed in this pull request?
      
      When `SparkContext.stop` is called in `Utils.tryOrStopSparkContext` (the following three places), it will cause a deadlock because the `stop` method needs to wait for the thread running `stop` to exit.
      
      - ContextCleaner.keepCleaning
      - LiveListenerBus.listenerThread.run
      - TaskSchedulerImpl.start
      
      This PR adds `SparkContext.stopInNewThread` and uses it to eliminate the potential deadlock. I also removed my changes in #15775 since they are not necessary now.
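
      A minimal sketch of the idea behind `stopInNewThread` (an assumption about its shape, not the actual method body):

      ```scala
      import org.apache.spark.SparkContext

      // Run stop() on a separate daemon thread so the thread that triggered the
      // stop (e.g. the listener bus thread) never waits for itself to exit.
      def stopInNewThreadSketch(sc: SparkContext): Unit = {
        new Thread("stop-spark-context") {
          setDaemon(true)
          override def run(): Unit = sc.stop()
        }.start()
      }
      ```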
      
      ## How was this patch tested?
      
      Jenkins
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #16178 from zsxwing/fix-stop-deadlock.
      26432df9
    • Felix Cheung's avatar
      [SPARK-18590][SPARKR] build R source package when making distribution · c3d3a9d0
      Felix Cheung authored
      ## What changes were proposed in this pull request?
      
      This PR has 2 key changes. One, we are building a source package (aka bundle package) for SparkR that could be released on CRAN. Two, the official Spark binary distributions should instead include SparkR installed from this source package (which has the help/vignettes rds files needed for those to work when the SparkR package is loaded in R, whereas the earlier approach with devtools does not).
      
      But, because of various differences in how R performs different tasks, this PR is a fair bit more complicated. More details below.
      
      This PR also includes a few minor fixes.
      
      ### more details
      
      These are the additional steps in make-distribution; please see [here](https://github.com/apache/spark/blob/master/R/CRAN_RELEASE.md) for what goes into a CRAN release, which is now run during make-distribution.sh.
      1. The package needs to be installed because the first code block in the vignettes is `library(SparkR)` without a lib path
      2. `R CMD build` will build vignettes (this process runs Spark/SparkR code and captures outputs into pdf documentation)
      3. `R CMD check` on the source package will install the package and build vignettes again (this time from the source package) - this is a key step required to release an R package on CRAN
       (tests are skipped here, but they will need to pass for the CRAN release process to succeed - ideally, during release signoff we should install from the R source package and run the tests)
      4. `R CMD INSTALL` on the source package (this is the only way to generate the doc/vignettes rds files correctly, not in step 1)
       (the output of this step is what we package into the Spark dist and sparkr.zip)

      Alternatively,
         `R CMD build` should already be installing the package in a temp directory, though it might just be a matter of finding this location and setting it as the lib.loc parameter; another approach is perhaps to call `R CMD INSTALL --build pkg` instead.
       But in any case, despite installing the package multiple times, this is relatively fast.
      Building vignettes takes a while though.
      
      ## How was this patch tested?
      
      Manually, CI.
      
      Author: Felix Cheung <felixcheung_m@hotmail.com>
      
      Closes #16014 from felixcheung/rdist.
      c3d3a9d0
    • Andrew Ray's avatar
      [SPARK-16589] [PYTHON] Chained cartesian produces incorrect number of records · 3c68944b
      Andrew Ray authored
      ## What changes were proposed in this pull request?
      
      Fixes a bug in the Python implementation of the RDD cartesian product related to batching that showed up in repeated cartesian products with seemingly random results. The root cause was multiple iterators pulling from the same stream in the wrong order because of logic that ignored batching.
      
      `CartesianDeserializer` and `PairDeserializer` were changed to implement `_load_stream_without_unbatching` and borrow the one line implementation of `load_stream` from `BatchedSerializer`. The default implementation of `_load_stream_without_unbatching` was changed to give consistent results (always an iterable) so that it could be used without additional checks.
      
      `PairDeserializer` no longer extends `CartesianDeserializer`, as that was not really proper. If desired, a new common superclass could be added.
      
      Both `CartesianDeserializer` and `PairDeserializer` now only extend `Serializer` (which has no `dump_stream` implementation) since they are only meant for *de*serialization.
      
      ## How was this patch tested?
      
      Additional unit tests (sourced from #14248) plus one for testing a cartesian with zip.
      
      Author: Andrew Ray <ray.andrew@gmail.com>
      
      Closes #16121 from aray/fix-cartesian.
      3c68944b