  1. May 04, 2016
    • [SPARK-14127][SQL] Native "DESC [EXTENDED | FORMATTED] <table>" DDL command · f152fae3
      Cheng Lian authored
      ## What changes were proposed in this pull request?
      
      This PR implements native `DESC [EXTENDED | FORMATTED] <table>` DDL command. Sample output:
      
      ```
      scala> spark.sql("desc extended src").show(100, truncate = false)
      +----------------------------+---------------------------------+-------+
      |col_name                    |data_type                        |comment|
      +----------------------------+---------------------------------+-------+
      |key                         |int                              |       |
      |value                       |string                           |       |
      |                            |                                 |       |
      |# Detailed Table Information|CatalogTable(`default`.`src`, ...|       |
      +----------------------------+---------------------------------+-------+
      
      scala> spark.sql("desc formatted src").show(100, truncate = false)
      +----------------------------+----------------------------------------------------------+-------+
      |col_name                    |data_type                                                 |comment|
      +----------------------------+----------------------------------------------------------+-------+
      |key                         |int                                                       |       |
      |value                       |string                                                    |       |
      |                            |                                                          |       |
      |# Detailed Table Information|                                                          |       |
      |Database:                   |default                                                   |       |
      |Owner:                      |lian                                                      |       |
      |Create Time:                |Mon Jan 04 17:06:00 CST 2016                              |       |
      |Last Access Time:           |Thu Jan 01 08:00:00 CST 1970                              |       |
      |Location:                   |hdfs://localhost:9000/user/hive/warehouse_hive121/src     |       |
      |Table Type:                 |MANAGED                                                   |       |
      |Table Parameters:           |                                                          |       |
      |  transient_lastDdlTime     |1451898360                                                |       |
      |                            |                                                          |       |
      |# Storage Information       |                                                          |       |
      |SerDe Library:              |org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe        |       |
      |InputFormat:                |org.apache.hadoop.mapred.TextInputFormat                  |       |
      |OutputFormat:               |org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat|       |
      |Num Buckets:                |-1                                                        |       |
      |Bucket Columns:             |[]                                                        |       |
      |Sort Columns:               |[]                                                        |       |
      |Storage Desc Parameters:    |                                                          |       |
      |  serialization.format      |1                                                         |       |
      +----------------------------+----------------------------------------------------------+-------+
      ```
      
      ## How was this patch tested?
      
      A test case is added to `HiveDDLSuite` to check command output.
      
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #12844 from liancheng/spark-14127-desc-table.
    • [SPARK-15029] improve error message for Generate · 6c12e801
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      This PR improves the error message for `Generate` in three cases:

      1. the generator is nested in expressions, e.g. `SELECT explode(list) + 1 FROM tbl`
      2. the generator appears more than once in SELECT, e.g. `SELECT explode(list), explode(list) FROM tbl`
      3. the generator appears in an operator other than project, e.g. `SELECT * FROM tbl SORT BY explode(list)`
      
      ## How was this patch tested?
      
      new tests in `AnalysisErrorSuite`
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #12810 from cloud-fan/bug.
    • [SPARK-14237][SQL] De-duplicate partition value appending logic in various buildReader() implementations · bc3760d4
      Cheng Lian authored
      
      ## What changes were proposed in this pull request?
      
      Currently, various `FileFormat` data sources share approximately the same code for partition value appending. This PR tries to eliminate this duplication.
      
      A new method `buildReaderWithPartitionValues()` is added to `FileFormat` with a default implementation that appends partition values to `InternalRow`s produced by the reader function returned by `buildReader()`.
      
      Special-case data sources override `buildReaderWithPartitionValues()` and simply delegate to `buildReader()`: Parquet, because it already appends partition values inside `buildReader()` due to its vectorized reader, and Text, because it doesn't support partitioning.
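      A minimal, self-contained sketch of this dispatch pattern (an illustrative analogue only — the real Spark code operates on `InternalRow`/`UnsafeRow` with richer signatures, not `Seq[Any]`):

      ```scala
      // Hypothetical simplification of the FileFormat contract described above.
      trait FileFormat {
        // Reader over the data columns only.
        def buildReader(file: String): Iterator[Seq[Any]]

        // Default implementation: append the file's partition values to every
        // row, so individual formats no longer duplicate this logic. Special
        // cases (Parquet, Text) would override this and delegate to buildReader().
        def buildReaderWithPartitionValues(
            file: String,
            partitionValues: Seq[Any]): Iterator[Seq[Any]] =
          buildReader(file).map(_ ++ partitionValues)
      }
      ```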
      
      This PR brings two benefits:
      
      1. Most obviously, it de-duplicates the partition value appending logic.
      
      2. The reader function returned by `buildReader()` is now only required to produce `InternalRow`s rather than `UnsafeRow`s if the data source doesn't override `buildReaderWithPartitionValues()`.

         This is because the safe-to-unsafe conversion is also performed while appending partition values, which makes third-party data sources (e.g. spark-avro) easier to implement since they no longer need to access private APIs involving `UnsafeRow`.
      
      ## How was this patch tested?
      
      Existing tests should cover this change.
      
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #12866 from liancheng/spark-14237-simplify-partition-values-appending.
    • [SPARK-15107][SQL] Allow varying # iterations by test case in Benchmark · 695f0e91
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      This patch changes our micro-benchmark util to allow setting different iteration numbers for different test cases. For some of our benchmarks, turning off whole-stage codegen can make the runtime 20X slower, making it very difficult to run a large number of times without substantially shortening the input cardinality.
      
      With this change, I set the default num iterations to 2 for whole stage codegen off, and 5 for whole stage codegen on. I also updated some results.
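      A hypothetical usage sketch (the `numIters` parameter name and the `runWithCodegen` helper are assumptions based on this description, not the actual patch):

      ```scala
      // Assumed shape of the per-case iteration API in the micro-benchmark util.
      val benchmark = new Benchmark("range/filter/sum", valuesPerIteration)
      benchmark.addCase("codegen = F", numIters = 2) { _ => runWithCodegen(enabled = false) }
      benchmark.addCase("codegen = T", numIters = 5) { _ => runWithCodegen(enabled = true) }
      benchmark.run()
      ```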
      
      ## How was this patch tested?
      N/A - this is a test util.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #12884 from rxin/SPARK-15107.
  2. May 03, 2016
    • [SPARK-15095][SQL] remove HiveSessionHook from ThriftServer · 348c1389
      Davies Liu authored
      ## What changes were proposed in this pull request?
      
      Remove HiveSessionHook
      
      ## How was this patch tested?
      
      No tests needed.
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #12881 from davies/remove_hooks.
    • [SPARK-14414][SQL] Make DDL exceptions more consistent · 6ba17cd1
      Andrew Or authored
      ## What changes were proposed in this pull request?
      
      Just a bunch of small tweaks on DDL exception messages.
      
      ## How was this patch tested?
      
      `DDLCommandSuite` et al.
      
      Author: Andrew Or <andrew@databricks.com>
      
      Closes #12853 from andrewor14/make-exceptions-consistent.
    • [SPARK-15097][SQL] make Dataset.sqlContext a stable identifier for imports · 9e4928b7
      Koert Kuipers authored
      ## What changes were proposed in this pull request?
      Make Dataset.sqlContext a lazy val so that it is a stable identifier and can be used in imports.
      Now this works again:
      import someDataset.sqlContext.implicits._
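      A minimal sketch of why the stable identifier matters (assumes Spark 2.x on the classpath; the object name is illustrative):

      ```scala
      import org.apache.spark.sql.{Encoders, SparkSession}

      object StableImportExample {
        def main(args: Array[String]): Unit = {
          val spark = SparkSession.builder().master("local[*]").appName("stable-id").getOrCreate()
          val ds = spark.createDataset(Seq(1, 2, 3))(Encoders.scalaInt)
          // Scala only permits imports through stable identifiers (vals, not defs),
          // so with sqlContext as a lazy val this import compiles again:
          import ds.sqlContext.implicits._
          ds.map(_ + 1).show()
          spark.stop()
        }
      }
      ```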
      
      ## How was this patch tested?
      Added a unit test to DatasetSuite that uses the import shown above.
      
      Author: Koert Kuipers <koert@tresata.com>
      
      Closes #12877 from koertkuipers/feat-sqlcontext-stable-import.
    • [SPARK-15084][PYTHON][SQL] Use builder pattern to create SparkSession in PySpark. · 0903a185
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      This is a Python port of the corresponding Scala builder pattern code. `sql.py` is modified as a target example case.
      
      ## How was this patch tested?
      
      Manual.
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #12860 from dongjoon-hyun/SPARK-15084.
    • [SPARK-14645][MESOS] Fix python running on cluster mode mesos to have non local uris · c1839c99
      Timothy Chen authored
      ## What changes were proposed in this pull request?
      
      Fix SparkSubmit to allow non-local Python URIs.
      
      ## How was this patch tested?
      
      Manually tested with mesos-spark-dispatcher
      
      Author: Timothy Chen <tnachen@gmail.com>
      
      Closes #12403 from tnachen/enable_remote_python.
    • [SPARK-14422][SQL] Improve handling of optional configs in SQLConf · a8d56f53
      Sandeep Singh authored
      ## What changes were proposed in this pull request?
      Create a new API for handling optional configs in SQLConf.
      Right now, `getConf` for an `OptionalConfigEntry[T]` returns a value of type `T` and throws an exception if the config isn't set. This adds a new method `getOptionalConf` (suggestions on naming welcome) that returns an `Option[T]` instead, so it returns `None` when the config isn't set.
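      A self-contained analogue of the proposed behavior (illustrative only, not Spark's actual `SQLConf` code):

      ```scala
      final case class OptionalConfigEntry(key: String)

      class MiniConf(settings: Map[String, String]) {
        // Current behavior: throw when the config isn't set.
        def getConf(entry: OptionalConfigEntry): String =
          settings.getOrElse(entry.key, throw new NoSuchElementException(entry.key))

        // Proposed behavior: surface absence as None instead of throwing.
        def getOptionalConf(entry: OptionalConfigEntry): Option[String] =
          settings.get(entry.key)
      }
      ```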
      
      ## How was this patch tested?
      Added a test and ran tests locally.
      
      Author: Sandeep Singh <sandeep@techaddict.me>
      
      Closes #12846 from techaddict/SPARK-14422.
    • [MINOR][DOC] Fixed some python snippets in mllib data types documentation. · c4e0fde8
      Shuai Lin authored
      ## What changes were proposed in this pull request?
      
      Some Python snippets were using Scala imports and comments.
      
      ## How was this patch tested?
      
      Generated the docs locally with `SKIP_API=1 jekyll build` and viewed the changes in the browser.
      
      Author: Shuai Lin <linshuai2012@gmail.com>
      
      Closes #12869 from lins05/fix-mllib-python-snippets.
    • [SPARK-15104] Fix spacing in log line · dbacd999
      Andrew Ash authored
      Otherwise we get logs that look like this (note the missing space before NODE_LOCAL):
      
      ```
      INFO  [2016-05-03 21:18:51,477] org.apache.spark.scheduler.TaskSetManager: Starting task 0.0 in stage 101.0 (TID 7029, localhost, partition 0,NODE_LOCAL, 1894 bytes)
      ```
      
      Author: Andrew Ash <andrew@andrewash.com>
      
      Closes #12880 from ash211/patch-7.
    • [SPARK-15102][SQL] remove delegation token support from ThriftServer · 028c6a5d
      Davies Liu authored
      ## What changes were proposed in this pull request?
      
      These APIs are only useful for Hadoop and may not work for Spark SQL.

      The APIs are kept for source compatibility.
      
      ## How was this patch tested?
      
      No unit tests needed.
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #12878 from davies/remove_delegate.
    • [SPARK-15056][SQL] Parse Unsupported Sampling Syntax and Issue Better Exceptions · 71296c04
      gatorsmile authored
      #### What changes were proposed in this pull request?
      Compared with the current Spark parser, Hive supports two extra sampling syntaxes:
      - In `ON` clauses, `rand()` indicates sampling over the entire row instead of an individual column. For example,
      
         ```SQL
         SELECT * FROM source TABLESAMPLE(BUCKET 3 OUT OF 32 ON rand()) s;
         ```
      - Users can specify the total length to be read. For example,
      
         ```SQL
         SELECT * FROM source TABLESAMPLE(100M) s;
         ```
      
      Reference:
         https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Sampling
      
      This PR parses and captures these two syntax extensions and issues better error messages.
      
      #### How was this patch tested?
      Added test cases to verify the thrown exceptions
      
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #12838 from gatorsmile/bucketOnRand.
    • [SPARK-14973][ML] The CrossValidator and TrainValidationSplit miss the seed when saving and loading · 2e2a6211
      yinxusen authored
      ## What changes were proposed in this pull request?
      
      https://issues.apache.org/jira/browse/SPARK-14973
      
      Add seed support when saving/loading CrossValidator and TrainValidationSplit.
      
      ## How was this patch tested?
      
      Spark unit test.
      
      Author: yinxusen <yinxusen@gmail.com>
      
      Closes #12825 from yinxusen/SPARK-14973.
    • [SPARK-15095][SQL] drop binary mode in ThriftServer · d6c7b2a5
      Davies Liu authored
      ## What changes were proposed in this pull request?
      
      This PR drops support for binary mode in ThriftServer; only HTTP mode is supported now, to reduce the maintenance burden.

      The code to support binary mode is kept, just in case we want it back in the future.
      
      ## How was this patch tested?
      
      Updated tests to use HTTP mode.
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #12876 from davies/hide_binary.
    • [SPARK-15073][SQL] Hide SparkSession constructor from the public · 588cac41
      Andrew Or authored
      ## What changes were proposed in this pull request?
      
      Users should use the builder pattern instead.
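      For reference, a minimal sketch of obtaining a session through the builder (standard Spark 2.x API; master and app name are illustrative):

      ```scala
      import org.apache.spark.sql.SparkSession

      // getOrCreate() returns the existing session or builds a new one; the
      // constructor itself is no longer part of the public API.
      val spark = SparkSession.builder()
        .master("local[*]")
        .appName("builder-example")
        .getOrCreate()
      ```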
      
      ## How was this patch tested?
      
      Jenkins.
      
      Author: Andrew Or <andrew@databricks.com>
      
      Closes #12873 from andrewor14/spark-session-constructor.
    • [SPARK-11316] coalesce doesn't handle UnionRDD with partial locality properly · 83ee92f6
      Thomas Graves authored
      ## What changes were proposed in this pull request?
      
      coalesce doesn't handle a UnionRDD with partial locality properly. I had a user whose UnionRDD was made up of a MapPartitionsRDD without preferred locations and a checkpointed RDD with preferred locations (read from HDFS). It took the driver over 20 minutes to set up the groups and put the partitions into them before it even started any tasks. Perhaps even worse, it didn't end up with the number of partitions he was asking for, because it didn't put a partition in each of the groups properly.

      The changes in this patch get rid of an n^2 while loop that was causing the 20-minute setup, properly distribute the partitions so that each group gets at least one, and replace the rotation iterator, which fetched the preferred locations many times, with fetching all of the preferred locations once up front.

      Note that the n^2 while loop I removed in setupGroups took so long because all of the partitions with preferred locations were already assigned to a group, so it basically looped through every single one without ever being able to assign it. In this case there were 960 partitions with preferred locations and 1020 without, and the outer while loop ran 319 times because that is the number of groups left to create. Each pass through the inner while loop goes off to HDFS to get the block locations, so this is extremely inefficient.
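      A sketch of the kind of job that hits this path (assumed setup with a SparkContext `sc`; the path and partition counts are illustrative):

      ```scala
      // Union of an RDD without preferred locations and one with HDFS block
      // locations, then coalesce -- the scenario described above.
      val noLocality   = sc.parallelize(1 to 1000000, 1020).map(_.toString)
      val withLocality = sc.textFile("hdfs:///data/events")  // has preferred locations
      noLocality.union(withLocality).coalesce(320).count()
      ```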
      
      ## How was this patch tested?
      
      Added unit tests for this case and ran the existing applicable ones to make sure there were no regressions.
      Also manually tested on the user's production job to make sure it fixed their issue. It created the proper number of partitions and now takes about 6 seconds rather than 20 minutes.
      I also ran some basic manual tests with spark-shell, coalescing to a smaller number, the same number, and a greater number with shuffle.
      
      Author: Thomas Graves <tgraves@prevailsail.corp.gq1.yahoo.com>
      
      Closes #11327 from tgravescs/SPARK-11316.
    • [SPARK-14521] [SQL] StackOverflowError in Kryo when executing TPC-DS · a4aed717
      yzhou2001 authored
      ## What changes were proposed in this pull request?
      
      Observed a StackOverflowError in Kryo when executing TPC-DS Query 27. The Spark thrift server disables Kryo reference tracking (if not specified in the conf); when "spark.kryo.referenceTracking" is set to true explicitly in spark-defaults.conf, the query executes successfully. The root cause is that the TaskMemoryManager inside MemoryConsumer and LongToUnsafeRowMap was not transient and thus was serialized and broadcast around from within LongHashedRelation, which could potentially cause a circular reference inside Kryo. But the TaskMemoryManager is per task and should not be passed around in the first place. This fix makes it transient.
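      A self-contained analogue of the nature of the fix (illustrative; the real classes are Spark internals):

      ```scala
      // A per-task resource that should never travel with serialized data.
      class PerTaskMemoryManager

      // Marking the field @transient keeps it out of the serialized (broadcast)
      // form, so Kryo never follows the reference back into task-local state.
      class RowMap(@transient val manager: PerTaskMemoryManager,
                   val data: Array[Long]) extends Serializable
      ```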
      
      ## How was this patch tested?
      core/test, hive/test, sql/test, catalyst/test, dev/lint-scala, org.apache.spark.sql.hive.execution.HiveCompatibilitySuite, dev/scalastyle, and a
      manual test of TPC-DS Query 27 with 1GB of data, but without the "limit 100", which would cause an NPE due to SPARK-14752.
      
      Author: yzhou2001 <yzhou_1999@yahoo.com>
      
      Closes #12598 from yzhou2001/master.
    • [SPARK-14234][CORE] Executor crashes for TaskRunner thread interruption · 659f635d
      Devaraj K authored
      ## What changes were proposed in this pull request?
      Resetting the task interruption status before updating the task status.
      
      ## How was this patch tested?
      Verified manually by running multiple applications; with the patch changes, the executor doesn't crash and updates the status to the driver without any exceptions.
      
      Author: Devaraj K <devaraj@apache.org>
      
      Closes #12031 from devaraj-kavali/SPARK-14234.
    • [SPARK-15059][CORE] Remove fine-grained lock in ChildFirstURLClassLoader to avoid dead lock · f5623b46
      Zheng Tan authored
      ## What changes were proposed in this pull request?
      
      In some cases, the fine-grained lock races with the class-loader lock and has caused deadlocks. It is safe to drop this fine-grained lock and load all classes under the single class-loader lock.
      
      Author: Zheng Tan <zheng.tan@hulu.com>
      
      Closes #12857 from tankkyo/master.
    • [SPARK-15082][CORE] Improve unit test coverage for AccumulatorV2 · 84b3a4a8
      Sandeep Singh authored
      ## What changes were proposed in this pull request?
      Added tests for ListAccumulator and LegacyAccumulatorWrapper; the ListAccumulator test is similar to the tests for the old collection accumulators.
      
      ## How was this patch tested?
      Ran tests locally.
      
      cc rxin
      
      Author: Sandeep Singh <sandeep@techaddict.me>
      
      Closes #12862 from techaddict/SPARK-15082.
    • [SPARK-9819][STREAMING][DOCUMENTATION] Clarify doc for invReduceFunc in incremental versions of reduceByWindow · 439e3610
      François Garillot authored
      
      - that reduceFunc and invReduceFunc should be associative
      - that in iterated applications of invReduceFunc, the intermediate result is its first argument (see the sketch below)
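      A minimal sketch of the incremental form being documented (assumed local setup; not code from this patch):

      ```scala
      import org.apache.spark.SparkConf
      import org.apache.spark.streaming.{Seconds, StreamingContext}

      object InvReduceSketch {
        def main(args: Array[String]): Unit = {
          val ssc = new StreamingContext(
            new SparkConf().setMaster("local[2]").setAppName("invReduce"), Seconds(10))
          ssc.checkpoint("/tmp/invreduce-checkpoint")  // the incremental form requires checkpointing
          val lengths = ssc.socketTextStream("localhost", 9999).map(_.length)
          val windowedSum = lengths.reduceByWindow(
            _ + _,                    // reduceFunc: must be associative
            (acc, old) => acc - old,  // invReduceFunc: the running result is its first argument
            Seconds(30), Seconds(10))
          windowedSum.print()
          ssc.start()
          ssc.awaitTermination()
        }
      }
      ```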
      
      Author: François Garillot <francois@garillot.net>
      
      Closes #8103 from huitseeker/issue/invReduceFuncDoc.
    • [SPARK-15087][CORE][SQL] Remove AccumulatorV2.localValue and keep only value · ca813330
      Sandeep Singh authored
      ## What changes were proposed in this pull request?
      Remove AccumulatorV2.localValue and keep only value
      
      ## How was this patch tested?
      existing tests
      
      Author: Sandeep Singh <sandeep@techaddict.me>
      
      Closes #12865 from techaddict/SPARK-15087.
    • [SPARK-14860][TESTS] Create a new Waiter in reset to bypass an issue of ScalaTest's Waiter.wait · b545d752
      Shixiong Zhu authored
      ## What changes were proposed in this pull request?
      
      This PR updates `QueryStatusCollector.reset` to create a new Waiter instead of calling `await(1 millisecond)`, to bypass a ScalaTest issue where Waiter.await may block forever.
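      A sketch of the workaround's shape (the class and field names are illustrative; assumes ScalaTest 3.x's `Waiters`):

      ```scala
      import org.scalatest.concurrent.Waiters._

      class QueryStatusCollectorLike {
        @volatile private var waiter = new Waiter

        // Instead of draining the old Waiter -- which can block forever due to
        // the ScalaTest issue mentioned above -- replace it with a fresh one.
        def reset(): Unit = { waiter = new Waiter }
      }
      ```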
      
      ## How was this patch tested?
      
      I created a local stress test that runs the code in `test("event ordering")` 100 times. It cannot pass without this patch.
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #12623 from zsxwing/flaky-test.
    • [SPARK-14716][SQL] Added support for partitioning in FileStreamSink · 4ad492c4
      Tathagata Das authored
      ## What changes were proposed in this pull request?
      
      Support partitioning in the file stream sink. This is implemented using a new, but simpler code path for writing parquet files - both unpartitioned and partitioned. This new code path does not use Output Committers, as we will eventually write the file names to the metadata log for "committing" them.
      
      This patch duplicates < 100 LOC from the WriterContainer, but it's far simpler than WriterContainer as it does not involve output committing. In addition, it introduces new APIs in FileFormat and OutputWriterFactory in an attempt to simplify them (no Job in the `FileFormat` API, and no bucketing and other extras in `OutputWriterFactory.newInstance()`).
      
      ## Tests
      - New unit tests to test the FileStreamSinkWriter for partitioned and unpartitioned files
      - New unit test to partially test the FileStreamSink for partitioned files (does not test recovery of partition column data, as that requires change in the StreamFileCatalog, future PR).
      - Updated FileStressSuite to test number of records read from partitioned output files.
      
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      
      Closes #12409 from tdas/streaming-partitioned-parquet.
    • [SPARK-14884][SQL][STREAMING][WEBUI] Fix call site for continuous queries · 5bd9a2f6
      Liwei Lin authored
      ## What changes were proposed in this pull request?
      
      Since we've been processing continuous queries in separate threads, the call sites show up as `run at <unknown>:0`. That's not wrong, but it provides very little information; in addition, we cannot distinguish two queries from their call sites alone.
      
      This patch fixes this.
      
      ### Before
      [Jobs Tab]
      ![s1a](https://cloud.githubusercontent.com/assets/15843379/14766101/a47246b2-0a30-11e6-8d81-06a9a600113b.png)
      [SQL Tab]
      ![s1b](https://cloud.githubusercontent.com/assets/15843379/14766102/a4750226-0a30-11e6-9ada-773d977d902b.png)
      ### After
      [Jobs Tab]
      ![s2a](https://cloud.githubusercontent.com/assets/15843379/14766104/a89705b6-0a30-11e6-9830-0d40ec68527b.png)
      [SQL Tab]
      ![s2b](https://cloud.githubusercontent.com/assets/15843379/14766103/a8966728-0a30-11e6-8e4d-c2e326400478.png)
      
      ## How was this patch tested?
      
      Manually checks - see screenshots above.
      
      Author: Liwei Lin <lwlin7@gmail.com>
      
      Closes #12650 from lw-lin/fix-call-site.
    • [SPARK-15088] [SQL] Remove SparkSqlSerializer · 5503e453
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      This patch removes SparkSqlSerializer. I believe this is now dead code.
      
      ## How was this patch tested?
      Removed a test case related to it.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #12864 from rxin/SPARK-15088.
    • [SPARK-15091][SPARKR] Fix warnings and a failure in SparkR test cases with testthat version 1.0.1 · 8b6491fc
      Sun Rui authored
      ## What changes were proposed in this pull request?
      Fix warnings and a failure in SparkR test cases with testthat version 1.0.1
      
      ## How was this patch tested?
      SparkR unit test cases.
      
      Author: Sun Rui <sunrui2016@gmail.com>
      
      Closes #12867 from sun-rui/SPARK-15091.
    • [SPARK-14971][ML][PYSPARK] PySpark ML Params setter code clean up · d26f7cb0
      Yanbo Liang authored
      ## What changes were proposed in this pull request?
      PySpark ML Params setter code clean up.
      For example,
      ```setInputCol``` can be simplified from
      ```
      self._set(inputCol=value)
      return self
      ```
      to:
      ```
      return self._set(inputCol=value)
      ```
      This is a pretty big sweep, and we cleaned up wherever possible.
      ## How was this patch tested?
      Existing unit tests.
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #12749 from yanboliang/spark-14971.
    • [SPARK-15057][GRAPHX] Remove stale TODO comment for making `enum` in GraphGenerators · 46965cd0
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      This PR removes a stale TODO comment in `GraphGenerators.scala`
      
      ## How was this patch tested?
      
      Just a comment was removed.
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #12839 from dongjoon-hyun/SPARK-15057.
    • [SPARK-14897][CORE] Upgrade Jetty to latest version of 8 · 57ac7c18
      Sean Owen authored
      ## What changes were proposed in this pull request?
      
      Update Jetty 8.1 to the latest 2016/02 release, from a 2013/10 release, for security and bug fixes. This does not necessarily resolve the JIRA, as it's still worth considering an update to 9.3.
      
      ## How was this patch tested?
      
      Jenkins tests
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #12842 from srowen/SPARK-14897.
    • [SPARK-15081] Move AccumulatorV2 and subclasses into util package · d557a5e0
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      This patch moves AccumulatorV2 and subclasses into util package.
      
      ## How was this patch tested?
      Updated relevant tests.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #12863 from rxin/SPARK-15081.
    • [SPARK-15053][BUILD] Fix Java Lint errors on Hive-Thriftserver module · a7444570
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      This issue fixes or hides 181 Java linter errors introduced by SPARK-14987, which copied Hive service code from Hive. We should clean up these errors before releasing Spark 2.0.
      
      - Fix UnusedImports (15 lines), RedundantModifier (14 lines), SeparatorWrap (9 lines), MethodParamPad (6 lines), FileTabCharacter (5 lines), ArrayTypeStyle (3 lines), ModifierOrder (3 lines), RedundantImport (1 line), CommentsIndentation (1 line), UpperEll (1 line), FallThrough (1 line), OneStatementPerLine (1 line), NewlineAtEndOfFile (1 line) errors.
      - Ignore `LineLength` errors under `hive/service/*` (118 lines).
      - Ignore `MethodName` error in `PasswdAuthenticationProvider.java` (1 line).
      - Ignore `NoFinalizer` error in `ThreadWithGarbageCleanup.java` (1 line).
      
      ## How was this patch tested?
      
      After passing Jenkins building, run `dev/lint-java` manually.
      ```bash
      $ dev/lint-java
      Checkstyle checks passed.
      ```
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #12831 from dongjoon-hyun/SPARK-15053.
    • [MINOR][DOCS] Fix type Information in Quick Start and Programming Guide · dfd9723d
      Sandeep Singh authored
      Author: Sandeep Singh <sandeep@techaddict.me>
      
      Closes #12841 from techaddict/improve_docs_1.
    • [SPARK-6717][ML] Clear shuffle files after checkpointing in ALS · f10ae4b1
      Holden Karau authored
      ## What changes were proposed in this pull request?
      
      When ALS is run with a checkpoint interval, materialize the current state during the checkpoint and clean up the previous shuffles (non-blocking).
      
      ## How was this patch tested?
      
      Existing ALS unit tests, new ALS checkpoint cleanup unit tests added & shuffle files checked after ALS w/checkpointing run.
      
      Author: Holden Karau <holden@us.ibm.com>
      Author: Holden Karau <holden@pigscanfly.ca>
      
      Closes #11919 from holdenk/SPARK-6717-clear-shuffle-files-after-checkpointing-in-ALS.
    • [SPARK-13749][SQL][FOLLOW-UP] Faster pivot implementation for many distinct values with two phase aggregation · d8f528ce
      Andrew Ray authored
      
      ## What changes were proposed in this pull request?
      
      This is a follow up PR for #11583. It makes 3 lazy vals into just vals and adds unit test coverage.
      
      ## How was this patch tested?
      
      Existing unit tests and additional unit tests.
      
      Author: Andrew Ray <ray.andrew@gmail.com>
      
      Closes #12861 from aray/fast-pivot-follow-up.
  3. May 02, 2016