  1. May 29, 2015
    • [SPARK-7929] Turn whitespace checker on for more token types. · 97a60cf7
      Reynold Xin authored
      This is the last batch of changes to complete SPARK-7929.
      
      Previous related PRs:
      https://github.com/apache/spark/pull/6480
      https://github.com/apache/spark/pull/6478
      https://github.com/apache/spark/pull/6477
      https://github.com/apache/spark/pull/6476
      https://github.com/apache/spark/pull/6475
      https://github.com/apache/spark/pull/6474
      https://github.com/apache/spark/pull/6473
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #6487 from rxin/whitespace-lint and squashes the following commits:
      
      b33d43d [Reynold Xin] [SPARK-7929] Turn whitespace checker on for more token types.
    • 36067ce3
      Patrick Wendell authored
    • [SPARK-7931] [STREAMING] Do not restart receiver when stopped · e714ecf2
      Tathagata Das authored
      Attempts to restart the socket receiver when it is supposed to be stopped cause undesirable error messages.
      
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      
      Closes #6483 from tdas/SPARK-7931 and squashes the following commits:
      
      09aeee1 [Tathagata Das] Do not restart receiver when stopped
    • [SPARK-7922] [MLLIB] use DataFrames for user/item factors in ALSModel · db951378
      Xiangrui Meng authored
      Expose user/item factors in DataFrames. This is to be more consistent with the pipeline API. It also helps maintain consistent APIs across languages. This PR also removed fitting params from `ALSModel`.
      
      coderxiang
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #6468 from mengxr/SPARK-7922 and squashes the following commits:
      
      7bfb1d5 [Xiangrui Meng] update ALSModel in PySpark
      1ba5607 [Xiangrui Meng] use DataFrames for user/item factors in ALS
    • [SPARK-7930] [CORE] [STREAMING] Fixed shutdown hook priorities · cd3d9a5c
      Tathagata Das authored
      The shutdown hook for temp directories had priority 100 while SparkContext's was 50, so the local root directory was deleted before the SparkContext was shut down. This leads to scary errors for running jobs at the time of shutdown. This is especially a problem when running streaming examples, where Ctrl-C is the only way to shut down.
      
      The fix in this PR is to make the temp directory shutdown priority lower than SparkContext's, so that the temp dirs are the last thing to get deleted, after the SparkContext has been shut down. Also, the DiskBlockManager shutdown priority is changed from the default 100 to temp_dir_prio + 1, so that it gets invoked just before all temp dirs are cleared.
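      
      A hedged sketch of the ordering (the helper object and exact priority values below are illustrative stand-ins, not Spark's actual internals):
      
      ```
      object ShutdownHooks {
        // The relationships mirror the description above: temp dirs below
        // SparkContext, DiskBlockManager one notch above temp dirs.
        val SparkContextPriority = 50
        val TempDirPriority = 25
        val DiskBlockManagerPriority = TempDirPriority + 1
      
        private var hooks = List.empty[(Int, () => Unit)]
      
        def add(priority: Int)(body: () => Unit): Unit = synchronized {
          hooks = (priority, body) :: hooks
        }
      
        // A single JVM hook runs them all, highest priority first, so the
        // lowest-priority temp-dir hooks run last.
        Runtime.getRuntime.addShutdownHook(new Thread {
          override def run(): Unit =
            hooks.sortBy(h => -h._1).foreach { case (_, h) => h() }
        })
      }
      ```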
      
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      
      Closes #6482 from tdas/SPARK-7930 and squashes the following commits:
      
      d7cbeb5 [Tathagata Das] Removed unnecessary line
      1514d0b [Tathagata Das] Fixed shutdown hook priorities
    • [SPARK-7932] Fix misleading scheduler delay visualization · 04ddcd4d
      Kay Ousterhout authored
      The existing code rounds down to the nearest percent when computing the proportion
      of a task's time that was spent on each phase of execution, and then computes
      the scheduler delay proportion as 100 - sum(all other proportions).  As a result,
      a few extra percent can end up in the scheduler delay. This commit eliminates
      the rounding so that the time visualizations correspond properly to the real times.
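      
      A small illustration of the effect (the numbers here are made up):
      
      ```
      // Flooring each phase to a whole percent inflates the back-computed delay.
      val total = 1000.0                      // total task time (ms)
      val phases = Seq(339.0, 339.0, 321.0)   // e.g. deserialize, compute, serialize
      val floored = phases.map(p => math.floor(p / total * 100))  // 33, 33, 32
      val delayFromFloored = 100 - floored.sum                    // 2.0 percent
      val delayExact = 100 - phases.map(_ / total * 100).sum      // ~0.1 percent
      ```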
      
      sarutak If you could take a look at this, that would be great! Not sure if there's a good
      reason to round here that I missed.
      
      cc shivaram
      
      Author: Kay Ousterhout <kayousterhout@gmail.com>
      
      Closes #6484 from kayousterhout/SPARK-7932 and squashes the following commits:
      
      1723cc4 [Kay Ousterhout] [SPARK-7932] Fix misleading scheduler delay visualization
  2. May 28, 2015
    • [MINOR] fix RegressionEvaluator doc · 834e6995
      Xiangrui Meng authored
      `make clean html` under `python/doc` returns
      ~~~
      /Users/meng/src/spark/python/pyspark/ml/evaluation.py:docstring of pyspark.ml.evaluation.RegressionEvaluator.setParams:3: WARNING: Definition list ends without a blank line; unexpected unindent.
      ~~~
      
      harsha2010
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #6469 from mengxr/fix-regression-evaluator-doc and squashes the following commits:
      
      91e2dad [Xiangrui Meng] fix RegressionEvaluator doc
    • [SPARK-7926] [PYSPARK] use the official Pyrolite release · c45d58c1
      Xiangrui Meng authored
      Switch to the official Pyrolite release from the one published under `org.spark-project`. Thanks irmen for making the releases on Maven Central. We didn't upgrade to 4.6 because we don't have enough time for QA. I excluded `serpent` from its dependencies because we don't use it in Spark.
      ~~~
      [info]   +-net.jpountz.lz4:lz4:1.3.0
      [info]   +-net.razorvine:pyrolite:4.4
      [info]   +-net.sf.py4j:py4j:0.8.2.1
      ~~~
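      
      In sbt form, the exclusion would look roughly like this (the `serpent` groupId is an assumption, not taken from the PR):
      
      ```
      // build.sbt sketch: depend on the official Pyrolite release but drop its
      // optional serpent dependency.
      libraryDependencies += ("net.razorvine" % "pyrolite" % "4.4")
        .exclude("net.razorvine", "serpent")
      ```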
      
      davies
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #6472 from mengxr/SPARK-7926 and squashes the following commits:
      
      7b3c6bf [Xiangrui Meng] use the official Pyrolite release
    • [SPARK-7927] whitespace fixes for GraphX. · b069ad23
      Reynold Xin authored
      So we can enable a whitespace enforcement rule in the style checker to save code review time.
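      
      Roughly the kind of spacing such a rule enforces (illustrative only; the authoritative token list lives in the project's scalastyle configuration):
      
      ```
      // Spacing the checker rejects (no space after ',' and ':', none around '->'):
      val bad = Map(1->"a",2->"b")
      def f(x:Int):String = bad(x)
      
      // Spacing it accepts:
      val good = Map(1 -> "a", 2 -> "b")
      def g(x: Int): String = good(x)
      ```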
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #6474 from rxin/whitespace-graphx and squashes the following commits:
      
      4d3cd26 [Reynold Xin] Fixed tests.
      869dde4 [Reynold Xin] [SPARK-7927] whitespace fixes for GraphX.
    • [SPARK-7927] whitespace fixes for core. · 7f7505d8
      Reynold Xin authored
      So we can enable a whitespace enforcement rule in the style checker to save code review time.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #6473 from rxin/whitespace-core and squashes the following commits:
      
      058195d [Reynold Xin] Fixed tests.
      fce11e9 [Reynold Xin] [SPARK-7927] whitespace fixes for core.
    • [SPARK-7927] whitespace fixes for Catalyst module. · 8da560d7
      Reynold Xin authored
      So we can enable a whitespace enforcement rule in the style checker to save code review time.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #6476 from rxin/whitespace-catalyst and squashes the following commits:
      
      650409d [Reynold Xin] Fixed tests.
      51a9e5d [Reynold Xin] [SPARK-7927] whitespace fixes for Catalyst module.
    • [SPARK-7929] Remove Bagel examples & whitespace fix for examples. · 2881d14c
      Reynold Xin authored
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #6480 from rxin/whitespace-example and squashes the following commits:
      
      8a4a3d4 [Reynold Xin] [SPARK-7929] Remove Bagel examples & whitespace fix for examples.
    • [SPARK-7927] whitespace fixes for SQL core. · ff44c711
      Reynold Xin authored
      So we can enable a whitespace enforcement rule in the style checker to save code review time.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #6477 from rxin/whitespace-sql-core and squashes the following commits:
      
      ce6e369 [Reynold Xin] Fixed tests.
      6095fed [Reynold Xin] [SPARK-7927] whitespace fixes for SQL core.
    • [SPARK-7927] [MLLIB] Enforce whitespace for more tokens in style checker · 04616b1a
      Xiangrui Meng authored
      rxin
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #6481 from mengxr/mllib-scalastyle and squashes the following commits:
      
      3ca4d61 [Xiangrui Meng] revert scalastyle config
      30961ba [Xiangrui Meng] adjust spaces in mllib/test
      571b5c5 [Xiangrui Meng] fix spaces in mllib
    • [SPARK-7826] [CORE] Suppress extra calling getCacheLocs. · 9b692bfd
      Takuya UESHIN authored
      `DAGScheduler` makes many redundant calls to `getCacheLocs`, each of which involves Akka communication.
      To improve `DAGScheduler` performance, this patch suppresses those redundant calls.
      
      In my application with over 1200 stages, the execution time went from 8.5 min to 3.8 min with my patch.
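      
      A hedged sketch of the idea (simplified; not DAGScheduler's actual fields or signatures): memoize cache locations per RDD id and skip the remote lookup entirely for RDDs that are not persisted.
      
      ```
      import scala.collection.mutable
      
      class CacheLocTracker(remoteLookup: Int => Seq[String]) {
        private val cacheLocs = new mutable.HashMap[Int, Seq[String]]
      
        def getCacheLocs(rddId: Int, isPersisted: Boolean): Seq[String] =
          cacheLocs.getOrElseUpdate(rddId,
            if (isPersisted) remoteLookup(rddId)  // at most one remote call per RDD
            else Seq.empty                        // StorageLevel.NONE: no call at all
          )
      }
      ```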
      
      Author: Takuya UESHIN <ueshin@happy-camper.st>
      
      Closes #6352 from ueshin/issues/SPARK-7826 and squashes the following commits:
      
      3d4d036 [Takuya UESHIN] Modify a test and the documentation.
      10b1b22 [Takuya UESHIN] Simplify the unit test.
      d858b59 [Takuya UESHIN] Move the storageLevel check inside the if (!cacheLocs.contains(rdd.id)) block.
      6f3125c [Takuya UESHIN] Fix scalastyle.
      b9c835c [Takuya UESHIN] Put the condition that checks if the RDD has uncached partition or not into variable for readability.
      f87f2ec [Takuya UESHIN] Get cached locations from block manager only if the storage level of the RDD is not StorageLevel.NONE.
      8248386 [Takuya UESHIN] Revert "Suppress extra calling getCacheLocs."
      a4d944a [Takuya UESHIN] Add an unit test.
      9a80fad [Takuya UESHIN] Suppress extra calling getCacheLocs.
    • [SPARK-7933] Remove Patrick's username/pw from merge script · 66c49ed6
      Kay Ousterhout authored
      Looks like this was added by accident when pwendell merged a commit back in September: fe2b1d6a
      
      Author: Kay Ousterhout <kayousterhout@gmail.com>
      
      Closes #6485 from kayousterhout/SPARK-7933 and squashes the following commits:
      
      7c6164a [Kay Ousterhout] [SPARK-7933] Remove Patrick's username/pw from merge script
    • [SPARK-7927] whitespace fixes for Hive and ThriftServer. · ee6a0e12
      Reynold Xin authored
      So we can enable a whitespace enforcement rule in the style checker to save code review time.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #6478 from rxin/whitespace-hive and squashes the following commits:
      
      e01b0e0 [Reynold Xin] Fixed tests.
      a3bba22 [Reynold Xin] [SPARK-7927] whitespace fixes for Hive and ThriftServer.
    • [SPARK-7927] whitespace fixes for streaming. · 3af0b313
      Reynold Xin authored
      So we can enable a whitespace enforcement rule in the style checker to save code review time.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #6475 from rxin/whitespace-streaming and squashes the following commits:
      
      810dae4 [Reynold Xin] Fixed tests.
      89068ad [Reynold Xin] [SPARK-7927] whitespace fixes for streaming.
    • [SPARK-7577] [ML] [DOC] add bucketizer doc · 1bd63e82
      Xusen Yin authored
      CC jkbradley
      
      Author: Xusen Yin <yinxusen@gmail.com>
      
      Closes #6451 from yinxusen/SPARK-7577 and squashes the following commits:
      
      e2dc32e [Xusen Yin] rename colums
      e350e49 [Xusen Yin] add all demos
      006ddf1 [Xusen Yin] add java test
      3238481 [Xusen Yin] add bucketizer
    • [SPARK-7853] [SQL] Fix HiveContext in Spark Shell · 572b62ca
      Yin Huai authored
      https://issues.apache.org/jira/browse/SPARK-7853
      
      This fixes the problem introduced by my change in https://github.com/apache/spark/pull/6435, which caused HiveContext creation to fail in the Spark shell because of a class loader issue.
      
      Author: Yin Huai <yhuai@databricks.com>
      
      Closes #6459 from yhuai/SPARK-7853 and squashes the following commits:
      
      37ad33e [Yin Huai] Do not use hiveQlTable at all.
      47cdb6d [Yin Huai] Move hiveconf.set to the end of setConf.
      005649b [Yin Huai] Update comment.
      35d86f3 [Yin Huai] Access TTable directly to make sure Hive will not internally use any metastore utility functions.
      3737766 [Yin Huai] Recursively find all jars.
    • Remove SizeEstimator from o.a.spark package. · 0077af22
      Reynold Xin authored
      See comments on https://github.com/apache/spark/pull/3913
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #6471 from rxin/sizeestimator and squashes the following commits:
      
      c057095 [Reynold Xin] Fixed import.
      2da478b [Reynold Xin] Remove SizeEstimator from o.a.spark package.
    • [SPARK-7198] [MLLIB] VectorAssembler should output ML attributes · 7859ab65
      Xiangrui Meng authored
      `VectorAssembler` should carry over ML attributes. For unknown attributes, we assume numeric values. This PR handles the following cases (the scalar cases are sketched after the list):
      
      1. DoubleType with ML attribute: carry over
      2. DoubleType without ML attribute: numeric value
      3. Scalar type: numeric value
      4. VectorType with all ML attributes: carry over and update names
      5. VectorType with only the number of ML attributes known: assume all numeric
      6. VectorType without ML attributes: check the first row and get the number of attributes
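      
      A hedged sketch of the scalar cases (1–3), assuming Spark's `ml.attribute` helpers (`Attribute.fromStructField`, `NumericAttribute.defaultAttr`); the vector cases need per-element handling and are omitted here:
      
      ```
      import org.apache.spark.ml.attribute.{Attribute, NumericAttribute, UnresolvedAttribute}
      import org.apache.spark.sql.types._
      
      // Resolve the output attribute for a scalar input column.
      def scalarAttr(field: StructField): Attribute = field.dataType match {
        case DoubleType =>
          val existing = Attribute.fromStructField(field)
          if (existing == UnresolvedAttribute) {
            NumericAttribute.defaultAttr.withName(field.name)  // case 2: assume numeric
          } else {
            existing.withName(field.name)                      // case 1: carry over
          }
        case IntegerType | LongType | FloatType | BooleanType =>
          NumericAttribute.defaultAttr.withName(field.name)    // case 3: numeric value
      }
      ```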
      
      jkbradley
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #6452 from mengxr/SPARK-7198 and squashes the following commits:
      
      a9d2469 [Xiangrui Meng] add space
      facdb1f [Xiangrui Meng] VectorAssembler should output ML attributes
    • [DOCS] Fixing broken "IDE setup" link in the Building Spark documentation. · 3e312a5e
      Mike Dusenberry authored
      The location of the IDE setup information has changed, so this just updates the link on the Building Spark page.
      
      Author: Mike Dusenberry <dusenberrymw@gmail.com>
      
      Closes #6467 from dusenberrymw/Fix_Broken_Link_On_Building_Spark_Doc and squashes the following commits:
      
      75c533a [Mike Dusenberry] Fixing broken "IDE setup" link in the Building Spark documentation by pointing to new location.
    • [MINOR] Fix a minor bug in PageRank Example. · c771589c
      Li Yao authored
      Fix the bug where entering only one argument causes an array-out-of-bounds exception in the PageRank example.
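      
      A hypothetical sketch of the guard (the example's real argument handling may differ):
      
      ```
      def main(args: Array[String]): Unit = {
        if (args.length < 1) {
          System.err.println("Usage: PageRank <file> [<iterations>]")
          sys.exit(1)
        }
        val file = args(0)
        val iters = if (args.length > 1) args(1).toInt else 10  // avoid args(1) OOB
      }
      ```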
      
      Author: Li Yao <hnkfliyao@gmail.com>
      
      Closes #6455 from lastland/patch-1 and squashes the following commits:
      
      de06128 [Li Yao] Fix the bug that entering only 1 arg will cause array out of bounds exception.
    • [SPARK-7911] [MLLIB] A workaround for VectorUDT serialize (or deserialize)... · 530efe3e
      Xiangrui Meng authored
      [SPARK-7911] [MLLIB] A workaround for VectorUDT serialize (or deserialize) being called multiple times
      
      ~~A PythonUDT shouldn't be serialized into external Scala types in PythonRDD. I'm not sure whether this should fix one of the bugs related to SQL UDT/UDF in PySpark.~~
      
      The fix above didn't work. So I added a workaround for this: if a Python UDF is applied to a Python UDT, this will put the Python SQL types as inputs. Still incorrect, but at least it doesn't throw exceptions on the Scala side. davies harsha2010
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #6442 from mengxr/SPARK-7903 and squashes the following commits:
      
      c257d2a [Xiangrui Meng] add a workaround for VectorUDT
    • [SPARK-7895] [STREAMING] [EXAMPLES] Move Kafka examples from scala-2.10/src to src · 000df2f0
      zsxwing authored
      Since `spark-streaming-kafka` now is published for both Scala 2.10 and 2.11, we can move `KafkaWordCount` and `DirectKafkaWordCount` from `examples/scala-2.10/src/` to `examples/src/` so that they will appear in `spark-examples-***-jar` for Scala 2.11.
      
      Author: zsxwing <zsxwing@gmail.com>
      
      Closes #6436 from zsxwing/SPARK-7895 and squashes the following commits:
      
      c6052f1 [zsxwing] Update examples/pom.xml
      0bcfa87 [zsxwing] Fix the sleep time
      b9d1256 [zsxwing] Move Kafka examples from scala-2.10/src to src
    • [SPARK-7782] fixed sort arrow issue · e838a25b
      zuxqoj authored
      Current behaviour:
      In spark UI
      ![screen shot 2015-05-27 at 3 27 51 pm](https://cloud.githubusercontent.com/assets/3919211/7837541/47d330ba-04a5-11e5-89d1-e5b11da1a513.png)
      
      In YARN
      ![screen shot 2015-05-27 at 3](https://cloud.githubusercontent.com/assets/3919211/7837594/aebd1d36-04a5-11e5-8216-86e03c07d2bd.png)
      
      In jira
      ![screen shot 2015-05-27 at 3_2](https://cloud.githubusercontent.com/assets/3919211/7837616/d3fedce2-04a5-11e5-9e68-960ed54e5d83.png)
      
      Author: zuxqoj <sbshekhar@gmail.com>
      
      Closes #6437 from zuxqoj/SPARK-7782_PR and squashes the following commits:
      
      cd068b9 [zuxqoj] [SPARK-7782] fixed sort arrow issue
    • [DOCS] Fix typo in documentation for Java UDF registration · 35410614
      Matt Wise authored
      This contribution is my original work and I license the work to the project under the project's open source license.
      
      Author: Matt Wise <mwise@quixey.com>
      
      Closes #6447 from wisematthew/fix-typo-in-java-udf-registration-doc and squashes the following commits:
      
      e7ef5f7 [Matt Wise] Fix typo in documentation for Java UDF registration
    • [SPARK-7896] Allow ChainedBuffer to store more than 2 GB · bd11b01e
      Sandy Ryza authored
      Author: Sandy Ryza <sandy@cloudera.com>
      
      Closes #6440 from sryza/sandy-spark-7896 and squashes the following commits:
      
      49d8a0d [Sandy Ryza] Fix bug introduced when reading over record boundaries
      6006856 [Sandy Ryza] Fix overflow issues
      006b4b2 [Sandy Ryza] Fix scalastyle by removing non ascii characters
      8b000ca [Sandy Ryza] Add ascii art to describe layout of data in metaBuffer
      f2053c0 [Sandy Ryza] Fix negative overflow issue
      0368c78 [Sandy Ryza] Initialize size as 0
      a5a4820 [Sandy Ryza] Use explicit types for all numbers in ChainedBuffer
      b7e0213 [Sandy Ryza] SPARK-7896. Allow ChainedBuffer to store more than 2 GB
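      
      The overflow fixes amount to keeping position arithmetic in `Long`; a hedged illustration (values are made up, not ChainedBuffer's actual layout):
      
      ```
      // Positions past 2 GB do not fit in an Int, so chunk math must stay in Long.
      val chunkSizeLog2 = 22                    // 4 MiB chunks
      val pos: Long = 3L * 1024 * 1024 * 1024   // byte position at 3 GiB
      val chunkIndex = (pos >> chunkSizeLog2).toInt  // correct: shift on the Long
      val overflowed = pos.toInt >> chunkSizeLog2    // wrong: Int overflow, negative
      ```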
  3. May 27, 2015
    • [SPARK-7873] Allow KryoSerializerInstance to create multiple streams at the same time · 852f4de2
      Josh Rosen authored
      This is a somewhat obscure bug, but I think it will seriously impact KryoSerializer users who use custom registrators that disable auto-reset. When auto-reset is disabled, this breaks things in some of our shuffle paths which actually end up creating multiple OutputStreams from the same shared SerializerInstance (which is unsafe).
      
      This was introduced by a patch (SPARK-3386) which enables serializer re-use in some of the shuffle paths, since constructing new serializer instances is actually pretty costly for KryoSerializer.  We had already fixed another corner-case (SPARK-7766) bug related to this, but missed this one.
      
      I think that the root problem here is that KryoSerializerInstance can be used in a way which is unsafe even within a single thread, e.g. by creating multiple open OutputStreams from the same instance or by interleaving deserialize and deserializeStream calls. I considered a smaller patch which adds assertions to guard against this type of "misuse" but abandoned that approach after I realized how convoluted the Scaladoc became.
      
      This patch fixes this bug by making it legal to create multiple streams from the same KryoSerializerInstance.  Internally, KryoSerializerInstance now implements a  `borrowKryo()` / `releaseKryo()` API that's backed by a "pool" of capacity 1. Each call to a KryoSerializerInstance method will borrow the Kryo, do its work, then release the serializer instance back to the pool. If the pool is empty and we need an instance, it will allocate a new Kryo on-demand. This makes it safe for multiple OutputStreams to be opened from the same serializer. If we try to release a Kryo back to the pool but the pool already contains a Kryo, then we'll just discard the new Kryo. I don't think there's a clear benefit to having a larger pool since our usages tend to fall into two cases, a) where we only create a single OutputStream and b) where we create a huge number of OutputStreams with the same lifecycle, then destroy the KryoSerializerInstance (this is what's happening in the bypassMergeSort code path that my regression test hits).
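      
      A minimal sketch of the capacity-1 pool (names are illustrative, not KryoSerializerInstance's actual members):
      
      ```
      class OnePool[T >: Null <: AnyRef](allocate: () => T) {
        private var cached: T = null  // the "pool" holds at most one instance
      
        def borrow(): T = synchronized {
          if (cached != null) { val t = cached; cached = null; t }
          else allocate()             // pool empty: allocate a fresh instance
        }
      
        def release(t: T): Unit = synchronized {
          if (cached == null) cached = t  // pool already full: drop the extra
        }
      }
      ```
      
      Each serializer method would then wrap its work in a borrow()/release() pair, so multiple open streams each get their own instance.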
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #6415 from JoshRosen/SPARK-7873 and squashes the following commits:
      
      00b402e [Josh Rosen] Initialize eagerly to fix a failing test
      ba55d20 [Josh Rosen] Add explanatory comments
      3f1da96 [Josh Rosen] Guard against duplicate close()
      ab457ca [Josh Rosen] Sketch a loan/release based solution.
      9816e8f [Josh Rosen] Add a failing test showing how deserialize() and deserializeStream() can interfere.
      7350886 [Josh Rosen] Add failing regression test for SPARK-7873
    • [SPARK-7907] [SQL] [UI] Rename tab ThriftServer to SQL. · 3c1f1baa
      Yin Huai authored
      This PR has three changes:
      1. Renaming the `ThriftServer` tab to `SQL`;
      2. Renaming the title of the tab from `ThriftServer` to `JDBC/ODBC Server`; and
      3. Renaming the title of the session page from `ThriftServer` to `JDBC/ODBC Session`.
      
      https://issues.apache.org/jira/browse/SPARK-7907
      
      Author: Yin Huai <yhuai@databricks.com>
      
      Closes #6448 from yhuai/JDBCServer and squashes the following commits:
      
      eadcc3d [Yin Huai] Update test.
      9168005 [Yin Huai] Use SQL as the tab name.
      221831e [Yin Huai] Rename ThriftServer to JDBCServer.
    • [SPARK-7897][SQL] Use DecimalType to represent unsigned bigint in JDBCRDD · a1e092ea
      Liang-Chi Hsieh authored
      JIRA: https://issues.apache.org/jira/browse/SPARK-7897
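      
      A hedged sketch of the mapping (illustrative, not JDBCRDD's actual code): an unsigned BIGINT ranges up to 2^64 − 1, which overflows a signed Long, so an exact 20-digit decimal is used instead.
      
      ```
      import java.sql.Types
      import org.apache.spark.sql.types._
      
      // Map a JDBC integral type to a Catalyst type (sketch).
      def toCatalystType(sqlType: Int, signed: Boolean): DataType = sqlType match {
        case Types.BIGINT if signed => LongType
        case Types.BIGINT           => DecimalType(20, 0)  // holds 0 .. 2^64 - 1
        case _                      => StringType          // fallback for the sketch
      }
      ```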
      
      Author: Liang-Chi Hsieh <viirya@gmail.com>
      
      Closes #6438 from viirya/jdbc_unsigned_bigint and squashes the following commits:
      
      ccb3c3f [Liang-Chi Hsieh] Use DecimalType to represent unsigned bigint.
    • [SPARK-7853] [SQL] Fixes a class loader issue in Spark SQL · db3fd054
      Cheng Hao authored
      This PR is based on PR #6396 authored by chenghao-intel. Essentially, Spark SQL should use context classloader to load SerDe classes.
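      
      A hedged sketch of the context-classloader pattern (illustrative, not the actual Spark SQL code): prefer the thread's context class loader so that classes from user jars added via --jars are visible when loading a SerDe.
      
      ```
      def loadSerDeClass(className: String): Class[_] = {
        val loader = Option(Thread.currentThread().getContextClassLoader)
          .getOrElse(getClass.getClassLoader)  // fall back when none is set
        Class.forName(className, true, loader)
      }
      ```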
      
      yhuai helped update the test case, and I fixed a bug in the original `CliSuite`: while testing the CLI tool with `runCliWithin`, we didn't append `\n` to the last query, so the last query was never executed.
      
      Original PR description is pasted below.
      
      ----
      
      ```
      bin/spark-sql --jars ./sql/hive/src/test/resources/hive-hcatalog-core-0.13.1.jar
      CREATE TABLE t1(a string, b string) ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe';
      ```
      
      Throws exception like
      
      ```
      15/05/26 00:16:33 ERROR SparkSQLDriver: Failed in [CREATE TABLE t1(a string, b string) ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe']
      org.apache.spark.sql.execution.QueryExecutionException: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Cannot validate serde: org.apache.hive.hcatalog.data.JsonSerDe
              at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$runHive$1.apply(ClientWrapper.scala:333)
              at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$runHive$1.apply(ClientWrapper.scala:310)
              at org.apache.spark.sql.hive.client.ClientWrapper.withHiveState(ClientWrapper.scala:139)
              at org.apache.spark.sql.hive.client.ClientWrapper.runHive(ClientWrapper.scala:310)
              at org.apache.spark.sql.hive.client.ClientWrapper.runSqlHive(ClientWrapper.scala:300)
              at org.apache.spark.sql.hive.HiveContext.runSqlHive(HiveContext.scala:457)
              at org.apache.spark.sql.hive.execution.HiveNativeCommand.run(HiveNativeCommand.scala:33)
              at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:57)
              at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:57)
              at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:68)
              at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
              at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
              at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:148)
              at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:87)
              at org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLContext.scala:922)
              at org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scala:922)
              at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:147)
              at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:131)
              at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:51)
              at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:727)
              at org.apache.spark.sql.hive.thriftserver.AbstractSparkSQLDriver.run(AbstractSparkSQLDriver.scala:57)
      ```
      
      Author: Cheng Hao <hao.cheng@intel.com>
      Author: Cheng Lian <lian@databricks.com>
      Author: Yin Huai <yhuai@databricks.com>
      
      Closes #6435 from liancheng/classLoader and squashes the following commits:
      
      d4c4845 [Cheng Lian] Fixes CliSuite
      75e80e2 [Yin Huai] Update the fix.
      fd26533 [Cheng Hao] scalastyle
      dd78775 [Cheng Hao] workaround for classloader of IsolatedClientLoader
    • [SPARK-7684] [SQL] Refactoring MetastoreDataSourcesSuite to workaround SPARK-7684 · b97ddff0
      Cheng Lian authored
      As stated in SPARK-7684, currently `TestHive.reset` has some execution order specific bug, which makes running specific test suites locally pretty frustrating. This PR refactors `MetastoreDataSourcesSuite` (which relies on `TestHive.reset` heavily) using various `withXxx` utility methods in `SQLTestUtils` to ask each test case to cleanup their own mess so that we can avoid calling `TestHive.reset`.
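      
      A sketch of the loan pattern behind such `withXxx` helpers (hypothetical, in the spirit of `SQLTestUtils`; `sql` is assumed to be provided by the enclosing suite):
      
      ```
      trait CleanupUtils {
        def sql(query: String): Unit  // provided by the enclosing test suite
      
        // Each test cleans up its own tables, so no global reset is needed.
        def withTable(names: String*)(body: => Unit): Unit = {
          try body
          finally names.foreach(n => sql(s"DROP TABLE IF EXISTS $n"))
        }
      }
      ```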
      
      Author: Cheng Lian <lian@databricks.com>
      Author: Yin Huai <yhuai@databricks.com>
      
      Closes #6353 from liancheng/workaround-spark-7684 and squashes the following commits:
      
      26939aa [Yin Huai] Move the initialization of jsonFilePath to beforeAll.
      a423d48 [Cheng Lian] Fixes Scala style issue
      dfe45d0 [Cheng Lian] Refactors MetastoreDataSourcesSuite to workaround SPARK-7684
      92a116d [Cheng Lian] Fixes minor styling issues
    • [SPARK-7790] [SQL] date and decimal conversion for dynamic partition key · 8161562e
      Daoyuan Wang authored
      Author: Daoyuan Wang <daoyuan.wang@intel.com>
      
      Closes #6318 from adrian-wang/dynpart and squashes the following commits:
      
      ad73b61 [Daoyuan Wang] not use sqlTestUtils for try catch because dont have sqlcontext here
      6c33b51 [Daoyuan Wang] fix according to liancheng
      f0f8074 [Daoyuan Wang] some specific types as dynamic partition
    • Removed Guava dependency from JavaTypeInference's type signature. · 6fec1a94
      Reynold Xin authored
      This should also close #6243.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #6431 from rxin/JavaTypeInference-guava and squashes the following commits:
      
      e58df3c [Reynold Xin] Removed Gauva dependency from JavaTypeInference's type signature.
    • [SPARK-7864] [UI] Fix the logic grabbing the link from table in AllJobPage · 0db76c90
      Kousuke Saruta authored
      This issue is related to #6419.
      AllJobPage doesn't currently have a "kill link", but I think we should fix the issue mentioned in #6419 here as well, just in case, to avoid accidents in the future.
      
      So it's a minor issue for now, and I haven't filed it in JIRA.
      
      Author: Kousuke Saruta <sarutak@oss.nttdata.co.jp>
      
      Closes #6432 from sarutak/remove-ambiguity-of-link and squashes the following commits:
      
      cd1a503 [Kousuke Saruta] Fixed ambiguity link issue in AllJobPage
    • [SPARK-7847] [SQL] Fixes dynamic partition directory escaping · 15459db4
      Cheng Lian authored
      Please refer to [SPARK-7847] [1] for details.
      
      [1]: https://issues.apache.org/jira/browse/SPARK-7847
      
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #6389 from liancheng/spark-7847 and squashes the following commits:
      
      935c652 [Cheng Lian] Adds test case for writing various data types as dynamic partition value
      f4fc398 [Cheng Lian] Converts partition columns to Scala type when writing dynamic partitions
      d0aeca0 [Cheng Lian] Fixes dynamic partition directory escaping
    • [SPARK-7878] Rename Stage.jobId to firstJobId · ff0ddff4
      Kay Ousterhout authored
      The previous name was confusing, because each stage can be associated with
      many jobs, and jobId is just the ID of the first job that was associated
      with the Stage. This commit also renames some of the method parameters in
      DAGScheduler.scala to clarify when the jobId refers to the first job ID
      associated with the stage (as opposed to the jobId associated with a job
      that's currently being scheduled).
      
      cc markhamstra JoshRosen (hopefully this will help prevent future bugs like SPARK-6880)
      
      Author: Kay Ousterhout <kayousterhout@gmail.com>
      
      Closes #6418 from kayousterhout/SPARK-7878 and squashes the following commits:
      
      b71a9b8 [Kay Ousterhout] [SPARK-7878] Rename Stage.jobId to firstJobId
    • [CORE] [TEST] HistoryServerSuite failed due to timezone issue · 4615081d
      scwf authored
      Follow-up for #6377: change the times to their equivalents in GMT.
      /cc squito
      
      Author: scwf <wangfei1@huawei.com>
      
      Closes #6425 from scwf/fix-HistoryServerSuite and squashes the following commits:
      
      4d37935 [scwf] fix HistoryServerSuite