  1. Sep 11, 2015
    • [SPARK-10537] [ML] document LIBSVM source options in public API doc and some minor improvements · 960d2d0a
      Xiangrui Meng authored
      We should document the options in the public API doc; otherwise it is hard to discover them without reading the code. I tried to make `DefaultSource` private and move the documentation to the package doc. However, that leaves no public class under `source.libsvm`, so the Java package doc doesn't show up in the generated HTML (http://bugs.java.com/bugdatabase/view_bug.do?bug_id=4492654). So I put the doc on `DefaultSource` instead. There are several minor updates in this PR (a usage sketch of the options follows the list):
      
      1. Do `vectorType == "sparse"` only once.
      2. Update `hashCode` and `equals`.
      3. Remove inherited doc.
      4. Delete temp dir in `afterAll`.
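      For reference, a minimal sketch of the documented options in use; the path and the numFeatures value are illustrative:
      
      ```scala
      // In a spark-shell session; reads LIBSVM data through the data source API.
      val df = sqlContext.read.format("libsvm")
        .option("numFeatures", "780")     // dimension of the feature vectors
        .option("vectorType", "sparse")   // or "dense"
        .load("data/mllib/sample_libsvm_data.txt")
      ```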
      
      Lewuathe
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #8699 from mengxr/SPARK-10537.
    • [SPARK-9773] [ML] [PySpark] Add Python API for MultilayerPerceptronClassifier · b01b2626
      Yanbo Liang authored
      Add Python API for ```MultilayerPerceptronClassifier```.
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #8067 from yanboliang/SPARK-9773.
    • [SPARK-10026] [ML] [PySpark] Implement some common Params for regression in PySpark · b656e613
      Yanbo Liang authored
      LinearRegression and LogisticRegression lack some Params on the Python side, and some Params are not shared classes, which means we need to write them for each class. These Params are listed here:
      ```scala
      HasElasticNetParam
      HasFitIntercept
      HasStandardization
      HasThresholds
      ```
      Here we implement them as shared params on the Python side and bring the LinearRegression/LogisticRegression parameters to parity with the Scala ones.
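      For context, a sketch of the Scala setters these shared params mirror (values are illustrative):
      
      ```scala
      import org.apache.spark.ml.regression.LinearRegression
      
      val lr = new LinearRegression()
        .setElasticNetParam(0.5)    // HasElasticNetParam
        .setFitIntercept(true)      // HasFitIntercept
        .setStandardization(true)   // HasStandardization
      // HasThresholds applies to classifiers such as LogisticRegression.
      ```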
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #8508 from yanboliang/spark-10026.
    • [SPARK-10518] [DOCS] Update code examples in spark.ml user guide to use LIBSVM data source instead of MLUtils · c268ca4d
      y-shimizu authored
      I updated the example code in spark.ml to use the LIBSVM data source instead of MLUtils.
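      A sketch of the shape of the change (paths are illustrative):
      
      ```scala
      // Before: load via MLUtils, then convert to a DataFrame.
      import org.apache.spark.mllib.util.MLUtils
      import sqlContext.implicits._
      val before = MLUtils.loadLibSVMFile(sc, "data/mllib/sample_libsvm_data.txt").toDF()
      
      // After: load directly through the LIBSVM data source.
      val after = sqlContext.read.format("libsvm").load("data/mllib/sample_libsvm_data.txt")
      ```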
      
      Author: y-shimizu <y.shimizu0429@gmail.com>
      
      Closes #8697 from y-shimizu/SPARK-10518.
    • [SPARK-10556] Remove explicit Scala version for sbt project build files · 9bbe33f3
      Ahir Reddy authored
      Previously, project/plugins.sbt explicitly set scalaVersion to 2.10.4. This can cause issues when using a version of sbt that is compiled against a different version of Scala (for example sbt 0.13.9 uses 2.10.5). Removing this explicit setting will cause build files to be compiled and run against the same version of Scala that sbt is compiled against.
      
      Note that this only applies to the project build files (items in project/); it is distinct from the version of Scala we target for the actual Spark compilation.
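      For context, a sketch of the removed setting:
      
      ```scala
      // project/plugins.sbt (before this change); with the line gone, sbt
      // compiles and runs the build files with its own Scala version.
      scalaVersion := "2.10.4"
      ```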
      
      Author: Ahir Reddy <ahirreddy@gmail.com>
      
      Closes #8709 from ahirreddy/sbt-scala-version-fix.
    • [SPARK-10472] [SQL] Fixes DataType.typeName for UDT · e1d7f642
      Cheng Lian authored
      Before this fix, `MyDenseVectorUDT.typeName` gives `mydensevecto`, which is not desirable.
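      A plausible reconstruction of the bug, not taken from the patch: the old derivation dropped the last four characters of the class name, assuming it always ends in "Type", so the sixteen-character "MyDenseVectorUDT" lost "rUDT":
      
      ```scala
      // Hypothetical before/after helpers, for illustration only.
      def typeNameOld(simpleName: String): String =
        simpleName.stripSuffix("$").dropRight(4).toLowerCase        // "mydensevecto"
      
      def typeNameFixed(simpleName: String): String =
        simpleName.stripSuffix("$").stripSuffix("Type").toLowerCase // "mydensevectorudt"
      ```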
      
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #8640 from liancheng/spark-10472/udt-type-name.
  2. Sep 10, 2015
    • [SPARK-10027] [ML] [PySpark] Add Python API missing methods for ml.feature · a140dd77
      Yanbo Liang authored
      Missing methods of ml.feature are listed here:
      ```StringIndexer``` lacks the parameter ```handleInvalid```.
      ```StringIndexerModel``` lacks the method ```labels```.
      ```VectorIndexerModel``` lacks the methods ```numFeatures``` and ```categoryMaps```.
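      A sketch of the Scala API these Python additions mirror (column names are illustrative):
      
      ```scala
      import org.apache.spark.ml.feature.StringIndexer
      
      val indexer = new StringIndexer()
        .setInputCol("category")
        .setOutputCol("categoryIndex")
        .setHandleInvalid("skip")  // the parameter now exposed in Python
      // After fitting, the model exposes labels; VectorIndexerModel exposes
      // numFeatures and categoryMaps.
      ```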
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #8313 from yanboliang/spark-10027.
    • [SPARK-10023] [ML] [PySpark] Unified DecisionTreeParams checkpointInterval between Scala and Python API. · 339a5271
      Yanbo Liang authored
      "checkpointInterval" is a member of DecisionTreeParams in the Scala API, which is inconsistent with the Python API; we should unify them.
      ```
      member of DecisionTreeParams <-> Scala API
      shared param for all ML Transformer/Estimator <-> Python API
      ```
      Proposal:
      "checkpointInterval" is also used by ALS, so we make it a shared param on the Scala side.
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #8528 from yanboliang/spark-10023.
    • [SPARK-9043] Serialize key, value and combiner classes in ShuffleDependency · 0eabea8a
      Matt Massie authored
      ShuffleManager implementations are currently not given type information for
      the key, value and combiner classes. Serialization of shuffle objects relies
      on objects being JavaSerializable, with methods defined for reading/writing
      the object or, alternatively, serialization via Kryo which uses reflection.
      
      Serialization systems like Avro, Thrift and Protobuf generate classes with
      zero argument constructors and explicit schema information
      (e.g. IndexedRecords in Avro have get, put and getSchema methods).
      
      By serializing the key, value and combiner class names in ShuffleDependency,
      shuffle implementations will have access to schema information when
      registerShuffle() is called.
      
      Author: Matt Massie <massie@cs.berkeley.edu>
      
      Closes #7403 from massie/shuffle-classtags.
    • [SPARK-7544] [SQL] [PySpark] pyspark.sql.types.Row implements __getitem__ · 89562a17
      Yanbo Liang authored
      pyspark.sql.types.Row implements ```__getitem__```
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #8333 from yanboliang/spark-7544.
    • Add 1.5 to master branch EC2 scripts · 42047577
      Shivaram Venkataraman authored
      This change brings it to par with `branch-1.5` (and 1.5.0 release)
      
      Author: Shivaram Venkataraman <shivaram@cs.berkeley.edu>
      
      Closes #8704 from shivaram/ec2-1.5-update.
    • [SPARK-10443] [SQL] Refactor SortMergeOuterJoin to reduce duplication · 3db72554
      Andrew Or authored
      `LeftOutputIterator` and `RightOutputIterator` are symmetrically identical and can share a lot of code. If someone makes a change in one but forgets to do the same thing in the other we'll end up with inconsistent behavior. This patch also adds inline comments to clarify the intention of the code.
      
      Author: Andrew Or <andrew@databricks.com>
      
      Closes #8596 from andrewor14/smoj-cleanup.
    • [SPARK-10049] [SPARKR] Support collecting data of ArrayType in DataFrame. · 45e3be5c
      Sun Rui authored
      This PR:
      1.  Enhances reflection in RBackend, automatically matching a Java array to a Scala Seq when finding methods. Util functions like seq() and listToSeq() on the R side can be removed, as they would conflict with the SerDe logic that transfers a Scala Seq to the R side.
      
      2.  Enhances the SerDe to support transferring a Scala Seq to the R side. Data of ArrayType in a DataFrame is observed to be of Scala Seq type after collection.
      
      3.  Supports ArrayType in createDataFrame().
      
      Author: Sun Rui <rui.sun@intel.com>
      
      Closes #8458 from sun-rui/SPARK-10049.
    • [SPARK-9990] [SQL] Create local hash join operator · d88abb7e
      zsxwing authored
      This PR includes the following changes:
      - Add SQLConf to LocalNode
      - Add HashJoinNode
      - Add ConvertToUnsafeNode and ConvertToSafeNode to test unsafe hash join.
      
      Author: zsxwing <zsxwing@gmail.com>
      
      Closes #8535 from zsxwing/SPARK-9990.
    • [SPARK-10514] [MESOS] waiting for min no of total cores acquired by Spark by implementing the sufficientResourcesRegistered method · a5ef2d06
      Akash Mishra authored
      The spark.scheduler.minRegisteredResourcesRatio configuration parameter works for YARN mode but not for Mesos coarse-grained mode.
      
      If the parameter is not specified, the default value of 0 is set for spark.scheduler.minRegisteredResourcesRatio in the base class and this method always returns true.
      
      There are no existing tests for YARN mode either, hence no test was added here.
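      A sketch of the setting this change makes effective for coarse-grained Mesos:
      
      ```scala
      import org.apache.spark.SparkConf
      
      // Wait until 80% of the requested cores are registered before scheduling.
      val conf = new SparkConf()
        .set("spark.scheduler.minRegisteredResourcesRatio", "0.8")
      ```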
      
      Author: Akash Mishra <akash.mishra20@gmail.com>
      
      Closes #8672 from SleepyThread/master.
    • [SPARK-6350] [MESOS] Fine-grained mode scheduler respects mesosExecutor.cores · f0562e8c
      Iulian Dragos authored
      This is a regression introduced in #4960; this commit fixes it and adds a test.
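      A sketch of the setting the fine-grained scheduler now respects again:
      
      ```scala
      import org.apache.spark.SparkConf
      
      // CPUs reserved by each Mesos executor, independent of task CPUs.
      val conf = new SparkConf().set("spark.mesos.mesosExecutor.cores", "0.5")
      ```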
      
      tnachen andrewor14 please review, this should be an easy one.
      
      Author: Iulian Dragos <jaguarul@gmail.com>
      
      Closes #8653 from dragos/issue/mesos/fine-grained-maxExecutorCores.
    • [SPARK-8167] Make tasks that fail from YARN preemption not fail job · af3bc59d
      mcheah authored
      The architecture is that, in YARN mode, if the driver detects that an executor has disconnected, it asks the ApplicationMaster why the executor died. If the ApplicationMaster is aware that the executor died because of preemption, all tasks associated with that executor are not marked as failed. The executor
      is still removed from the driver's list of available executors, however.
      
      There are a few open questions:
      1. Should standalone mode have a similar "get executor loss reason" as well? I localized this change as much as possible to affect only YARN, but there could be a valid case to differentiate executor losses in standalone mode as well.
      2. I make a pretty strong assumption in YarnAllocator that getExecutorLossReason(executorId) will only be called once per executor id; I do this so that I can remove the metadata from the in-memory map to avoid object accumulation. It's not clear if I'm being overly zealous to save space, however.
      
      cc vanzin specifically for review because it collided with some earlier YARN scheduling work.
      cc JoshRosen because it's similar to output commit coordination we did in the past
      cc andrewor14 for our discussion on how to get executor exit codes and loss reasons
      
      Author: mcheah <mcheah@palantir.com>
      
      Closes #8007 from mccheah/feature/preemption-handling.
    • [SPARK-10469] [DOC] Try and document the three options · a76bde9d
      Holden Karau authored
      From JIRA:
      Add documentation for tungsten-sort.
      From the mailing list "I saw a new "spark.shuffle.manager=tungsten-sort" implemented in
      https://issues.apache.org/jira/browse/SPARK-7081, but it can't be found its
      corresponding description in
      http://people.apache.org/~pwendell/spark-releases/spark-1.5.0-rc3-docs/configuration.html (currently there are only the two options 'sort' and 'hash')."
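      A sketch of selecting among the three options being documented:
      
      ```scala
      import org.apache.spark.SparkConf
      
      // Valid values: "sort" (the default), "hash", and "tungsten-sort".
      val conf = new SparkConf().set("spark.shuffle.manager", "tungsten-sort")
      ```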
      
      Author: Holden Karau <holden@pigscanfly.ca>
      
      Closes #8638 from holdenk/SPARK-10469-document-tungsten-sort.
    • [SPARK-10466] [SQL] UnsafeRow SerDe exception with data spill · e0481113
      Cheng Hao authored
      Data spill with UnsafeRow causes an assertion failure.
      
      ```
      java.lang.AssertionError: assertion failed
      	at scala.Predef$.assert(Predef.scala:165)
      	at org.apache.spark.sql.execution.UnsafeRowSerializerInstance$$anon$2.writeKey(UnsafeRowSerializer.scala:75)
      	at org.apache.spark.storage.DiskBlockObjectWriter.write(DiskBlockObjectWriter.scala:180)
      	at org.apache.spark.util.collection.ExternalSorter$$anonfun$writePartitionedFile$2$$anonfun$apply$1.apply(ExternalSorter.scala:688)
      	at org.apache.spark.util.collection.ExternalSorter$$anonfun$writePartitionedFile$2$$anonfun$apply$1.apply(ExternalSorter.scala:687)
      	at scala.collection.Iterator$class.foreach(Iterator.scala:727)
      	at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
      	at org.apache.spark.util.collection.ExternalSorter$$anonfun$writePartitionedFile$2.apply(ExternalSorter.scala:687)
      	at org.apache.spark.util.collection.ExternalSorter$$anonfun$writePartitionedFile$2.apply(ExternalSorter.scala:683)
      	at scala.collection.Iterator$class.foreach(Iterator.scala:727)
      	at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
      	at org.apache.spark.util.collection.ExternalSorter.writePartitionedFile(ExternalSorter.scala:683)
      	at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:80)
      	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
      	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
      	at org.apache.spark.scheduler.Task.run(Task.scala:88)
      	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
      ```
      
      To reproduce that with code (thanks andrewor14):
      ```scala
      bin/spark-shell --master local \
        --conf spark.shuffle.memoryFraction=0.005 \
        --conf spark.shuffle.sort.bypassMergeThreshold=0
      
      sc.parallelize(1 to 2 * 1000 * 1000, 10)
        .map { i => (i, i) }.toDF("a", "b").groupBy("b").avg().count()
      ```
      
      Author: Cheng Hao <hao.cheng@intel.com>
      
      Closes #8635 from chenghao-intel/unsafe_spill.
    • [SPARK-10301] [SPARK-10428] [SQL] Addresses comments of PR #8583 and #8509 for master · 49da38e5
      Cheng Lian authored
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #8670 from liancheng/spark-10301/address-pr-comments.
    • [SPARK-7142] [SQL] Minor enhancement to BooleanSimplification Optimizer rule · f892d927
      Yash Datta authored
      Use these in the optimizer as well:
      
                  A and (not(A) or B) => A and B
                  not(A and B) => not(A) or not(B)
                  not(A or B) => not(A) and not(B)
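      A sketch of the first rule in action, assuming a DataFrame `df` with boolean columns `a` and `b`:
      
      ```scala
      import org.apache.spark.sql.functions.col
      
      // The optimizer can now rewrite this filter condition to: a AND b.
      val filtered = df.filter(col("a") && (!col("a") || col("b")))
      ```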
      
      Author: Yash Datta <Yash.Datta@guavus.com>
      
      Closes #5700 from saucam/bool_simp.
    • [SPARK-10065] [SQL] avoid the extra copy when generating an unsafe array · 4f1daa1e
      Wenchen Fan authored
      The reason for this extra copy is that we iterate the array twice: once to calculate the elements' data size and once to copy the elements into the array buffer.
      
      A simple solution is to follow `createCodeForStruct`: we can dynamically grow the buffer when needed and thus don't need to know the data size ahead of time.
      
      This PR also includes some typo and style fixes, and some minor refactoring to make sure `input.primitive` is always a variable name, not code, when generating unsafe code.
      
      Author: Wenchen Fan <cloud0fan@outlook.com>
      
      Closes #8496 from cloud-fan/avoid-copy.
    • [SPARK-10497] [BUILD] [TRIVIAL] Handle both locations for JIRAError with python-jira · 48817cc1
      Holden Karau authored
      The location of JIRAError has moved between old and new versions of the python-jira package.
      Longer term it probably makes sense to pin to specific versions (as mentioned in https://issues.apache.org/jira/browse/SPARK-10498), but for now this makes the release tools work with both new and old versions of python-jira.
      
      Author: Holden Karau <holden@pigscanfly.ca>
      
      Closes #8661 from holdenk/SPARK-10497-release-utils-does-not-work-with-new-jira-python.
    • [MINOR] [MLLIB] [ML] [DOC] fixed typo: label for negative result should be 0.0 (original: 1.0) · 1dc7548c
      Sean Paradiso authored
      Small typo in the example for `LabeledPoint` in the MLlib docs.
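      The corrected pattern, as a sketch: a negative example carries label 0.0.
      
      ```scala
      import org.apache.spark.mllib.regression.LabeledPoint
      import org.apache.spark.mllib.linalg.Vectors
      
      val positive = LabeledPoint(1.0, Vectors.dense(1.0, 0.0, 3.0))
      val negative = LabeledPoint(0.0, Vectors.dense(2.0, 1.0, 0.0)) // the docs wrongly said 1.0
      ```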
      
      Author: Sean Paradiso <seanparadiso@gmail.com>
      
      Closes #8680 from sparadiso/docs_mllib_smalltypo.
  3. Sep 09, 2015
    • [SPARK-9772] [PYSPARK] [ML] Add Python API for ml.feature.VectorSlicer · 56a0fe5c
      Yanbo Liang authored
      Add Python API for ml.feature.VectorSlicer.
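      A sketch of the Scala VectorSlicer this Python API mirrors (column names are illustrative):
      
      ```scala
      import org.apache.spark.ml.feature.VectorSlicer
      
      val slicer = new VectorSlicer()
        .setInputCol("features")
        .setOutputCol("selectedFeatures")
        .setIndices(Array(1, 3))  // select by position; setNames(...) selects by name
      ```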
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #8102 from yanboliang/SPARK-9772.
    • [SPARK-9730] [SQL] Add Full Outer Join support for SortMergeJoin · 45de5187
      Liang-Chi Hsieh authored
      This PR is based on #8383, thanks to viirya
      
      JIRA: https://issues.apache.org/jira/browse/SPARK-9730
      
      This patch adds the Full Outer Join support for SortMergeJoin. A new class SortMergeFullJoinScanner is added to scan rows from left and right iterators. FullOuterIterator is simply a wrapper of type RowIterator to consume joined rows from SortMergeFullJoinScanner.
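      Usage sketch, assuming DataFrames `left` and `right` with a join key `key`:
      
      ```scala
      // "outer" denotes a full outer join; with this patch it can be
      // executed as a sort-merge join.
      val joined = left.join(right, left("key") === right("key"), "outer")
      ```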
      
      Closes #8383
      
      Author: Liang-Chi Hsieh <viirya@appier.com>
      Author: Davies Liu <davies@databricks.com>
      
      Closes #8579 from davies/smj_fullouter.
    • [SPARK-10461] [SQL] make sure `input.primitive` is always variable name not code at `GenerateUnsafeProjection` · 71da1633
      Wenchen Fan authored
      When we generate unsafe code inside `createCodeForXXX`, we always assign `input.primitive` to a temp variable in case `input.primitive` is expression code.
      
      This PR does some refactoring to make sure `input.primitive` is always a variable name, plus some other typo and style fixes.
      
      Author: Wenchen Fan <cloud0fan@outlook.com>
      
      Closes #8613 from cloud-fan/minor.
    • [SPARK-10481] [YARN] SPARK_PREPEND_CLASSES make spark-yarn related jar could n… · c0052d8d
      Jeff Zhang authored
      Throw a more readable exception. Please help review. Thanks
      
      Author: Jeff Zhang <zjffdu@apache.org>
      
      Closes #8649 from zjffdu/SPARK-10481.
    • [SPARK-10117] [MLLIB] Implement SQL data source API for reading LIBSVM data · 2ddeb631
      lewuathe authored
      It is convenient to implement a data source API for the LIBSVM format to get better integration with DataFrames and the ML pipeline API.
      
      Two options are implemented.
      * `numFeatures`: Specify the dimension of the features vector.
      * `featuresType`: Specify the type of the output vector; `sparse` is the default.
      
      Author: lewuathe <lewuathe@me.com>
      
      Closes #8537 from Lewuathe/SPARK-10117 and squashes the following commits:
      
      986999d [lewuathe] Change unit test phrase
      11d513f [lewuathe] Fix some reviews
      21600a4 [lewuathe] Merge branch 'master' into SPARK-10117
      9ce63c7 [lewuathe] Rewrite service loader file
      1fdd2df [lewuathe] Merge branch 'SPARK-10117' of github.com:Lewuathe/spark into SPARK-10117
      ba3657c [lewuathe] Merge branch 'master' into SPARK-10117
      0ea1c1c [lewuathe] LibSVMRelation is registered into META-INF
      4f40891 [lewuathe] Improve test suites
      5ab62ab [lewuathe] Merge branch 'master' into SPARK-10117
      8660d0e [lewuathe] Fix Java unit test
      b56a948 [lewuathe] Merge branch 'master' into SPARK-10117
      2c12894 [lewuathe] Remove unnecessary tag
      7d693c2 [lewuathe] Resolv conflict
      62010af [lewuathe] Merge branch 'master' into SPARK-10117
      a97ee97 [lewuathe] Fix some points
      aef9564 [lewuathe] Fix
      70ee4dd [lewuathe] Add Java test
      3fd8dce [lewuathe] [SPARK-10117] Implement SQL data source API for reading LIBSVM data
      40d3027 [lewuathe] Add Java test
      7056d4a [lewuathe] Merge branch 'master' into SPARK-10117
      99accaa [lewuathe] [SPARK-10117] Implement SQL data source API for reading LIBSVM data
    • [SPARK-10227] fatal warnings with sbt on Scala 2.11 · c1bc4f43
      Luc Bourlier authored
      The bulk of the changes concern the `transient` annotation on class parameters. Often the compiler doesn't generate a field for these parameters, so the transient annotation would be unnecessary.
      But if the class parameters are used in methods, then fields are created, so it is safer to keep the annotations.
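      A sketch of the pattern described above:
      
      ```scala
      import org.apache.spark.SparkConf
      
      // Referencing a constructor parameter from a method forces the compiler
      // to retain it as a field, so the @transient annotation stays meaningful.
      class Holder(@transient private val conf: SparkConf) extends Serializable {
        def master: String = conf.get("spark.master") // this use creates the field
      }
      ```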
      
      The remainder are some potential bugs, and deprecated syntax.
      
      Author: Luc Bourlier <luc.bourlier@typesafe.com>
      
      Closes #8433 from skyluc/issue/sbt-2.11.
    • [SPARK-10249] [ML] [DOC] Add Python Code Example to StopWordsRemover User Guide · 91a577d2
      Yuhao Yang authored
      jira: https://issues.apache.org/jira/browse/SPARK-10249
      
      Update the user guide since Python support was added.
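      For reference, the Scala equivalent of the kind of example added to the guide (column names are illustrative):
      
      ```scala
      import org.apache.spark.ml.feature.StopWordsRemover
      
      val remover = new StopWordsRemover()
        .setInputCol("raw")
        .setOutputCol("filtered")
      ```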
      
      Author: Yuhao Yang <hhbyyh@gmail.com>
      
      Closes #8620 from hhbyyh/swPyDocExample.
    • [SPARK-9654] [ML] [PYSPARK] Add IndexToString to PySpark · 2f6fd525
      Holden Karau authored
      Adds IndexToString to PySpark.
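      A sketch of the Scala IndexToString being exposed; it inverts a StringIndexer, mapping label indices back to the original strings:
      
      ```scala
      import org.apache.spark.ml.feature.IndexToString
      
      val converter = new IndexToString()
        .setInputCol("categoryIndex")
        .setOutputCol("originalCategory")
      ```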
      
      Author: Holden Karau <holden@pigscanfly.ca>
      
      Closes #7976 from holdenk/SPARK-9654-add-string-indexer-inverse-in-pyspark.
  4. Sep 08, 2015
    • [SPARK-10094] Pyspark ML Feature transformers marked as experimental · 0e2f2163
      noelsmith authored
      Modified class-level docstrings to mark all feature transformers in pyspark.ml as experimental.
      
      Author: noelsmith <mail@noelsmith.com>
      
      Closes #8623 from noel-smith/SPARK-10094-mark-pyspark-ml-trans-exp.
    • [SPARK-10373] [PYSPARK] move @since into pyspark from sql · 3a11e50e
      Davies Liu authored
      cc mengxr
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #8657 from davies/move_since.
    • [SPARK-10464] [MLLIB] Add WeibullGenerator for RandomDataGenerator · a1573489
      Yanbo Liang authored
      Add WeibullGenerator for RandomDataGenerator.
      #8611 needs WeibullGenerator to generate random data based on the Weibull distribution.
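      Usage sketch; the constructor follows the Weibull(alpha, beta) convention, though the exact parameter names are an assumption here:
      
      ```scala
      import org.apache.spark.mllib.random.WeibullGenerator
      
      val gen = new WeibullGenerator(1.0, 2.0) // shape alpha, scale beta (assumed order)
      gen.setSeed(42L)
      val sample = gen.nextValue()
      ```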
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #8622 from yanboliang/spark-10464.
    • [SPARK-9834] [MLLIB] implement weighted least squares via normal equation · 52fe32f6
      Xiangrui Meng authored
      The goal of this PR is to have a weighted least squares implementation that takes the normal equation approach, and hence to be able to provide R-like summary statistics and support IRLS (used by GLMs). The tests match R's lm and glmnet.
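      A hedged sketch, not the PR's code, of the normal-equation approach: accumulate A = X^T W X and b = X^T W y in one pass over the data, then solve the small k-by-k system A * beta = b. Real code would use a BLAS/LAPACK solver and add regularization; plain Gauss-Jordan elimination is enough to illustrate.
      
      ```scala
      object WlsSketch {
        // Each row: (weight, features, label).
        def solve(rows: Seq[(Double, Array[Double], Double)], k: Int): Array[Double] = {
          val a = Array.ofDim[Double](k, k) // X^T W X
          val b = new Array[Double](k)      // X^T W y
          for ((w, x, y) <- rows; i <- 0 until k) {
            b(i) += w * x(i) * y
            for (j <- 0 until k) a(i)(j) += w * x(i) * x(j)
          }
          // Gauss-Jordan elimination, no pivoting: fine for illustration since
          // the Gram matrix is symmetric positive definite for full-rank features.
          for (p <- 0 until k) {
            val piv = a(p)(p)
            for (j <- 0 until k) a(p)(j) /= piv
            b(p) /= piv
            for (r <- 0 until k if r != p) {
              val f = a(r)(p)
              for (j <- 0 until k) a(r)(j) -= f * a(p)(j)
              b(r) -= f * b(p)
            }
          }
          b
        }
      }
      ```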
      
      There are couple TODOs that can be addressed in future PRs:
      * consolidate summary statistics aggregators
      * move `dspr` to `BLAS`
      * etc
      
      It would be nice to have this merged first because it blocks couple other features.
      
      dbtsai
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #8588 from mengxr/SPARK-9834.
    • [SPARK-10071] [STREAMING] Output a warning when writing QueueInputDStream and throw a better exception when reading QueueInputDStream · 820913f5
      zsxwing authored
      Output a warning when serializing QueueInputDStream rather than throwing an exception, so that unit tests can use it. Moreover, this PR throws a better exception when deserializing QueueInputDStream, to make the problem easier for the user to find. The previous exception is hard to understand: https://issues.apache.org/jira/browse/SPARK-8553
      
      Author: zsxwing <zsxwing@gmail.com>
      
      Closes #8624 from zsxwing/SPARK-10071 and squashes the following commits:
      
      847cfa8 [zsxwing] Output a warning when writing QueueInputDStream and throw a better exception when reading QueueInputDStream
    • [RELEASE] Add more contributors & only show names in release notes. · ae74c3fa
      Reynold Xin authored
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #8660 from rxin/contrib.
    • [HOTFIX] Fix build break caused by #8494 · 2143d592
      Michael Armbrust authored
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #8659 from marmbrus/testBuildBreak.
    • [SPARK-10327] [SQL] Cache Table is not working while subquery has alias in its project list · d637a666
      Cheng Hao authored
      ```scala
          import org.apache.spark.sql.hive.execution.HiveTableScan
          sql("select key, value, key + 1 from src").registerTempTable("abc")
          cacheTable("abc")
      
          val sparkPlan = sql(
            """select a.key, b.key, c.key from
              |abc a join abc b on a.key=b.key
              |join abc c on a.key=c.key""".stripMargin).queryExecution.sparkPlan
      
          assert(sparkPlan.collect { case e: InMemoryColumnarTableScan => e }.size === 3) // failed
          assert(sparkPlan.collect { case e: HiveTableScan => e }.size === 0) // failed
      ```
      
      The actual plan is:
      
      ```
      == Parsed Logical Plan ==
      'Project [unresolvedalias('a.key),unresolvedalias('b.key),unresolvedalias('c.key)]
       'Join Inner, Some(('a.key = 'c.key))
        'Join Inner, Some(('a.key = 'b.key))
         'UnresolvedRelation [abc], Some(a)
         'UnresolvedRelation [abc], Some(b)
        'UnresolvedRelation [abc], Some(c)
      
      == Analyzed Logical Plan ==
      key: int, key: int, key: int
      Project [key#14,key#61,key#66]
       Join Inner, Some((key#14 = key#66))
        Join Inner, Some((key#14 = key#61))
         Subquery a
          Subquery abc
           Project [key#14,value#15,(key#14 + 1) AS _c2#16]
            MetastoreRelation default, src, None
         Subquery b
          Subquery abc
           Project [key#61,value#62,(key#61 + 1) AS _c2#58]
            MetastoreRelation default, src, None
        Subquery c
         Subquery abc
          Project [key#66,value#67,(key#66 + 1) AS _c2#63]
           MetastoreRelation default, src, None
      
      == Optimized Logical Plan ==
      Project [key#14,key#61,key#66]
       Join Inner, Some((key#14 = key#66))
        Project [key#14,key#61]
         Join Inner, Some((key#14 = key#61))
          Project [key#14]
           InMemoryRelation [key#14,value#15,_c2#16], true, 10000, StorageLevel(true, true, false, true, 1), (Project [key#14,value#15,(key#14 + 1) AS _c2#16]), Some(abc)
          Project [key#61]
           MetastoreRelation default, src, None
        Project [key#66]
         MetastoreRelation default, src, None
      
      == Physical Plan ==
      TungstenProject [key#14,key#61,key#66]
       BroadcastHashJoin [key#14], [key#66], BuildRight
        TungstenProject [key#14,key#61]
         BroadcastHashJoin [key#14], [key#61], BuildRight
          ConvertToUnsafe
           InMemoryColumnarTableScan [key#14], (InMemoryRelation [key#14,value#15,_c2#16], true, 10000, StorageLevel(true, true, false, true, 1), (Project [key#14,value#15,(key#14 + 1) AS _c2#16]), Some(abc))
          ConvertToUnsafe
           HiveTableScan [key#61], (MetastoreRelation default, src, None)
        ConvertToUnsafe
         HiveTableScan [key#66], (MetastoreRelation default, src, None)
      ```
      
      Author: Cheng Hao <hao.cheng@intel.com>
      
      Closes #8494 from chenghao-intel/weird_cache.