  1. Aug 03, 2014
    • [SPARK-2784][SQL] Deprecate hql() method in favor of a config option, 'spark.sql.dialect' · 236dfac6
      Michael Armbrust authored
      Many users have reported being confused by the distinction between the `sql` and `hql` methods.  Specifically, many users think that `sql(...)` cannot be used to read Hive tables.  In this PR I introduce a new configuration option `spark.sql.dialect` that picks which dialect will be used for parsing.  For SQLContext this must be set to `sql`.  In `HiveContext` it defaults to `hiveql` but can also be set to `sql`.
      
      The `hql` and `hiveql` methods continue to act the same but are now marked as deprecated.
      
      **This is a possibly breaking change for some users unless they set the dialect manually, though this is unlikely.**
      
      For example: `hiveContext.sql("SELECT 1")` will now throw a parsing exception by default.
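
      As a rough sketch of opting back into the old behavior (assuming a `HiveContext` named `hiveContext`; `setConf` here is the generic SQL config setter, not something introduced by this patch):
      ```scala
      // Sketch only: switch dialects through the new config option.
      hiveContext.setConf("spark.sql.dialect", "sql")    // use the basic SQL parser
      hiveContext.sql("SELECT 1")                        // parses again
      hiveContext.setConf("spark.sql.dialect", "hiveql") // back to the HiveQL default
      ```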
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #1746 from marmbrus/sqlLanguageConf and squashes the following commits:
      
      ad375cc [Michael Armbrust] Merge remote-tracking branch 'apache/master' into sqlLanguageConf
      20c43f8 [Michael Armbrust] override function instead of just setting the value
      7e4ae93 [Michael Armbrust] Deprecate hql() method in favor of a config option, 'spark.sql.dialect'
      236dfac6
    • [SPARK-2197] [mllib] Java DecisionTree bug fix and ease-of-use · 2998e38a
      Joseph K. Bradley authored
      Bug fix: Before, when an RDD was created in Java and passed to DecisionTree.train(), the fake class tag caused problems.
      * Fix: DecisionTree: Used new RDD.retag() method to allow passing RDDs from Java.
      
      Other improvements to Decision Trees for ease of use with Java:
      * impurity classes: Added instance() methods to help with Java interface.
      * Strategy: Added Java-friendly constructor
      --> Note: I removed quantileCalculationStrategy from the Java-friendly constructor since (a) it is a special class and (b) there is only 1 option currently.  I suspect we will redo the API before the other options are included.
      
      CC: mengxr
      
      Author: Joseph K. Bradley <joseph.kurata.bradley@gmail.com>
      
      Closes #1740 from jkbradley/dt-java-new and squashes the following commits:
      
      0805dc6 [Joseph K. Bradley] Changed Strategy to use JavaConverters instead of JavaConversions
      519b1b7 [Joseph K. Bradley] * Organized imports in JavaDecisionTreeSuite.java * Using JavaConverters instead of JavaConversions in DecisionTreeSuite.scala
      f7b5ca1 [Joseph K. Bradley] Improvements to make it easier to run DecisionTree from Java. * DecisionTree: Used new RDD.retag() method to allow passing RDDs from Java. * impurity classes: Added instance() methods to help with Java interface. * Strategy: Added Java-friendly constructor ** Note: I removed quantileCalculationStrategy from the Java-friendly constructor since (a) it is a special class and (b) there is only 1 option currently.  I suspect we will redo the API before the other options are included.
      d78ada6 [Joseph K. Bradley] Merge remote-tracking branch 'upstream/master' into dt-java
      320853f [Joseph K. Bradley] Added JavaDecisionTreeSuite, partly written
      13a585e [Joseph K. Bradley] Merge remote-tracking branch 'upstream/master' into dt-java
      f1a8283 [Joseph K. Bradley] Added old JavaDecisionTreeSuite, to be updated later
      225822f [Joseph K. Bradley] Bug: In DecisionTree, the method sequentialBinSearchForOrderedCategoricalFeatureInClassification() indexed bins from 0 to (math.pow(2, featureCategories.toInt - 1) - 1). This upper bound is the bound for unordered categorical features, not ordered ones. The upper bound should be the arity (i.e., max value) of the feature.
      2998e38a
    • SPARK-2246: Add user-data option to EC2 scripts · a0bcbc15
      Allan Douglas R. de Oliveira authored
      Author: Allan Douglas R. de Oliveira <allan@chaordicsystems.com>
      
      Closes #1186 from douglaz/spark_ec2_user_data and squashes the following commits:
      
      94a36f9 [Allan Douglas R. de Oliveira] Added user data option to EC2 script
      a0bcbc15
    • SPARK-2712 - Add a small note to maven doc that mvn package must happen before test · f8cd143b
      Stephen Boesch authored
      Per Reynold's request, adding a small note about the proper sequencing of build then test.
      
      Author: Stephen Boesch <javadba@gmail.com>
      
      Closes #1615 from javadba/docs and squashes the following commits:
      
      6c3183e [Stephen Boesch] Moved updated testing blurb per PWendell
      5764757 [Stephen Boesch] SPARK-2712 - Add a small note to maven doc that mvn package must happen before test
      f8cd143b
    • [Minor] Fixes on top of #1679 · 3dc55fdf
      Andrew Or authored
      Minor fixes on top of #1679.
      
      Author: Andrew Or <andrewor14@gmail.com>
      
      Closes #1736 from andrewor14/amend-#1679 and squashes the following commits:
      
      3b46f5e [Andrew Or] Minor fixes
      3dc55fdf
  2. Aug 02, 2014
    • SPARK-2414 [BUILD] Add LICENSE entry for jquery · 9cf429aa
      Sean Owen authored
      The JIRA concerned removing jquery, and this does not remove jquery. But since jquery is distributed with Spark, strictly speaking it should have an accompanying entry in LICENSE, as per http://www.apache.org/dev/licensing-howto.html
      
      Author: Sean Owen <srowen@gmail.com>
      
      Closes #1748 from srowen/SPARK-2414 and squashes the following commits:
      
      2fdb03c [Sean Owen] Add LICENSE entry for jquery
      9cf429aa
    • SPARK-2602 [BUILD] Tests steal focus under Java 6 · 33f167d7
      Sean Owen authored
      As per https://issues.apache.org/jira/browse/SPARK-2602 , this may be resolved for Java 6 with the java.awt.headless system property, which never hurt anyone running a command-line app. I tested it and it seemed to get rid of the focus stealing.
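
      The property has the same effect as setting it programmatically before any AWT class loads; the actual fix wires it into the build's test configuration rather than into code:
      ```scala
      // Equivalent effect, as a sketch; the real change sets this on the forked test JVMs.
      System.setProperty("java.awt.headless", "true")
      ```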
      
      Author: Sean Owen <srowen@gmail.com>
      
      Closes #1747 from srowen/SPARK-2602 and squashes the following commits:
      
      b141018 [Sean Owen] Set java.awt.headless during tests
      33f167d7
    • [SPARK-2739][SQL] Rename registerAsTable to registerTempTable · 1a804373
      Michael Armbrust authored
      There have been user complaints that the difference between `registerAsTable` and `saveAsTable` is too subtle.  This PR addresses this by renaming `registerAsTable` to `registerTempTable`, which more clearly reflects what is happening.  `registerAsTable` remains, but will cause a deprecation warning.
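
      A minimal sketch of the change from the caller's side (assuming a SchemaRDD named `people`):
      ```scala
      // Minimal sketch, assuming `people` is a SchemaRDD:
      people.registerTempTable("people")   // new, clearer name
      people.registerAsTable("people")     // still compiles, now with a deprecation warning
      ```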
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #1743 from marmbrus/registerTempTable and squashes the following commits:
      
      d031348 [Michael Armbrust] Merge remote-tracking branch 'apache/master' into registerTempTable
      4dff086 [Michael Armbrust] Fix .java files too
      89a2f12 [Michael Armbrust] Merge remote-tracking branch 'apache/master' into registerTempTable
      0b7b71e [Michael Armbrust] Rename registerAsTable to registerTempTable
      1a804373
    • [SPARK-2797] [SQL] SchemaRDDs don't support unpersist() · d210022e
      Yin Huai authored
      The cause is explained in https://issues.apache.org/jira/browse/SPARK-2797.
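
      A sketch of what now works (assuming a SchemaRDD `srdd`; `blocking` defaults to true, as on plain RDDs):
      ```scala
      // Sketch of the fixed behavior, assuming `srdd` is a SchemaRDD:
      srdd.cache()
      srdd.count()
      srdd.unpersist()   // previously could not be called without the blocking argument
      ```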
      
      Author: Yin Huai <huai@cse.ohio-state.edu>
      
      Closes #1745 from yhuai/SPARK-2797 and squashes the following commits:
      
      7b1627d [Yin Huai] The unpersist method of the Scala RDD cannot be called without the input parameter (blocking) from PySpark.
      d210022e
    • [SPARK-2729][SQL] Added test case for SPARK-2729 · 866cf1f8
      Cheng Lian authored
      This is a follow up of #1636.
      
      Author: Cheng Lian <lian.cs.zju@gmail.com>
      
      Closes #1738 from liancheng/test-for-spark-2729 and squashes the following commits:
      
      b13692a [Cheng Lian] Added test case for SPARK-2729
      866cf1f8
    • [SPARK-2785][SQL] Remove assertions that throw when users try unsupported Hive commands. · 198df11f
      Michael Armbrust authored
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #1742 from marmbrus/asserts and squashes the following commits:
      
      5182d54 [Michael Armbrust] Remove assertions that throw when users try unsupported Hive commands.
      198df11f
    • [SPARK-2097][SQL] UDF Support · 158ad0bb
      Michael Armbrust authored
      This patch adds the ability to register lambda functions written in Python, Java or Scala as UDFs for use in SQL or HiveQL.
      
      Scala:
      ```scala
      registerFunction("strLenScala", (_: String).length)
      sql("SELECT strLenScala('test')")
      ```
      Python:
      ```python
      sqlCtx.registerFunction("strLenPython", lambda x: len(x), IntegerType())
      sqlCtx.sql("SELECT strLenPython('test')")
      ```
      Java:
      ```java
      sqlContext.registerFunction("stringLengthJava", new UDF1<String, Integer>() {
        @Override
        public Integer call(String str) throws Exception {
          return str.length();
        }
      }, DataType.IntegerType);
      
      sqlContext.sql("SELECT stringLengthJava('test')");
      ```
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #1063 from marmbrus/udfs and squashes the following commits:
      
      9eda0fe [Michael Armbrust] newline
      747c05e [Michael Armbrust] Add some scala UDF tests.
      d92727d [Michael Armbrust] Merge remote-tracking branch 'apache/master' into udfs
      005d684 [Michael Armbrust] Fix naming and formatting.
      d14dac8 [Michael Armbrust] Fix last line of autogened java files.
      8135c48 [Michael Armbrust] Move UDF unit tests to pyspark.
      40b0ffd [Michael Armbrust] Merge remote-tracking branch 'apache/master' into udfs
      6a36890 [Michael Armbrust] Switch logging so that SQLContext can be serializable.
      7a83101 [Michael Armbrust] Drop toString
      795fd15 [Michael Armbrust] Try to avoid capturing SQLContext.
      e54fb45 [Michael Armbrust] Docs and tests.
      437cbe3 [Michael Armbrust] Update use of dataTypes, fix some python tests, address review comments.
      01517d6 [Michael Armbrust] Merge remote-tracking branch 'origin/master' into udfs
      8e6c932 [Michael Armbrust] WIP
      3f96a52 [Michael Armbrust] Merge remote-tracking branch 'origin/master' into udfs
      6237c8d [Michael Armbrust] WIP
      2766f0b [Michael Armbrust] Move udfs support to SQL from hive. Add support for Java UDFs.
      0f7d50c [Michael Armbrust] Draft of native Spark SQL UDFs for Scala and Python.
      158ad0bb
    • SPARK-2804: Remove scalalogging-slf4j dependency · 4c477117
      GuoQiang Li authored
      This also Closes #1701.
      
      Author: GuoQiang Li <witgo@qq.com>
      
      Closes #1208 from witgo/SPARK-1470 and squashes the following commits:
      
      422646b [GuoQiang Li] Remove scalalogging-slf4j dependency
      4c477117
    • [SPARK-1981] Add AWS Kinesis streaming support · 91f9504e
      Chris Fregly authored
      Author: Chris Fregly <chris@fregly.com>
      
      Closes #1434 from cfregly/master and squashes the following commits:
      
      4774581 [Chris Fregly] updated docs, renamed retry to retryRandom to be more clear, removed retries around store() method
      0393795 [Chris Fregly] moved Kinesis examples out of examples/ and back into extras/kinesis-asl
      691a6be [Chris Fregly] fixed tests and formatting, fixed a bug with JavaKinesisWordCount during union of streams
      0e1c67b [Chris Fregly] Merge remote-tracking branch 'upstream/master'
      74e5c7c [Chris Fregly] updated per TD's feedback.  simplified examples, updated docs
      e33cbeb [Chris Fregly] Merge remote-tracking branch 'upstream/master'
      bf614e9 [Chris Fregly] per matei's feedback:  moved the kinesis examples into the examples/ dir
      d17ca6d [Chris Fregly] per TD's feedback:  updated docs, simplified the KinesisUtils api
      912640c [Chris Fregly] changed the foundKinesis class to be a publically-avail class
      db3eefd [Chris Fregly] Merge remote-tracking branch 'upstream/master'
      21de67f [Chris Fregly] Merge remote-tracking branch 'upstream/master'
      6c39561 [Chris Fregly] parameterized the versions of the aws java sdk and kinesis client
      338997e [Chris Fregly] improve build docs for kinesis
      828f8ae [Chris Fregly] more cleanup
      e7c8978 [Chris Fregly] Merge remote-tracking branch 'upstream/master'
      cd68c0d [Chris Fregly] fixed typos and backward compatibility
      d18e680 [Chris Fregly] Merge remote-tracking branch 'upstream/master'
      b3b0ff1 [Chris Fregly] [SPARK-1981] Add AWS Kinesis streaming support
      91f9504e
    • [SQL] Set outputPartitioning of BroadcastHashJoin correctly. · 67bd8e3c
      Yin Huai authored
      I don't think we currently generate a plan that triggers this bug, but let me explain it...
      
      Right now, we are using `left.outputPartitioning` as the `outputPartitioning` of a `BroadcastHashJoin`. We may have a wrong physical plan for cases like...
      ```sql
      SELECT l.key, count(*)
      FROM (SELECT key, count(*) as cnt
            FROM src
            GROUP BY key) l -- This is the buildPlan
      JOIN r -- This is the streamedPlan
      ON (l.cnt = r.value)
      GROUP BY l.key
      ```
      Let's say we have a `BroadcastHashJoin` on `l` and `r`. For this case, we will pick `l`'s `outputPartitioning` for the `outputPartitioning` of the `BroadcastHashJoin` on `l` and `r`. Also, because the last `GROUP BY` is using `l.key` as the key, we will not introduce an `Exchange` for this aggregation. However, `r`'s outputPartitioning may not match the required distribution of the last `GROUP BY`, and we fail to group data correctly.
      
      JIRA is being reindexed. I will create a JIRA ticket once it is back online.
      
      Author: Yin Huai <huai@cse.ohio-state.edu>
      
      Closes #1735 from yhuai/BroadcastHashJoin and squashes the following commits:
      
      96d9cb3 [Yin Huai] Set outputPartitioning correctly.
      67bd8e3c
    • [SPARK-2478] [mllib] DecisionTree Python API · 3f67382e
      Joseph K. Bradley authored
      Added experimental Python API for Decision Trees.
      
      API:
      * class DecisionTreeModel
      ** predict() for single examples and RDDs, taking both feature vectors and LabeledPoints
      ** numNodes()
      ** depth()
      ** __str__()
      * class DecisionTree
      ** trainClassifier()
      ** trainRegressor()
      ** train()
      
      Examples and testing:
      * Added example testing classification and regression with batch prediction: examples/src/main/python/mllib/tree.py
      * Have also tested example usage in doc of python/pyspark/mllib/tree.py which tests single-example prediction with dense and sparse vectors
      
      Also: Small bug fix in python/pyspark/mllib/_common.py: In _linear_predictor_typecheck, changed check for RDD to use isinstance() instead of type() in order to catch RDD subclasses.
      
      CC mengxr manishamde
      
      Author: Joseph K. Bradley <joseph.kurata.bradley@gmail.com>
      
      Closes #1727 from jkbradley/decisiontree-python-new and squashes the following commits:
      
      3744488 [Joseph K. Bradley] Renamed test tree.py to decision_tree_runner.py Small updates based on github review.
      6b86a9d [Joseph K. Bradley] Merge remote-tracking branch 'upstream/master' into decisiontree-python-new
      affceb9 [Joseph K. Bradley] * Fixed bug in doc tests in pyspark/mllib/util.py caused by change in loadLibSVMFile behavior.  (It used to threshold labels at 0 to make them 0/1, but it now leaves them as they are.) * Fixed small bug in loadLibSVMFile: If a data file had no features, then loadLibSVMFile would create a single all-zero feature.
      67a29bc [Joseph K. Bradley] Merge remote-tracking branch 'upstream/master' into decisiontree-python-new
      cf46ad7 [Joseph K. Bradley] Python DecisionTreeModel * predict(empty RDD) returns an empty RDD instead of an error. * Removed support for calling predict() on LabeledPoint and RDD[LabeledPoint] * predict() does not cache serialized RDD any more.
      aa29873 [Joseph K. Bradley] Merge remote-tracking branch 'upstream/master' into decisiontree-python-new
      bf21be4 [Joseph K. Bradley] removed old run() func from DecisionTree
      fa10ea7 [Joseph K. Bradley] Small style update
      7968692 [Joseph K. Bradley] small braces typo fix
      e34c263 [Joseph K. Bradley] Merge remote-tracking branch 'upstream/master' into decisiontree-python-new
      4801b40 [Joseph K. Bradley] Small style update to DecisionTreeSuite
      db0eab2 [Joseph K. Bradley] Merge branch 'decisiontree-bugfix2' into decisiontree-python-new
      6873fa9 [Joseph K. Bradley] Merge remote-tracking branch 'upstream/master' into decisiontree-python-new
      225822f [Joseph K. Bradley] Bug: In DecisionTree, the method sequentialBinSearchForOrderedCategoricalFeatureInClassification() indexed bins from 0 to (math.pow(2, featureCategories.toInt - 1) - 1). This upper bound is the bound for unordered categorical features, not ordered ones. The upper bound should be the arity (i.e., max value) of the feature.
      93953f1 [Joseph K. Bradley] Likely done with Python API.
      6df89a9 [Joseph K. Bradley] Merge remote-tracking branch 'upstream/master' into decisiontree-python-new
      4562c08 [Joseph K. Bradley] Merge remote-tracking branch 'upstream/master' into decisiontree-python-new
      665ba78 [Joseph K. Bradley] Small updates towards Python DecisionTree API
      188cb0d [Joseph K. Bradley] Merge branch 'decisiontree-bugfix' into decisiontree-python-new
      6622247 [Joseph K. Bradley] Merge remote-tracking branch 'upstream/master' into decisiontree-python-new
      b8fac57 [Joseph K. Bradley] Finished Python DecisionTree API and example but need to test a bit more.
      2b20c61 [Joseph K. Bradley] Small doc and style updates
      1b29c13 [Joseph K. Bradley] Merge branch 'decisiontree-bugfix' into decisiontree-python-new
      584449a [Joseph K. Bradley] Merge remote-tracking branch 'upstream/master' into decisiontree-python-new
      dab0b67 [Joseph K. Bradley] Added documentation for DecisionTree internals
      8bb8aa0 [Joseph K. Bradley] Merge remote-tracking branch 'upstream/master' into decisiontree-bugfix
      978cfcf [Joseph K. Bradley] Merge remote-tracking branch 'upstream/master' into decisiontree-bugfix
      6eed482 [Joseph K. Bradley] In DecisionTree: Changed from using procedural syntax for functions returning Unit to explicitly writing Unit return type.
      376dca2 [Joseph K. Bradley] Updated meaning of maxDepth by 1 to fit scikit-learn and rpart. * In code, replaced usages of maxDepth <-- maxDepth + 1 * In params, replace settings of maxDepth <-- maxDepth - 1
      e06e423 [Joseph K. Bradley] Merge branch 'decisiontree-bugfix' into decisiontree-python-new
      bab3f19 [Joseph K. Bradley] Merge remote-tracking branch 'upstream/master' into decisiontree-python-new
      59750f8 [Joseph K. Bradley] * Updated Strategy to check numClassesForClassification only if algo=Classification. * Updates based on comments: ** DecisionTreeRunner *** Made dataFormat arg default to libsvm ** Small cleanups ** tree.Node: Made recursive helper methods private, and renamed them.
      52e17c5 [Joseph K. Bradley] Merge remote-tracking branch 'upstream/master' into decisiontree-bugfix
      f5a036c [Joseph K. Bradley] Merge branch 'decisiontree-bugfix' into decisiontree-python-new
      da50db7 [Joseph K. Bradley] Added one more test to DecisionTreeSuite: stump with 2 continuous variables for binary classification.  Caused problems in past, but fixed now.
      8e227ea [Joseph K. Bradley] Changed Strategy so it only requires numClassesForClassification >= 2 for classification
      cd1d933 [Joseph K. Bradley] Merge branch 'decisiontree-bugfix' into decisiontree-python-new
      8ea8750 [Joseph K. Bradley] Bug fix: Off-by-1 when finding thresholds for splits for continuous features.
      8a758db [Joseph K. Bradley] Merge branch 'decisiontree-bugfix' into decisiontree-python-new
      5fe44ed [Joseph K. Bradley] Merge remote-tracking branch 'upstream/master' into decisiontree-python-new
      2283df8 [Joseph K. Bradley] 2 bug fixes.
      73fbea2 [Joseph K. Bradley] Merge remote-tracking branch 'upstream/master' into decisiontree-bugfix
      5f920a1 [Joseph K. Bradley] Demonstration of bug before submitting fix: Updated DecisionTreeSuite so that 3 tests fail.  Will describe bug in next commit.
      f825352 [Joseph K. Bradley] Wrote Python API and example for DecisionTree.  Also added toString, depth, and numNodes methods to DecisionTreeModel.
      3f67382e
    • [HOTFIX] Do not throw NPE if spark.test.home is not set · e09e18b3
      Andrew Or authored
      `spark.test.home` was introduced in #1734. This is fine for SBT but was failing Maven tests. Either way, it shouldn't throw an NPE.
      
      Author: Andrew Or <andrewor14@gmail.com>
      
      Closes #1739 from andrewor14/fix-spark-test-home and squashes the following commits:
      
      ce2624c [Andrew Or] Do not throw NPE if spark.test.home is not set
      e09e18b3
    • MAINTENANCE: Automated closing of pull requests. · 87738bfa
      Patrick Wendell authored
      This commit exists to close the following pull requests on Github:
      
      Closes #706 (close requested by 'pwendell')
      Closes #453 (close requested by 'pwendell')
      Closes #557 (close requested by 'tdas')
      Closes #495 (close requested by 'tdas')
      Closes #1232 (close requested by 'pwendell')
      Closes #82 (close requested by 'pwendell')
      Closes #600 (close requested by 'pwendell')
      Closes #473 (close requested by 'pwendell')
      Closes #351 (close requested by 'pwendell')
      87738bfa
    • HOTFIX: Fix concurrency issue in FlumePollingStreamSuite. · 44460ba5
      Patrick Wendell authored
      This has been failing on master. One possible cause is that the port
      gets contended if multiple test runs happen concurrently and they
      hit this test at the same time. Since this test takes a long time
      (60 seconds) that's very plausible. This patch randomizes the port
      used in this test to avoid contention.
      44460ba5
    • HOTFIX: Fixing test error in maven for flume-sink. · 25cad6ad
      Patrick Wendell authored
      We needed to add an explicit dependency on scalatest since this
      module will not get it from spark core like others do.
      25cad6ad
    • [SPARK-1812] sql/catalyst - Provide explicit type information · 08c095b6
      Anand Avati authored
      For Scala 2.11 compatibility.
      
      Without the explicit type specification, withNullability's
      return type is inferred to be Attribute, and thus calling
      at() on the returned object fails in these tests:
      
      [ERROR] /Users/avati/work/spark/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/ExpressionEvaluationSuite.scala:370: value at is not a
      [ERROR]     val c4_notNull = 'a.boolean.notNull.at(3)
      [ERROR]                                         ^
      [ERROR] /Users/avati/work/spark/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/ExpressionEvaluationSuite.scala:371: value at is not a
      [ERROR]     val c5_notNull = 'a.boolean.notNull.at(4)
      [ERROR]                                         ^
      [ERROR] /Users/avati/work/spark/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/ExpressionEvaluationSuite.scala:372: value at is not a
      [ERROR]     val c6_notNull = 'a.boolean.notNull.at(5)
      [ERROR]                                         ^
      [ERROR] /Users/avati/work/spark/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/ExpressionEvaluationSuite.scala:558: value at is not a
      [ERROR]     val s_notNull = 'a.string.notNull.at(0)
      
      Signed-off-by: Anand Avati <avati@redhat.com>
      
      Author: Anand Avati <avati@redhat.com>
      
      Closes #1709 from avati/SPARK-1812-notnull and squashes the following commits:
      
      0470eb3 [Anand Avati] SPARK-1812: sql/catalyst - Provide explicit type information
      08c095b6
    • [SPARK-2454] Do not ship spark home to Workers · 148af608
      Andrew Or authored
      When standalone Workers launch executors, they inherit the Spark home set by the driver. This means if the worker machines do not share the same directory structure as the driver node, the Workers will attempt to run scripts (e.g. bin/compute-classpath.sh) that do not exist locally and fail. This is a common scenario if the driver is launched from outside of the cluster.
      
      The solution is to simply not pass the driver's Spark home to the Workers. This PR further makes an attempt to avoid overloading the usages of `spark.home`, which is now only used for setting executor Spark home on Mesos and in python.
      
      This is based on top of #1392 and originally reported by YanTangZhai. Tested on standalone cluster.
      
      Author: Andrew Or <andrewor14@gmail.com>
      
      Closes #1734 from andrewor14/spark-home-reprise and squashes the following commits:
      
      f71f391 [Andrew Or] Revert changes in python
      1c2532c [Andrew Or] Merge branch 'master' of github.com:apache/spark into spark-home-reprise
      188fc5d [Andrew Or] Avoid using spark.home where possible
      09272b7 [Andrew Or] Always use Worker's working directory as spark home
      148af608
    • [SPARK-2316] Avoid O(blocks) operations in listeners · d934801d
      Andrew Or authored
      The existing code in `StorageUtils` is not the most efficient. Every time we want to update an `RDDInfo` we end up iterating through all blocks on all block managers just to discard most of them. The symptoms manifest themselves in the bountiful UI bugs observed in the wild. Many of these bugs are caused by the slow consumption of events in `LiveListenerBus`, which frequently leads to the event queue overflowing and `SparkListenerEvent`s being dropped on the floor. The changes made in this PR avoid this by first filtering out only the blocks relevant to us before computing storage information from them.
      
      It's worth mentioning that this corner of the Spark code is not well tested at all. The bulk of the changes in this PR (more than 60%) are actually test cases for the various logic in `StorageUtils.scala` as well as `StorageTab.scala`. These will eventually be extended to cover the various listeners that constitute the `SparkUI`.
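
      The core idea, as a rough sketch (illustrative only, not the actual internals):
      ```scala
      import org.apache.spark.storage.{BlockId, BlockStatus, RDDBlockId}

      // Restrict attention to the blocks of the RDD being updated before
      // aggregating, instead of scanning every block on every block manager
      // for every event.
      def blocksForRdd(
          allBlocks: Seq[(BlockId, BlockStatus)],
          rddId: Int): Seq[(BlockId, BlockStatus)] =
        allBlocks.collect { case entry @ (RDDBlockId(`rddId`, _), _) => entry }
      ```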
      
      Author: Andrew Or <andrewor14@gmail.com>
      
      Closes #1679 from andrewor14/fix-drop-events and squashes the following commits:
      
      f80c1fa [Andrew Or] Rewrite fold and reduceOption as sum
      e132d69 [Andrew Or] Merge branch 'master' of github.com:apache/spark into fix-drop-events
      14fa1c3 [Andrew Or] Simplify some code + update a few comments
      a91be46 [Andrew Or] Make ExecutorsPage blazingly fast
      bf6f09b [Andrew Or] Minor changes
      8981de1 [Andrew Or] Merge branch 'master' of github.com:apache/spark into fix-drop-events
      af19bc0 [Andrew Or] *UsedByRDD -> *UsedByRdd (minor)
      6970bc8 [Andrew Or] Add extensive tests for StorageListener and the new code in StorageUtils
      e080b9e [Andrew Or] Reduce run time of StorageUtils.updateRddInfo to near constant
      2c3ef6a [Andrew Or] Actually filter out only the relevant RDDs
      6fef86a [Andrew Or] Add extensive tests for new code in StorageStatus
      b66b6b0 [Andrew Or] Use more efficient underlying data structures for blocks
      6a7b7c0 [Andrew Or] Avoid chained operations on TraversableLike
      a9ec384 [Andrew Or] Merge branch 'master' of github.com:apache/spark into fix-drop-events
      b12fcd7 [Andrew Or] Fix tests + simplify sc.getRDDStorageInfo
      da8e322 [Andrew Or] Merge branch 'master' of github.com:apache/spark into fix-drop-events
      8e91921 [Andrew Or] Iterate through a filtered set of blocks when updating RDDInfo
      7b2c4aa [Andrew Or] Rewrite blockLocationsFromStorageStatus + clean up method signatures
      41fa50d [Andrew Or] Add a legacy constructor for StorageStatus
      53af15d [Andrew Or] Refactor StorageStatus + add a bunch of tests
      d934801d
    • [SPARK-1470][SPARK-1842] Use the scala-logging wrapper instead of the slf4j API directly · adc83032
      GuoQiang Li authored
      Author: GuoQiang Li <witgo@qq.com>
      
      Closes #1369 from witgo/SPARK-1470_new and squashes the following commits:
      
      66a1641 [GuoQiang Li] IncompatibleResultTypeProblem
      73a89ba [GuoQiang Li] Use the scala-logging wrapper instead of the directly sfl4j api.
      adc83032
    • StatCounter on NumPy arrays [PYSPARK][SPARK-2012] · 4bc3bb29
      Jeremy Freeman authored
      These changes allow StatCounters to work properly on NumPy arrays, to fix the issue reported here  (https://issues.apache.org/jira/browse/SPARK-2012).
      
      If NumPy is installed, the NumPy functions ``maximum``, ``minimum``, and ``sqrt``, which work on arrays, are used to merge statistics. If NumPy is not available, we fall back on scalar operators, so StatCounter works on arrays with NumPy but also works without it.
      
      New unit tests added, along with a check for NumPy in the tests.
      
      Author: Jeremy Freeman <the.freeman.lab@gmail.com>
      
      Closes #1725 from freeman-lab/numpy-max-statcounter and squashes the following commits:
      
      fe973b1 [Jeremy Freeman] Avoid duplicate array import in tests
      7f0e397 [Jeremy Freeman] Refactored check for numpy
      8e764dd [Jeremy Freeman] Explicit numpy imports
      875414c [Jeremy Freeman] Fixed indents
      1c8a832 [Jeremy Freeman] Unit tests for StatCounter with NumPy arrays
      176a127 [Jeremy Freeman] Use numpy arrays in StatCounter
      4bc3bb29
    • [SPARK-2801][MLlib]: DistributionGenerator renamed to RandomDataGenerator.... · fda47598
      Burak authored
      [SPARK-2801][MLlib]: DistributionGenerator renamed to RandomDataGenerator. RandomRDD is now of generic type
      
      The RandomRDDGenerators used to output only RDD[Double].
      Now RandomRDDGenerators.randomRDD can be used to generate a random RDD[T] from any class that extends RandomDataGenerator, by supplying the type T and overriding the nextValue() function as desired.
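
      For example, a hypothetical generator of random fixed-length strings (a sketch of the generic API described above, not code from this patch):
      ```scala
      import scala.util.Random
      import org.apache.spark.mllib.random.RandomDataGenerator

      // Hypothetical generator: produces random alphanumeric strings.
      class RandomStringGenerator(length: Int) extends RandomDataGenerator[String] {
        private val rng = new Random()
        override def nextValue(): String = rng.alphanumeric.take(length).mkString
        override def setSeed(seed: Long): Unit = rng.setSeed(seed)
        override def copy(): RandomStringGenerator = new RandomStringGenerator(length)
      }
      ```
      Something like `RandomRDDGenerators.randomRDD(sc, new RandomStringGenerator(8), 1000L)` would then yield an RDD[String].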
      
      Author: Burak <brkyvz@gmail.com>
      
      Closes #1732 from brkyvz/SPARK-2801 and squashes the following commits:
      
      c94a694 [Burak] [SPARK-2801][MLlib] Missing ClassTags added
      22d96fe [Burak] [SPARK-2801][MLlib]: DistributionGenerator renamed to RandomDataGenerator, generic types added for RandomRDD instead of Double
      fda47598
  3. Aug 01, 2014
    • [SPARK-1580][MLLIB] Estimate ALS communication and computation costs. · e25ec061
      Tor Myklebust authored
      Continue the work from #493.
      
      Closes #493 and Closes #593
      
      Author: Tor Myklebust <tmyklebu@gmail.com>
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #1731 from mengxr/tmyklebu-alscost and squashes the following commits:
      
      9b56a8b [Xiangrui Meng] updated API and added a simple test
      68a3229 [Xiangrui Meng] merge master
      217bd1d [Tor Myklebust] Documentation and choleskies -> subproblems.
      8cbb718 [Tor Myklebust] Braces get spaces.
      0455cd4 [Tor Myklebust] Parens for collectAsMap.
      2b2febe [Tor Myklebust] Use `makeLinkRDDs` when estimating costs.
      2ab7a5d [Tor Myklebust] Reindent estimateCost's declaration and make it return Seqs.
      8b21e6d [Tor Myklebust] Fix overlong lines.
      8cbebf1 [Tor Myklebust] Rename and clean up the return format of cost estimator.
      6615ed5 [Tor Myklebust] It's more useful to give per-partition estimates.  Do that.
      5530678 [Tor Myklebust] Merge branch 'master' of https://github.com/apache/spark into alscost
      6c31324 [Tor Myklebust] Make it actually build...
      a1184d1 [Tor Myklebust] Mark ALS.evaluatePartitioner DeveloperApi.
      657a71b [Tor Myklebust] Simple-minded estimates of computation and communication costs in ALS.
      dcf583a [Tor Myklebust] Remove the partitioner member variable; instead, thread that needle everywhere it needs to go.
      23d6f91 [Tor Myklebust] Stop making the partitioner configurable.
      495784f [Tor Myklebust] Merge branch 'master' of https://github.com/apache/spark
      674933a [Tor Myklebust] Fix style.
      40edc23 [Tor Myklebust] Fix missing space.
      f841345 [Tor Myklebust] Fix daft bug creating 'pairs', also for -> foreach.
      5ec9e6c [Tor Myklebust] Clean a couple of things up using 'map'.
      36a0f43 [Tor Myklebust] Make the partitioner private.
      d872b09 [Tor Myklebust] Add negative id ALS test.
      df27697 [Tor Myklebust] Support custom partitioners.  Currently we use the same partitioner for users and products.
      c90b6d8 [Tor Myklebust] Scramble user and product ids before bucketing.
      c774d7d [Tor Myklebust] Make the partitioner a member variable and use it instead of modding directly.
      e25ec061
    • [SPARK-2550][MLLIB][APACHE SPARK] Support regularization and intercept in pyspark's linear methods. · c2811892
      Michael Giannakopoulos authored
      Related to issue: [SPARK-2550](https://issues.apache.org/jira/browse/SPARK-2550?jql=project%20%3D%20SPARK%20AND%20resolution%20%3D%20Unresolved%20AND%20priority%20%3D%20Major%20ORDER%20BY%20key%20DESC).
      
      Author: Michael Giannakopoulos <miccagiann@gmail.com>
      
      Closes #1624 from miccagiann/new-branch and squashes the following commits:
      
      c02e5f5 [Michael Giannakopoulos] Merge cleanly with upstream/master.
      8dcb888 [Michael Giannakopoulos] Putting the if/else if statements in brackets.
      fed8eaa [Michael Giannakopoulos] Adding a space in the message related to the IllegalArgumentException.
      44e6ff0 [Michael Giannakopoulos] Adding a blank line before python class LinearRegressionWithSGD.
      8eba9c5 [Michael Giannakopoulos] Change function signatures. Exception is thrown from the scala component and not from the python one.
      638be47 [Michael Giannakopoulos] Modified code to comply with code standards.
      ec50ee9 [Michael Giannakopoulos] Shorten the if-elif-else statement in regression.py file
      b962744 [Michael Giannakopoulos] Replaced the enum classes, with strings-keywords for defining the values of 'regType' parameter.
      78853ec [Michael Giannakopoulos] Providing intercept and regualizer functionallity for linear methods in only one function.
      3ac8874 [Michael Giannakopoulos] Added support for regularizer and intercection parameters for linear regression method.
      c2811892
    • Streaming mllib [SPARK-2438][MLLIB] · f6a18993
      Jeremy Freeman authored
      This PR implements a streaming linear regression analysis, in which a linear regression model is trained online as new data arrive. The design is based on discussions with tdas and mengxr, in which we determined how to add this functionality in a general way, with minimal changes to existing libraries.
      
      __Summary of additions:__
      
      _StreamingLinearAlgorithm_
      - An abstract class for fitting generalized linear models online to streaming data, including training on (and updating) a model, and making predictions.
      
      _StreamingLinearRegressionWithSGD_
      - Class and companion object for running streaming linear regression
      
      _StreamingLinearRegressionTestSuite_
      - Unit tests
      
      _StreamingLinearRegression_
      - Example use case: fitting a model online to data from one stream, and making predictions on other data
      
      __Notes__
      - If this looks good, I can use the StreamingLinearAlgorithm class to easily implement other analyses that follow the same logic (Ridge, Lasso, Logistic, SVM).
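
      A rough usage sketch based on the summary above (assuming a StreamingContext `ssc` and DStream[LabeledPoint]s `trainingData` and `testData` defined elsewhere):
      ```scala
      import org.apache.spark.mllib.linalg.Vectors
      import org.apache.spark.mllib.regression.StreamingLinearRegressionWithSGD

      // Train online on one stream, predict on another.
      val model = new StreamingLinearRegressionWithSGD()
        .setInitialWeights(Vectors.dense(0.0, 0.0)) // weights must be initialized up front

      model.trainOn(trainingData)       // the model is updated as each batch arrives
      model.predictOn(testData).print()

      ssc.start()
      ssc.awaitTermination()
      ```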
      
      Author: Jeremy Freeman <the.freeman.lab@gmail.com>
      Author: freeman <the.freeman.lab@gmail.com>
      
      Closes #1361 from freeman-lab/streaming-mllib and squashes the following commits:
      
      775ea29 [Jeremy Freeman] Throw error if user doesn't initialize weights
      4086fee [Jeremy Freeman] Fixed current weight formatting
      8b95b27 [Jeremy Freeman] Restored broadcasting
      29f27ec [Jeremy Freeman] Formatting
      8711c41 [Jeremy Freeman] Used return to avoid indentation
      777b596 [Jeremy Freeman] Restored treeAggregate
      74cf440 [Jeremy Freeman] Removed static methods
      d28cf9a [Jeremy Freeman] Added usage notes
      c3326e7 [Jeremy Freeman] Improved documentation
      9541a41 [Jeremy Freeman] Merge remote-tracking branch 'upstream/master' into streaming-mllib
      66eba5e [Jeremy Freeman] Fixed line lengths
      2fe0720 [Jeremy Freeman] Minor cleanup
      7d51378 [Jeremy Freeman] Moved streaming loader to MLUtils
      b9b69f6 [Jeremy Freeman] Added setter methods
      c3f8b5a [Jeremy Freeman] Modified logging
      00aafdc [Jeremy Freeman] Add modifiers
      14b801e [Jeremy Freeman] Name changes
      c7d38a3 [Jeremy Freeman] Move check for empty data to GradientDescent
      4b0a5d3 [Jeremy Freeman] Cleaned up tests
      74188d6 [Jeremy Freeman] Eliminate dependency on commons
      50dd237 [Jeremy Freeman] Removed experimental tag
      6bfe1e6 [Jeremy Freeman] Fixed imports
      a2a63ad [freeman] Makes convergence test more robust
      86220bc [freeman] Streaming linear regression unit tests
      fb4683a [freeman] Minor changes for scalastyle consistency
      fd31e03 [freeman] Changed logging behavior
      453974e [freeman] Fixed indentation
      c4b1143 [freeman] Streaming linear regression
      604f4d7 [freeman] Expanded private class to include mllib
      d99aa85 [freeman] Helper methods for streaming MLlib apps
      0898add [freeman] Added dependency on streaming
      f6a18993
    • [SPARK-2764] Simplify daemon.py process structure · e8e0fd69
      Josh Rosen authored
      Currently, daemon.py forks a pool of numProcessors subprocesses, and those processes fork themselves again to create the actual Python worker processes that handle data.
      
      I think that this extra layer of indirection is unnecessary and adds a lot of complexity.  This commit attempts to remove this middle layer of subprocesses by launching the workers directly from daemon.py.
      
      See https://github.com/mesos/spark/pull/563 for the original PR that added daemon.py, where I raise some issues with the current design.
      
      Author: Josh Rosen <joshrosen@apache.org>
      
      Closes #1680 from JoshRosen/pyspark-daemon and squashes the following commits:
      
      5abbcb9 [Josh Rosen] Replace magic number: 4 -> EINTR
      5495dff [Josh Rosen] Throw IllegalStateException if worker launch fails.
      b79254d [Josh Rosen] Detect failed fork() calls; improve error logging.
      282c2c4 [Josh Rosen] Remove daemon.py exit logging, since it caused problems:
      8554536 [Josh Rosen] Fix daemon’s shutdown(); log shutdown reason.
      4e0fab8 [Josh Rosen] Remove shared-memory exit_flag; don't die on worker death.
      e9892b4 [Josh Rosen] [WIP] [SPARK-2764] Simplify daemon.py process structure.
      e8e0fd69
    • [SPARK-2800]: Exclude scalastyle-output.xml Apache RAT checks · a38d3c9e
      GuoQiang Li authored
      Author: GuoQiang Li <witgo@qq.com>
      
      Closes #1729 from witgo/SPARK-2800 and squashes the following commits:
      
      13ca966 [GuoQiang Li] Add scalastyle-output.xml  to .rat-excludes file
      a38d3c9e
    • [SPARK-2116] Load spark-defaults.conf from SPARK_CONF_DIR if set · 0da07da5
      Albert Chu authored
      If the SPARK_CONF_DIR environment variable is set, search it for spark-defaults.conf.
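
      In effect (a sketch with hypothetical names, not the actual spark-submit code):
      ```scala
      import java.io.File

      // Prefer $SPARK_CONF_DIR when set; otherwise fall back to $SPARK_HOME/conf.
      val confDir = sys.env.getOrElse("SPARK_CONF_DIR",
        sys.env.getOrElse("SPARK_HOME", ".") + File.separator + "conf")
      val defaultsFile = new File(confDir, "spark-defaults.conf")
      ```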
      
      Author: Albert Chu <chu11@llnl.gov>
      
      Closes #1059 from chu11/SPARK-2116 and squashes the following commits:
      
      9f3ac94 [Albert Chu] SPARK-2116: If SPARK_CONF_DIR environment variable is set, search it for spark-defaults.conf.
      0da07da5
    • [SPARK-2212][SQL] Hash Outer Join (follow-up bug fix). · 3822f33f
      Yin Huai authored
      We need to carefully set the outputPartitioning of the HashOuterJoin operator. Otherwise, we may not correctly handle nulls.
      
      Author: Yin Huai <huai@cse.ohio-state.edu>
      
      Closes #1721 from yhuai/SPARK-2212-BugFix and squashes the following commits:
      
      ed5eef7 [Yin Huai] Correctly choosing outputPartitioning for the HashOuterJoin operator.
      3822f33f
    • [SPARK-2010] [PySpark] [SQL] support nested structure in SchemaRDD · 880eabec
      Davies Liu authored
      Convert each Row in a JavaSchemaRDD into Array[Any], unpickle them as tuples in Python, then convert them into namedtuples, so users can access fields just like attributes.
      
      This lets nested structures be accessed as objects; it also reduces the size of the serialized data and improves performance.
      
      root
       |-- field1: integer (nullable = true)
       |-- field2: string (nullable = true)
       |-- field3: struct (nullable = true)
       |    |-- field4: integer (nullable = true)
       |    |-- field5: array (nullable = true)
       |    |    |-- element: integer (containsNull = false)
       |-- field6: array (nullable = true)
       |    |-- element: struct (containsNull = false)
       |    |    |-- field7: string (nullable = true)
      
      Then we can access them as row.field3.field5[0] or row.field6[5].field7
      
      It also infers the schema in Python, converts Row/dict/namedtuple/objects into tuples before serialization, then calls applySchema in the JVM. During inferSchema(), a top-level dict in a row becomes a StructType, but any nested dictionary becomes a MapType.
      
      You can use pyspark.sql.Row to convert an unnamed structure into a Row object, making the RDD's schema inferable. For example:
      
      ctx.inferSchema(rdd.map(lambda x: Row(a=x[0], b=x[1])))
      
      Or you could use Row to create a class just like namedtuple, for example:
      
      Person = Row("name", "age")
      ctx.inferSchema(rdd.map(lambda x: Person(*x)))
      
      Also, you can call applySchema to apply a schema to an RDD of tuples/lists and turn it into a SchemaRDD. The `schema` should be a StructType; see the API docs for details.
      
      schema = StructType([StructField("name", StringType(), True),
                           StructField("age", IntegerType(), True)])
      ctx.applySchema(rdd, schema)
      
      PS: In order to use namedtuples with inferSchema, you should make the namedtuple picklable.
      
      Author: Davies Liu <davies.liu@gmail.com>
      
      Closes #1598 from davies/nested and squashes the following commits:
      
      f1d15b6 [Davies Liu] verify schema with the first few rows
      8852aaf [Davies Liu] check type of schema
      abe9e6e [Davies Liu] address comments
      61b2292 [Davies Liu] add @deprecated to pythonToJavaMap
      1e5b801 [Davies Liu] improve cache of classes
      51aa135 [Davies Liu] use Row to infer schema
      e9c0d5c [Davies Liu] remove string typed schema
      353a3f2 [Davies Liu] fix code style
      63de8f8 [Davies Liu] fix typo
      c79ca67 [Davies Liu] fix serialization of nested data
      6b258b5 [Davies Liu] fix pep8
      9d8447c [Davies Liu] apply schema provided by string of names
      f5df97f [Davies Liu] refactor, address comments
      9d9af55 [Davies Liu] use arrry to applySchema and infer schema in Python
      84679b3 [Davies Liu] Merge branch 'master' of github.com:apache/spark into nested
      0eaaf56 [Davies Liu] fix doc tests
      b3559b4 [Davies Liu] use generated Row instead of namedtuple
      c4ddc30 [Davies Liu] fix conflict between name of fields and variables
      7f6f251 [Davies Liu] address all comments
      d69d397 [Davies Liu] refactor
      2cc2d45 [Davies Liu] refactor
      182fb46 [Davies Liu] refactor
      bc6e9e1 [Davies Liu] switch to new Schema API
      547bf3e [Davies Liu] Merge branch 'master' into nested
      a435b5a [Davies Liu] add docs and code refactor
      2c8debc [Davies Liu] Merge branch 'master' into nested
      644665a [Davies Liu] use tuple and namedtuple for schemardd
      880eabec
    • [SPARK-2796] [mllib] DecisionTree bug fix: ordered categorical features · 7058a539
      Joseph K. Bradley authored
      Bug: In DecisionTree, the method sequentialBinSearchForOrderedCategoricalFeatureInClassification() indexed bins from 0 to (math.pow(2, featureCategories.toInt - 1) - 1). This upper bound is the bound for unordered categorical features, not ordered ones. The upper bound should be the arity (i.e., max value) of the feature.
      
      Added new test to DecisionTreeSuite to catch this: "regression stump with categorical variables of arity 2"
      
      Bug fix: Modified upper bound discussed above.
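
      In other words (an illustrative sketch with a hypothetical helper, not the actual internals):
      ```scala
      // Unordered categorical features enumerate 2^(arity - 1) - 1 candidate splits,
      // while ordered ones get one bin per category value, so the bound is the arity.
      def binSearchUpperBound(featureArity: Int, isUnordered: Boolean): Int =
        if (isUnordered) math.pow(2, featureArity - 1).toInt - 1
        else featureArity
      ```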
      
      Also: Small improvements to coding style in DecisionTree.
      
      CC mengxr manishamde
      
      Author: Joseph K. Bradley <joseph.kurata.bradley@gmail.com>
      
      Closes #1720 from jkbradley/decisiontree-bugfix2 and squashes the following commits:
      
      225822f [Joseph K. Bradley] Bug: In DecisionTree, the method sequentialBinSearchForOrderedCategoricalFeatureInClassification() indexed bins from 0 to (math.pow(2, featureCategories.toInt - 1) - 1). This upper bound is the bound for unordered categorical features, not ordered ones. The upper bound should be the arity (i.e., max value) of the feature.
      7058a539
    • [SPARK-2786][mllib] Python correlations · d88e6956
      Doris Xin authored
      Author: Doris Xin <doris.s.xin@gmail.com>
      
      Closes #1713 from dorx/pythonCorrelation and squashes the following commits:
      
      5f1e60c [Doris Xin] reviewer comments.
      46ff6eb [Doris Xin] reviewer comments.
      ad44085 [Doris Xin] style fix
      e69d446 [Doris Xin] fixed missed conflicts.
      eb5bf56 [Doris Xin] merge master
      cc9f725 [Doris Xin] units passed.
      9141a63 [Doris Xin] WIP2
      d199f1f [Doris Xin] Moved correlation names into a public object
      cd163d6 [Doris Xin] WIP
      d88e6956
    • SPARK-2791: Fix committing, reverting and state tracking in shuffle file consolidation · 78f2af58
      Aaron Davidson authored
      All changes from this PR are by mridulm and are drawn from his work in #1609. This patch is intended to fix all major issues related to shuffle file consolidation that mridulm found, while minimizing changes to the code, with the hope that it may be more easily merged into 1.1.
      
      This patch is **not** intended as a replacement for #1609, which provides many additional benefits, including fixes to ExternalAppendOnlyMap, improvements to DiskBlockObjectWriter's API, and several new unit tests.
      
      If it is feasible to merge #1609 for the 1.1 deadline, that is a preferable option.
      
      Author: Aaron Davidson <aaron@databricks.com>
      
      Closes #1678 from aarondav/consol and squashes the following commits:
      
      53b3f6d [Aaron Davidson] Correct behavior when writing unopened file
      701d045 [Aaron Davidson] Rebase with sort-based shuffle
      9160149 [Aaron Davidson] SPARK-2532: Minimal shuffle consolidation fixes
      78f2af58
    • [SPARK-2379] Fix the bug that streaming's receiver may fall into an infinite loop · b270309d
      joyyoj authored
      Author: joyyoj <sunshch@gmail.com>
      
      Closes #1694 from joyyoj/SPARK-2379 and squashes the following commits:
      
      d73790d [joyyoj] SPARK-2379 Fix the bug that streaming's receiver may fall into a dead loop
      22e7821 [joyyoj] Merge remote-tracking branch 'apache/master'
      3f4a602 [joyyoj] Merge remote-tracking branch 'remotes/apache/master'
      f4660c5 [joyyoj] [SPARK-1998] SparkFlumeEvent with body bigger than 1020 bytes are not read properly
      b270309d
    • SPARK-1612: Fix potential resource leaks · f5d9bea2
      zsxwing authored
      JIRA: https://issues.apache.org/jira/browse/SPARK-1612
      
      Move the "close" statements into a "finally" block.
      
      Author: zsxwing <zsxwing@gmail.com>
      
      Closes #535 from zsxwing/SPARK-1612 and squashes the following commits:
      
      ae52f50 [zsxwing] Update to follow the code style
      549ba13 [zsxwing] SPARK-1612: Fix potential resource leaks
      f5d9bea2