  1. Sep 14, 2014
• [SPARK-3452] Maven build should skip publishing artifacts people shouldn't depend on · f493f798
  Prashant Sharma authored
      
In Maven terms, publishing locally is `install` and publishing elsewhere is `deploy`; both are now disabled for the following projects.
      
      Author: Prashant Sharma <prashant.s@imaginea.com>
      
      Closes #2329 from ScrapCodes/SPARK-3452/maven-skip-install and squashes the following commits:
      
      257b79a [Prashant Sharma] [SPARK-3452] Maven build should skip publishing artifacts people shouldn't depend on
• SPARK-3039: Allow spark to be built using avro-mapred for hadoop2 · c243b21a
  Bertrand Bossy authored
      Bertrand Bossy authored
      SPARK-3039: Adds the maven property "avro.mapred.classifier" to build spark-assembly with avro-mapred with support for the new Hadoop API. Sets this property to hadoop2 for Hadoop 2 profiles.
      
      I am not very familiar with maven, nor do I know whether this potentially breaks something in the hive part of spark. There might be a more elegant way of doing this.
      
      Author: Bertrand Bossy <bertrandbossy@gmail.com>
      
      Closes #1945 from bbossy/SPARK-3039 and squashes the following commits:
      
      c32ce59 [Bertrand Bossy] SPARK-3039: Allow spark to be built using avro-mapred for hadoop2
• [SPARK-3463] [PySpark] aggregate and show spilled bytes in Python · 4e3fbe8c
  Davies Liu authored
      Davies Liu authored
Aggregates the number of bytes spilled to disk during aggregation or sorting, and shows them in the Web UI.
      
      ![spilled](https://cloud.githubusercontent.com/assets/40902/4209758/4b995562-386d-11e4-97c1-8e838ee1d4e3.png)
      
      This patch is blocked by SPARK-3465. (It includes a fix for that).
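The aggregation itself is simple; a minimal Python sketch of summing per-task spill counters for display (the dict keys are hypothetical, not PySpark's actual metric names):

```python
# Illustrative sketch only: sum the spill counters reported by each
# task, as the Web UI aggregate described above would. The key names
# here are hypothetical, not the actual PySpark metrics fields.
def aggregate_spilled(task_metrics):
    totals = {"memory_bytes_spilled": 0, "disk_bytes_spilled": 0}
    for m in task_metrics:
        for key in totals:
            totals[key] += m.get(key, 0)
    return totals

metrics = [
    {"memory_bytes_spilled": 1 << 20, "disk_bytes_spilled": 4096},
    {"memory_bytes_spilled": 2 << 20, "disk_bytes_spilled": 0},
]
totals = aggregate_spilled(metrics)
```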
      
      Author: Davies Liu <davies.liu@gmail.com>
      
      Closes #2336 from davies/metrics and squashes the following commits:
      
      e37df38 [Davies Liu] remove outdated comments
      1245eb7 [Davies Liu] remove the temporary fix
      ebd2f43 [Davies Liu] Merge branch 'master' into metrics
      7e4ad04 [Davies Liu] Merge branch 'master' into metrics
      fbe9029 [Davies Liu] show spilled bytes in Python in web ui
  2. Sep 13, 2014
• [SPARK-3030] [PySpark] Reuse Python worker · 2aea0da8
  Davies Liu authored
      Davies Liu authored
Reuse the Python worker to avoid the overhead of forking a Python process for each task. Also track the broadcasts sent to each worker, to avoid re-sending them.
      
This reduces the time for a dummy task from 22 ms to 13 ms (-40%), which helps reduce latency for Spark Streaming.
      
      For a job with broadcast (43M after compress):
      ```
          b = sc.broadcast(set(range(30000000)))
          print sc.parallelize(range(24000), 100).filter(lambda x: x in b.value).count()
      ```
The job finishes in 281 s without worker reuse and in 65 s with it (4 CPUs). Reusing the worker saves about 9 seconds of broadcast transfer and deserialization per task.
      
Worker reuse is enabled by default and can be disabled with `spark.python.worker.reuse = false`.
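The broadcast bookkeeping can be sketched in a few lines of Python (illustrative only, not the actual PySpark worker code):

```python
# Sketch of the bookkeeping described above: before a task runs on a
# reused worker, ship only the broadcast variables that worker has not
# already received. Illustrative only; not the actual PySpark code.
worker_broadcasts = {}  # worker id -> ids of broadcasts already sent

def broadcasts_to_send(worker_id, needed_ids):
    seen = worker_broadcasts.setdefault(worker_id, set())
    new_ids = [b for b in needed_ids if b not in seen]
    seen.update(new_ids)
    return new_ids

# First task on worker 1 needs broadcasts 0 and 1: both are shipped.
first = broadcasts_to_send(1, [0, 1])
# A later task on the reused worker needs 1 and 2: only 2 is shipped.
second = broadcasts_to_send(1, [1, 2])
```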
      
      Author: Davies Liu <davies.liu@gmail.com>
      
      Closes #2259 from davies/reuse-worker and squashes the following commits:
      
      f11f617 [Davies Liu] Merge branch 'master' into reuse-worker
      3939f20 [Davies Liu] fix bug in serializer in mllib
      cf1c55e [Davies Liu] address comments
      3133a60 [Davies Liu] fix accumulator with reused worker
      760ab1f [Davies Liu] do not reuse worker if there are any exceptions
7abb224 [Davies Liu] refactor: synchronized with itself
      ac3206e [Davies Liu] renaming
      8911f44 [Davies Liu] synchronized getWorkerBroadcasts()
      6325fc1 [Davies Liu] bugfix: bid >= 0
      e0131a2 [Davies Liu] fix name of config
      583716e [Davies Liu] only reuse completed and not interrupted worker
      ace2917 [Davies Liu] kill python worker after timeout
      6123d0f [Davies Liu] track broadcasts for each worker
      8d2f08c [Davies Liu] reuse python worker
• [SQL] Decrease partitions when testing · 0f8c4edf
  Michael Armbrust authored
      Michael Armbrust authored
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #2164 from marmbrus/shufflePartitions and squashes the following commits:
      
      0da1e8c [Michael Armbrust] test hax
      ef2d985 [Michael Armbrust] more test hacks.
      2dabae3 [Michael Armbrust] more test fixes
      0bdbf21 [Michael Armbrust] Make parquet tests less order dependent
      b42eeab [Michael Armbrust] increase test parallelism
      80453d5 [Michael Armbrust] Decrease partitions when testing
• [SPARK-3294][SQL] Eliminates boxing costs from in-memory columnar storage · 74049249
  Cheng Lian authored
      Cheng Lian authored
This is a major refactoring of the in-memory columnar storage implementation, aiming to eliminate boxing costs from critical paths (building/accessing column buffers) as much as possible. The basic idea is to refactor all major interfaces into a row-based form and use them together with `SpecificMutableRow`. The difficult part is adapting all compression schemes, especially `RunLengthEncoding` and `DictionaryEncoding`, to this design. Since in-memory compression is disabled by default for now, and this PR should be strictly better than before whether or not in-memory compression is enabled, maybe I'll finish that part in another PR.
      
      **UPDATE** This PR also took the chance to optimize `HiveTableScan` by
      
      1. leveraging `SpecificMutableRow` to avoid boxing cost, and
1. building specific `Writable` unwrapper functions ahead of time to avoid per-row pattern matching and branching costs.
      
      TODO
      
      - [x] Benchmark
      - [ ] ~~Eliminate boxing costs in `RunLengthEncoding`~~ (left to future PRs)
      - [ ] ~~Eliminate boxing costs in `DictionaryEncoding` (seems not easy to do without specializing `DictionaryEncoding` for every supported column type)~~  (left to future PRs)
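The fix itself is Scala-specific, but the cost being avoided can be illustrated in Python, where a typed `array` stores packed primitives while a list stores pointers to boxed objects (a sketch for intuition, unrelated to the Spark code):

```python
import sys
from array import array

# A list of floats holds a pointer per element, each pointing at a
# boxed float object; a typed array packs raw 8-byte doubles instead.
boxed = [float(i) for i in range(1000)]
unboxed = array("d", range(1000))

packed_bytes = unboxed.itemsize * len(unboxed)  # 8 bytes per double
per_box_overhead = sys.getsizeof(boxed[0])      # each boxed float is far larger
```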
      
      ## Micro benchmark
      
The benchmark uses a 10-million-line CSV table consisting of bytes, shorts, integers, longs, floats and doubles, and measures the time to build the in-memory version of this table and the time to scan the whole in-memory table.
      
      Benchmark code can be found [here](https://gist.github.com/liancheng/fe70a148de82e77bd2c8#file-hivetablescanbenchmark-scala). Script used to generate the input table can be found [here](https://gist.github.com/liancheng/fe70a148de82e77bd2c8#file-tablegen-scala).
      
      Speedup:
      
      - Hive table scanning + column buffer building: **18.74%**
      
The original benchmark uses 1K as the in-memory batch size; when increased to 10K, the speedup reaches 28.32%.
      
      - In-memory table scanning: **7.95%**
      
      Before:
      
Run     | Building | Scanning
------- | -------- | --------
      1       | 16472    | 525
      2       | 16168    | 530
      3       | 16386    | 529
      4       | 16184    | 538
      5       | 16209    | 521
      Average | 16283.8  | 528.6
      
      After:
      
Run     | Building | Scanning
------- | -------- | --------
      1       | 13124    | 458
      2       | 13260    | 529
      3       | 12981    | 463
      4       | 13214    | 483
      5       | 13583    | 500
      Average | 13232.4  | 486.6
      
      Author: Cheng Lian <lian.cs.zju@gmail.com>
      
      Closes #2327 from liancheng/prevent-boxing/unboxing and squashes the following commits:
      
      4419fe4 [Cheng Lian] Addressing comments
      e5d2cf2 [Cheng Lian] Bug fix: should call setNullAt when field value is null to avoid NPE
      8b8552b [Cheng Lian] Only checks for partition batch pruning flag once
      489f97b [Cheng Lian] Bug fix: TableReader.fillObject uses wrong ordinals
97bbc4e [Cheng Lian] Optimizes hive.TableReader by providing specific Writable unwrappers ahead of time
      3dc1f94 [Cheng Lian] Minor changes to eliminate row object creation
      5b39cb9 [Cheng Lian] Lowers log level of compression scheme details
      f2a7890 [Cheng Lian] Use SpecificMutableRow in InMemoryColumnarTableScan to avoid boxing
      9cf30b0 [Cheng Lian] Added row based ColumnType.append/extract
      456c366 [Cheng Lian] Made compression decoder row based
      edac3cd [Cheng Lian] Makes ColumnAccessor.extractSingle row based
      8216936 [Cheng Lian] Removes boxing cost in IntDelta and LongDelta by providing specialized implementations
      b70d519 [Cheng Lian] Made some in-memory columnar storage interfaces row-based
• [SPARK-3481][SQL] Removes the evil MINOR HACK · 184cd51c
  Cheng Lian authored
      Cheng Lian authored
      This is a follow up of #2352. Now we can finally remove the evil "MINOR HACK", which covered up the eldest bug in the history of Spark SQL (see details [here](https://github.com/apache/spark/pull/2352#issuecomment-55440621)).
      
      Author: Cheng Lian <lian.cs.zju@gmail.com>
      
      Closes #2377 from liancheng/remove-evil-minor-hack and squashes the following commits:
      
      0869c78 [Cheng Lian] Removes the evil MINOR HACK
• [SQL] [Docs] typo fixes · a523ceaf
  Nicholas Chammas authored
      Nicholas Chammas authored
      * Fixed random typo
      * Added in missing description for DecimalType
      
      Author: Nicholas Chammas <nicholas.chammas@gmail.com>
      
      Closes #2367 from nchammas/patch-1 and squashes the following commits:
      
      aa528be [Nicholas Chammas] doc fix for SQL DecimalType
      3247ac1 [Nicholas Chammas] [SQL] [Docs] typo fixes
• Proper indent for the previous commit. · b4dded40
  Reynold Xin authored
      Reynold Xin authored
• SPARK-3470 [CORE] [STREAMING] Add Closeable / close() to Java context objects · feaa3706
  Sean Owen authored
      Sean Owen authored
      ...  that expose a stop() lifecycle method. This doesn't add `AutoCloseable`, which is Java 7+ only. But it should be possible to use try-with-resources on a `Closeable` in Java 7, as long as the `close()` does not throw a checked exception, and these don't. Q.E.D.
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #2346 from srowen/SPARK-3470 and squashes the following commits:
      
      612c21d [Sean Owen] Add Closeable / close() to Java context objects that expose a stop() lifecycle method
  3. Sep 12, 2014
• [SQL][Docs] Update SQL programming guide to show the correct default value of containsNull in an ArrayType · e11eeb71
  Yin Huai authored
      
      After #1889, the default value of `containsNull` in an `ArrayType` is `true`.
      
      Author: Yin Huai <huai@cse.ohio-state.edu>
      
      Closes #2374 from yhuai/containsNull and squashes the following commits:
      
      dc609a3 [Yin Huai] Update the SQL programming guide to show the correct default value of containsNull in an ArrayType (the default value is true instead of false).
• [SPARK-3469] Make sure all TaskCompletionListener are called even with failures · 2584ea5b
  Reynold Xin authored
      Reynold Xin authored
      This is necessary because we rely on this callback interface to clean resources up. The old behavior would lead to resource leaks.
      
Note that this also changes the fault semantics of TaskCompletionListener. Previously, a failure in a TaskCompletionListener would cause the failure to be reported immediately. With this change, the exception is reported at the end, as a TaskCompletionListenerException that contains all the exception messages.
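A minimal Python sketch of these fault semantics (the real implementation lives in Scala's TaskContext; this just mirrors the described behavior):

```python
# Run every completion callback even if some fail, then raise a single
# combined exception at the end -- mirroring the semantics described
# above. Sketch only; the real TaskCompletionListenerException is Scala.
class TaskCompletionListenerException(Exception):
    pass

def run_completion_listeners(listeners):
    errors = []
    for listener in listeners:
        try:
            listener()
        except Exception as exc:
            errors.append(str(exc))
    if errors:
        raise TaskCompletionListenerException("; ".join(errors))

calls = []
def release_resource():
    calls.append("released")
def faulty_listener():
    raise ValueError("boom")
```

Even though `faulty_listener` throws first, `release_resource` still runs, so resources are cleaned up before the combined error is reported.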
      
      Author: Reynold Xin <rxin@apache.org>
      
      Closes #2343 from rxin/taskcontext-callback and squashes the following commits:
      
      a3845b2 [Reynold Xin] Mark TaskCompletionListenerException as private[spark].
      ac5baea [Reynold Xin] Removed obsolete comment.
      aa68ea4 [Reynold Xin] Throw an exception if task completion callback fails.
      29b6162 [Reynold Xin] oops compilation failed.
      1cb444d [Reynold Xin] [SPARK-3469] Call all TaskCompletionListeners even if some fail.
• [SPARK-3515][SQL] Moves test suite setup code to beforeAll rather than in constructor · 6d887db7
  Cheng Lian authored
      Cheng Lian authored
      Please refer to the JIRA ticket for details.
      
      **NOTE** We should check all test suites that do similar initialization-like side effects in their constructors. This PR only fixes `ParquetMetastoreSuite` because it breaks our Jenkins Maven build.
      
      Author: Cheng Lian <lian.cs.zju@gmail.com>
      
      Closes #2375 from liancheng/say-no-to-constructor and squashes the following commits:
      
      0ceb75b [Cheng Lian] Moves test suite setup code to beforeAll rather than in constructor
• [SPARK-3500] [SQL] use JavaSchemaRDD as SchemaRDD._jschema_rdd · 885d1621
  Davies Liu authored
      Davies Liu authored
Currently, SchemaRDD._jschema_rdd is a SchemaRDD, so Scala API methods (coalesce(), repartition()) cannot easily be called from Python; there is no way to specify the implicit parameter `ord`. Since _jrdd is a JavaRDD, _jschema_rdd should also be a JavaSchemaRDD.
      
This patch changes _jschema_rdd to a JavaSchemaRDD and adds an assert for it. If a method is missing from JavaSchemaRDD, it is invoked via _jschema_rdd.baseSchemaRDD().xxx().
      
      BTW, Do we need JavaSQLContext?
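The fallback pattern described above can be sketched with stand-in classes (illustrative only, not the real PySpark/Java API):

```python
# Sketch of the delegation described above: call the Java wrapper when
# the method exists, and fall back to the underlying Scala object via
# baseSchemaRDD() when it does not. Classes are stand-ins, not Spark's.
class BaseSchemaRDD:
    def coalesce(self, n):
        return "coalesced to %d partitions" % n

class JavaSchemaRDD:
    def __init__(self, base):
        self._base = base
    def count(self):          # present on the Java wrapper
        return 42
    def baseSchemaRDD(self):  # escape hatch to the Scala object
        return self._base

jschema_rdd = JavaSchemaRDD(BaseSchemaRDD())
n = jschema_rdd.count()                           # direct call
c = jschema_rdd.baseSchemaRDD().coalesce(3)       # fallback call
```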
      
      Author: Davies Liu <davies.liu@gmail.com>
      
      Closes #2369 from davies/fix_schemardd and squashes the following commits:
      
      abee159 [Davies Liu] use JavaSchemaRDD as SchemaRDD._jschema_rdd
• [SPARK-3094] [PySpark] compatible with PyPy · 71af030b
  Davies Liu authored
      Davies Liu authored
      After this patch, we can run PySpark in PyPy (testing with PyPy 2.3.1 in Mac 10.9), for example:
      
      ```
      PYSPARK_PYTHON=pypy ./bin/spark-submit wordcount.py
      ```
      
The speedup depends on the workload (from 20% to 3000%). Here are some benchmarks:
      
       Job | CPython 2.7 | PyPy 2.3.1  | Speed up
       ------- | ------------ | ------------- | -------
       Word Count | 41s   | 15s  | 2.7x
       Sort | 46s |  44s | 1.05x
       Stats | 174s | 3.6s | 48x
      
      Here is the code used for benchmark:
      
      ```python
      rdd = sc.textFile("text")
      def wordcount():
          rdd.flatMap(lambda x:x.split('/'))\
              .map(lambda x:(x,1)).reduceByKey(lambda x,y:x+y).collectAsMap()
      def sort():
          rdd.sortBy(lambda x:x, 1).count()
      def stats():
          sc.parallelize(range(1024), 20).flatMap(lambda x: xrange(5024)).stats()
      ```
      
      Author: Davies Liu <davies.liu@gmail.com>
      
      Closes #2144 from davies/pypy and squashes the following commits:
      
      9aed6c5 [Davies Liu] use protocol 2 in CloudPickle
      4bc1f04 [Davies Liu] refactor
      b20ab3a [Davies Liu] pickle sys.stdout and stderr in portable way
      3ca2351 [Davies Liu] Merge branch 'master' into pypy
      fae8b19 [Davies Liu] improve attrgetter, add tests
      591f830 [Davies Liu] try to run tests with PyPy in run-tests
      c8d62ba [Davies Liu] cleanup
      f651fd0 [Davies Liu] fix tests using array with PyPy
      1b98fb3 [Davies Liu] serialize itemgetter/attrgetter in portable ways
      3c1dbfe [Davies Liu] Merge branch 'master' into pypy
      42fb5fa [Davies Liu] Merge branch 'master' into pypy
      cb2d724 [Davies Liu] fix tests
      9986692 [Davies Liu] Merge branch 'master' into pypy
      25b4ca7 [Davies Liu] support PyPy
• [SPARK-3456] YarnAllocator on alpha can lose container requests to RM · 25311c2c
  Thomas Graves authored
      Thomas Graves authored
      Author: Thomas Graves <tgraves@apache.org>
      
      Closes #2373 from tgravescs/SPARK-3456 and squashes the following commits:
      
      77e9532 [Thomas Graves] [SPARK-3456] YarnAllocator on alpha can lose container requests to RM
• [SPARK-3217] Add Guava to classpath when SPARK_PREPEND_CLASSES is set. · af258382
  Marcelo Vanzin authored
      Marcelo Vanzin authored
      When that option is used, the compiled classes from the build directory
      are prepended to the classpath. Now that we avoid packaging Guava, that
      means we have classes referencing the original Guava location in the app's
      classpath, so errors happen.
      
      For that case, add Guava manually to the classpath.
      
      Note: if Spark is compiled with "-Phadoop-provided", it's tricky to
      make things work with SPARK_PREPEND_CLASSES, because you need to add
      the Hadoop classpath using SPARK_CLASSPATH and that means the older
      Hadoop Guava overrides the newer one Spark needs. So someone using
      SPARK_PREPEND_CLASSES needs to remember to not use that profile.
      
      Author: Marcelo Vanzin <vanzin@cloudera.com>
      
      Closes #2141 from vanzin/SPARK-3217 and squashes the following commits:
      
      b967324 [Marcelo Vanzin] [SPARK-3217] Add Guava to classpath when SPARK_PREPEND_CLASSES is set.
• SPARK-3014. Log more informative messages in a couple of failure scenarios · 1d767967
  Sandy Ryza authored
      
      Author: Sandy Ryza <sandy@cloudera.com>
      
      Closes #1934 from sryza/sandy-spark-3014 and squashes the following commits:
      
      ae19cc1 [Sandy Ryza] SPARK-3014. Log a more informative messages in a couple failure scenarios
• [SPARK-3427] [GraphX] Avoid active vertex tracking in static PageRank · 15a56459
  Ankur Dave authored
      Ankur Dave authored
      GraphX's current implementation of static (fixed iteration count) PageRank uses the Pregel API. This unnecessarily tracks active vertices, even though in static PageRank all vertices are always active. Active vertex tracking incurs the following costs:
      
      1. A shuffle per iteration to ship the active sets to the edge partitions.
      2. A hash table creation per iteration at each partition to index the active sets for lookup.
      3. A hash lookup per edge to check whether the source vertex is active.
      
      I reimplemented static PageRank using the lower-level GraphX API instead of the Pregel API. In benchmarks on a 16-node m2.4xlarge cluster, this provided a 23% speedup (from 514 s to 397 s, mean over 3 trials) for 10 iterations of PageRank on a synthetic graph with 10M vertices and 1.27B edges.
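For intuition, a pure-Python sketch of static PageRank in which every vertex is recomputed on every iteration, with no active-set shuffling or per-edge hash lookups (illustrative only, not the GraphX implementation):

```python
# Static (fixed-iteration) PageRank: every vertex is always active, so
# no active-set tracking is needed, mirroring the change above.
# Toy implementation on an edge list; not GraphX code.
def static_pagerank(edges, num_vertices, iters, reset_prob=0.15):
    out_deg = [0] * num_vertices
    for src, _ in edges:
        out_deg[src] += 1
    ranks = [1.0] * num_vertices
    for _ in range(iters):
        contrib = [0.0] * num_vertices
        for src, dst in edges:
            contrib[dst] += ranks[src] / out_deg[src]
        ranks = [reset_prob + (1.0 - reset_prob) * c for c in contrib]
    return ranks

# Tiny graph: 0 -> 1, 0 -> 2, 1 -> 2, 2 -> 0
ranks = static_pagerank([(0, 1), (0, 2), (1, 2), (2, 0)], 3, iters=50)
```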
      
      Author: Ankur Dave <ankurdave@gmail.com>
      
      Closes #2308 from ankurdave/SPARK-3427 and squashes the following commits:
      
      449996a [Ankur Dave] Avoid unnecessary active vertex tracking in static PageRank
• MAINTENANCE: Automated closing of pull requests. · eae81b0b
  Patrick Wendell authored
      Patrick Wendell authored
      This commit exists to close the following pull requests on Github:
      
      Closes #930 (close requested by 'andrewor14')
      Closes #867 (close requested by 'marmbrus')
      Closes #1829 (close requested by 'marmbrus')
      Closes #1131 (close requested by 'JoshRosen')
      Closes #1571 (close requested by 'andrewor14')
      Closes #2359 (close requested by 'andrewor14')
• [SPARK-3481] [SQL] Eliminate the error log in local Hive comparison test · 8194fc66
  Cheng Hao authored
      Cheng Hao authored
Logically, we should remove the Hive tables/databases first and then reset the Hive configuration, repoint to the new data warehouse directory, etc.
Otherwise exceptions like "Database does not exist: default" are raised in local testing.
      
      Author: Cheng Hao <hao.cheng@intel.com>
      
      Closes #2352 from chenghao-intel/test_hive and squashes the following commits:
      
      74fd76b [Cheng Hao] eliminate the error log
• [PySpark] Add blank line so that Python RDD.top() docstring renders correctly · 53337762
  RJ Nowling authored
      RJ Nowling authored
      Author: RJ Nowling <rnowling@gmail.com>
      
      Closes #2370 from rnowling/python_rdd_docstrings and squashes the following commits:
      
      5230574 [RJ Nowling] Add blank line so that Python RDD.top() docstring renders correctly
• [SPARK-2558][DOCS] Add --queue example to YARN doc · f116f76b
  Mark G. Whitney authored
      Mark G. Whitney authored
Puts the original YARN queue spark-submit argument description in the running-on-yarn HTML table and example command line.
      
      Author: Mark G. Whitney <mark@whitneyindustries.com>
      
      Closes #2218 from kramimus/2258-yarndoc and squashes the following commits:
      
      4b5d808 [Mark G. Whitney] remove yarn queue config
      f8cda0d [Mark G. Whitney] [SPARK-2558][DOCS] Add spark.yarn.queue description to YARN doc
• [SPARK-3160] [SPARK-3494] [mllib] DecisionTree: eliminate pre-allocated nodes, parentImpurities arrays. Memory calc bug fix. · b8634df1
  Joseph K. Bradley authored
      
      This PR includes some code simplifications and re-organization which will be helpful for implementing random forests.  The main changes are that the nodes and parentImpurities arrays are no longer pre-allocated in the main train() method.
      
      Also added 2 bug fixes:
      * maxMemoryUsage calculation
      * over-allocation of space for bins in DTStatsAggregator for unordered features.
      
      Relation to RFs:
      * Since RFs will be deeper and will therefore be more likely sparse (not full trees), it could be a cost savings to avoid pre-allocating a full tree.
      * The associated re-organization also reduces bookkeeping, which will make RFs easier to implement.
      * The return code doneTraining may be generalized to include cases such as nodes ready for local training.
      
      Details:
      
      No longer pre-allocate parentImpurities array in main train() method.
      * parentImpurities values are now stored in individual nodes (in Node.stats.impurity).
* These were not really needed.  They were used in calculateGainForSplit(), but they can be calculated anyway using parentNodeAgg.
      
      No longer using Node.build since tree structure is constructed on-the-fly.
      * Did not eliminate since it is public (Developer) API.  Marked as deprecated.
      
      Eliminated pre-allocated nodes array in main train() method.
      * Nodes are constructed and added to the tree structure as needed during training.
* Moved tree construction from the main train() method into findBestSplitsPerGroup(), since there is no need to keep the (split, gain) array for an entire level of nodes.  Only one element of that array is needed at a time, so we do not need the array.
      
      findBestSplits() now returns 2 items:
      * rootNode (newly created root node on first iteration, same root node on later iterations)
      * doneTraining (indicating if all nodes at that level were leafs)
      
      Updated DecisionTreeSuite.  Notes:
      * Improved test "Second level node building with vs. without groups"
      ** generateOrderedLabeledPoints() modified so that it really does require 2 levels of internal nodes.
      * Related update: Added Node.deepCopy (private[tree]), used for test suite
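The on-the-fly node construction described above can be sketched in Python: nodes are created only when training reaches them, keyed by the usual heap-style index, instead of pre-allocating an array for a full tree (illustrative only; not MLlib's Node class):

```python
# Nodes live in a dict keyed by 1-based heap index and are created
# lazily as splits happen, so a sparse tree never pays for the slots a
# full tree would need. Illustrative sketch, not the MLlib code.
class Node:
    def __init__(self, index):
        self.index = index

def add_children(nodes, parent_index):
    """Create child nodes lazily using 1-based heap indexing."""
    left, right = 2 * parent_index, 2 * parent_index + 1
    nodes[left] = Node(left)
    nodes[right] = Node(right)
    return left, right

nodes = {1: Node(1)}      # only the root exists up front
add_children(nodes, 1)    # split the root: creates nodes 2 and 3
add_children(nodes, 3)    # split only the right child: creates 6 and 7
# A full two-level pre-allocation would need 7 slots; we created 5.
```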
      
      CC: mengxr
      
      Author: Joseph K. Bradley <joseph.kurata.bradley@gmail.com>
      
      Closes #2341 from jkbradley/dt-spark-3160 and squashes the following commits:
      
      07dd1ee [Joseph K. Bradley] Fixed overflow bug with computing maxMemoryUsage in DecisionTree.  Also fixed bug with over-allocating space in DTStatsAggregator for unordered features.
      debe072 [Joseph K. Bradley] Merge remote-tracking branch 'upstream/master' into dt-spark-3160
      5c4ac33 [Joseph K. Bradley] Added check in Strategy to make sure minInstancesPerNode >= 1
      0dd4d87 [Joseph K. Bradley] Merge remote-tracking branch 'upstream/master' into dt-spark-3160
      306120f [Joseph K. Bradley] Fixed typo in DecisionTreeModel.scala doc
      eaa1dcf [Joseph K. Bradley] Added topNode doc in DecisionTree and scalastyle fix
      d4d7864 [Joseph K. Bradley] Marked Node.build as deprecated
      d4dbb99 [Joseph K. Bradley] Merge remote-tracking branch 'upstream/master' into dt-spark-3160
      1a8f0ad [Joseph K. Bradley] Eliminated pre-allocated nodes array in main train() method. * Nodes are constructed and added to the tree structure as needed during training.
      2ab763b [Joseph K. Bradley] Simplifications to DecisionTree code:
  4. Sep 11, 2014
• [SPARK-3465] fix task metrics aggregation in local mode · 42904b8d
  Davies Liu authored
      Davies Liu authored
Before overwriting t.taskMetrics, take a deep copy of it.
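The bug pattern being fixed is easy to show in plain Python: without a deep copy, the aggregator's snapshot silently tracks later in-place mutations (illustrative sketch, not Spark's code):

```python
# In local mode the driver and task share the same metrics object, so
# aggregation must snapshot it with a deep copy before the task
# overwrites it in place. Toy data, not the actual TaskMetrics class.
import copy

live_metrics = {"bytesRead": 100, "shuffle": {"bytesWritten": 10}}

snapshot = copy.deepcopy(live_metrics)          # aggregate from this
live_metrics["shuffle"]["bytesWritten"] = 999   # task mutates in place
```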
      
      Author: Davies Liu <davies.liu@gmail.com>
      
      Closes #2338 from davies/fix_metric and squashes the following commits:
      
      a5cdb63 [Davies Liu] Merge branch 'master' into fix_metric
      7c879e0 [Davies Liu] add more comments
      754b5b8 [Davies Liu] copy taskMetrics only when isLocal is true
      5ca26dc [Davies Liu] fix task metrics aggregation in local mode
• SPARK-2482: Resolve sbt warnings during build · 33c7a738
  witgo authored
      witgo authored
Also, importing both `scala.language.postfixOps` and `org.scalatest.time.SpanSugar._` causes `scala.language.postfixOps` to stop working.
      
      Author: witgo <witgo@qq.com>
      
      Closes #1330 from witgo/sbt_warnings3 and squashes the following commits:
      
      179ba61 [witgo] Resolve sbt warnings during build
• SPARK-3462 push down filters and projections into Unions · f858f466
  Cody Koeninger authored
      Cody Koeninger authored
      Author: Cody Koeninger <cody.koeninger@mediacrossing.com>
      
      Closes #2345 from koeninger/SPARK-3462 and squashes the following commits:
      
      5c8d24d [Cody Koeninger] SPARK-3462 remove now-unused parameter
      0788691 [Cody Koeninger] SPARK-3462 add tests, handle compatible schema with different aliases, per marmbrus feedback
      ef47b3b [Cody Koeninger] SPARK-3462 push down filters and projections into Unions
• [SPARK-3429] Don't include the empty string "" as a defaultAclUser · ce59725b
  Andrew Ash authored
      Andrew Ash authored
      Changes logging from
      
      ```
      14/09/05 02:01:08 INFO SecurityManager: Changing view acls to: aash,
      14/09/05 02:01:08 INFO SecurityManager: Changing modify acls to: aash,
      14/09/05 02:01:08 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(aash, ); users with modify permissions: Set(aash, )
      ```
      to
      ```
      14/09/05 02:28:28 INFO SecurityManager: Changing view acls to: aash
      14/09/05 02:28:28 INFO SecurityManager: Changing modify acls to: aash
      14/09/05 02:28:28 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(aash); users with modify permissions: Set(aash)
      ```
      
Note that the first set of logs has a Set of size 2 containing "aash" and the empty string "".
      
      cc tgravescs
      
      Author: Andrew Ash <andrew@andrewash.com>
      
      Closes #2286 from ash211/empty-default-acl and squashes the following commits:
      
      18cc612 [Andrew Ash] Use .isEmpty instead of ==""
      cf973a1 [Andrew Ash] Don't include the empty string "" as a defaultAclUser
• [Spark-3490] Disable SparkUI for tests · 6324eb7b
  Andrew Or authored
      Andrew Or authored
      We currently open many ephemeral ports during the tests, and as a result we occasionally can't bind to new ones. This has caused the `DriverSuite` and the `SparkSubmitSuite` to fail intermittently.
      
      By disabling the `SparkUI` when it's not needed, we already cut down on the number of ports opened significantly, on the order of the number of `SparkContexts` ever created. We must keep it enabled for a few tests for the UI itself, however.
      
      Author: Andrew Or <andrewor14@gmail.com>
      
      Closes #2363 from andrewor14/disable-ui-for-tests and squashes the following commits:
      
      332a7d5 [Andrew Or] No need to set spark.ui.port to 0 anymore
      30c93a2 [Andrew Or] Simplify streaming UISuite
      a431b84 [Andrew Or] Fix streaming test failures
      8f5ae53 [Andrew Or] Fix no new line at the end
      29c9b5b [Andrew Or] Disable SparkUI for tests
• [SPARK-3390][SQL] sqlContext.jsonRDD fails on a complex structure of JSON array and JSON object nesting · 4bc9e046
  Yin Huai authored
      
      This PR aims to correctly handle JSON arrays in the type of `ArrayType(...(ArrayType(StructType)))`.
      
      JIRA: https://issues.apache.org/jira/browse/SPARK-3390.
      
      Author: Yin Huai <huai@cse.ohio-state.edu>
      
      Closes #2364 from yhuai/SPARK-3390 and squashes the following commits:
      
      46db418 [Yin Huai] Handle JSON arrays in the type of ArrayType(...(ArrayType(StructType))).
• [SPARK-2917] [SQL] Avoid table creation in logical plan analyzing for CTAS · ca83f1e2
  Cheng Hao authored
      Cheng Hao authored
      Author: Cheng Hao <hao.cheng@intel.com>
      
      Closes #1846 from chenghao-intel/ctas and squashes the following commits:
      
      56a0578 [Cheng Hao] remove the unused imports
      9a57abc [Cheng Hao] Avoid table creation in logical plan analyzing
• [SPARK-3047] [PySpark] add an option to use str in textFileRDD · 1ef656ea
  Davies Liu authored
      Davies Liu authored
str is much more efficient than unicode (in both CPU and memory), so it's better to use str in textFileRDD. To keep compatibility, unicode is used by default. (Maybe change it in the future.)
      
      use_unicode=True:
      
      daviesliudm:~/work/spark$ time python wc.py
      (u'./universe/spark/sql/core/target/java/org/apache/spark/sql/execution/ExplainCommand$.java', 7776)
      
      real	2m8.298s
      user	0m0.185s
      sys	0m0.064s
      
      use_unicode=False
      
      daviesliudm:~/work/spark$ time python wc.py
      ('./universe/spark/sql/core/target/java/org/apache/spark/sql/execution/ExplainCommand$.java', 7776)
      
      real	1m26.402s
      user	0m0.182s
      sys	0m0.062s
      
We can see a 32% improvement!
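The switch can be sketched with a minimal line reader (a hypothetical helper, not the actual PySpark textFile implementation; in Python 3 terms the str/unicode split becomes bytes vs decoded str):

```python
# Sketch of the use_unicode option described above: either decode each
# line (the default, for compatibility) or return cheaper raw bytes.
# Hypothetical helper, not the real PySpark code.
def read_lines(data, use_unicode=True):
    lines = data.split(b"\n")
    if use_unicode:
        return [line.decode("utf-8") for line in lines]
    return list(lines)

raw = b"spark\nhadoop"
u = read_lines(raw)                      # default: decoded strings
s = read_lines(raw, use_unicode=False)   # raw bytes, skips decoding
```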
      
      Author: Davies Liu <davies.liu@gmail.com>
      
      Closes #1951 from davies/unicode and squashes the following commits:
      
      8352d57 [Davies Liu] update version number
      a286f2f [Davies Liu] rollback loads()
      85246e5 [Davies Liu] add docs for use_unicode
      a0295e1 [Davies Liu] add an option to use str in textFile()
• [SPARK-2140] Updating heap memory calculation for YARN stable and alpha. · ed1980ff
  Chris Cope authored
      Chris Cope authored
      Updated pull request, reflecting YARN stable and alpha states. I am getting intermittent test failures on my own test infrastructure. Is that tracked anywhere yet?
      
      Author: Chris Cope <ccope@resilientscience.com>
      
      Closes #2253 from copester/master and squashes the following commits:
      
      5ad89da [Chris Cope] [SPARK-2140] Removing calculateAMMemory functions since they are no longer needed.
      52b4e45 [Chris Cope] [SPARK-2140] Updating heap memory calculation for YARN stable and alpha.
      ed1980ff
  5. Sep 10, 2014
    • Aaron Staple's avatar
      [SPARK-2781][SQL] Check resolution of LogicalPlans in Analyzer. · c27718f3
      Aaron Staple authored
      LogicalPlan contains a ‘resolved’ attribute indicating that all of its execution requirements have been resolved. This attribute is not checked before query execution. The analyzer contains a step to check that all Expressions are resolved, but this is not equivalent to checking all LogicalPlans. In particular, the Union plan’s implementation of ‘resolved’ verifies that the types of its children’s columns are compatible. Because the analyzer does not check that a Union plan is resolved, it is possible to execute a Union plan that outputs different types in the same column.  See SPARK-2781 for an example.
      
      This patch adds two checks to the analyzer’s CheckResolution rule. First, each logical plan is checked to see if it is not resolved despite its children being resolved. This allows the ‘problem’ unresolved plan to be included in the TreeNodeException for reporting. Then as a backstop the root plan is checked to see if it is resolved, which recursively checks that the entire plan tree is resolved. Note that the resolved attribute is implemented recursively, and this patch also explicitly checks the resolved attribute on each logical plan in the tree. I assume the query plan trees will not be large enough for this redundant checking to meaningfully impact performance.
      
      Because this patch starts validating that LogicalPlans are resolved before execution, I had to fix some cases where unresolved plans were passing through the analyzer as part of the implementation of the hive query system. In particular, HiveContext applies the CreateTables, PreInsertionCasts, and ExtractPythonUdfs rules manually after the analyzer runs. I moved these rules to the analyzer stage (for hive queries only), in the process completing a code TODO indicating the rules should be moved to the analyzer.
      
      It’s worth noting that moving the CreateTables rule means introducing an analyzer rule with a significant side effect - in this case the side effect is creating a hive table. The rule will only attempt to create a table once even if its batch is executed multiple times, because it converts the InsertIntoCreatedTable plan it matches against into an InsertIntoTable. Additionally, these hive rules must be added to the Resolution batch rather than as a separate batch because hive rules may be needed to resolve non-root nodes, leaving the root to be resolved on a subsequent batch iteration. For example, the hive compatibility test auto_smb_mapjoin_14, and others, make use of a query plan where the root is a Union and its children are each a hive InsertIntoTable.
      
      Mixing the custom hive rules with standard analyzer rules initially resulted in an additional failure because of policy differences between spark sql and hive when casting a boolean to a string. Hive casts booleans to strings as “true” / “false” while spark sql casts booleans to strings as “1” / “0” (causing the cast1.q test to fail). This behavior is a result of the BooleanCasts rule in HiveTypeCoercion.scala, and from looking at the implementation of BooleanCasts I think converting to “1”/“0” is potentially a programming mistake. (If the BooleanCasts rule is disabled, casting produces “true”/“false” instead.) I believe “true” / “false” should be the behavior for spark sql - I changed the behavior so bools are converted to “true”/“false” to be consistent with hive, and none of the existing spark tests failed.
      
      Finally, in some initial testing with hive it appears that an implicit type coercion of boolean to string results in a lowercase string, e.g. CONCAT( TRUE, “” ) -> “true” while an explicit cast produces an all caps string, e.g. CAST( TRUE AS STRING ) -> “TRUE”.  The change I’ve made just converts to lowercase strings in all cases.  I believe it is at least more correct than the existing spark sql implementation where all Cast expressions become “1” / “0”.
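      The two-part resolution check described above can be sketched roughly as follows (a hypothetical Python model, not the actual Scala Analyzer code; `Plan`, `check_resolution`, and `check_root` are illustrative names):

```python
class Plan:
    """A toy logical-plan node with a recursive 'resolved' attribute."""
    def __init__(self, name, children=(), self_resolved=True):
        self.name = name
        self.children = list(children)
        self.self_resolved = self_resolved

    @property
    def resolved(self):
        # A plan is resolved only if it and all of its children are resolved.
        return self.self_resolved and all(c.resolved for c in self.children)

def check_resolution(plan):
    """Walk the tree bottom-up and report the deepest plan that is itself
    unresolved even though all of its children are resolved - the 'problem'
    node to include in the error."""
    for child in plan.children:
        check_resolution(child)
    if not plan.resolved and all(c.resolved for c in plan.children):
        raise ValueError("unresolved plan: " + plan.name)

def check_root(plan):
    """Backstop: 'resolved' recurses, so checking the root checks the tree."""
    if not plan.resolved:
        raise ValueError("plan tree is not fully resolved")
```

      In this model a Union whose children output incompatible column types would report `self_resolved=False` and be caught before execution.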
      
      Author: Aaron Staple <aaron.staple@gmail.com>
      
      Closes #1706 from staple/SPARK-2781 and squashes the following commits:
      
      32683c4 [Aaron Staple] Fix compilation failure due to merge.
      7c77fda [Aaron Staple] Move ExtractPythonUdfs to Analyzer's extendedRules in HiveContext.
      d49bfb3 [Aaron Staple] Address review comments.
      915b690 [Aaron Staple] Fix merge issue causing compilation failure.
      701dcd2 [Aaron Staple] [SPARK-2781][SQL] Check resolution of LogicalPlans in Analyzer.
      c27718f3
    • Michael Armbrust's avatar
      [SPARK-3447][SQL] Remove explicit conversion with JListWrapper to avoid NPE · f92cde24
      Michael Armbrust authored
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #2323 from marmbrus/kryoJListNPE and squashes the following commits:
      
      9634f11 [Michael Armbrust] Rollback JSON RDD changes
      4d4d93c [Michael Armbrust] Merge remote-tracking branch 'origin/master' into kryoJListNPE
      646976b [Michael Armbrust] Fix JSON RDD Conversion too
      59065bc [Michael Armbrust] Remove explicit conversion to avoid NPE
      f92cde24
    • Michael Armbrust's avatar
      [SQL] Add test case with workaround for reading partitioned Avro files · 84e2c8bf
      Michael Armbrust authored
      In order to read from partitioned Avro files we need to also set the `SERDEPROPERTIES` since `TBLPROPERTIES` are not passed to the initialization.  This PR simply adds a test to make sure we don't break this workaround.
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #2340 from marmbrus/avroPartitioned and squashes the following commits:
      
      6b969d6 [Michael Armbrust] fix style
      fea2124 [Michael Armbrust] Add test case with workaround for reading partitioned avro files.
      84e2c8bf
    • qiping.lqp's avatar
      [SPARK-2207][SPARK-3272][MLLib]Add minimum information gain and minimum... · 79cdb9b6
      qiping.lqp authored
      [SPARK-2207][SPARK-3272][MLLib]Add minimum information gain and minimum instances per node as training parameters for decision tree.
      
      These two parameters can act as early stopping rules for pre-pruning. When a split would cause the left or right child to have fewer than `minInstancesPerNode` instances, or has less information gain than `minInfoGain`, the current node will not be split by it.
      
      When there are no possible splits that satisfy the requirements, there are no useful information gain stats, but we still need to calculate the predict value for the current node. So I separated the calculation of predict from the calculation of information gain, which can also save computation when the number of possible splits is large. Please see [SPARK-3272](https://issues.apache.org/jira/browse/SPARK-3272) for more details.
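      The pre-pruning rule amounts to a simple validity check on each candidate split (a hypothetical Python sketch; the actual implementation lives in MLlib's Scala decision-tree code):

```python
def split_is_valid(left_count, right_count, info_gain,
                   min_instances_per_node=1, min_info_gain=0.0):
    """Reject a candidate split that would leave either child with too few
    instances, or that does not gain enough information (pre-pruning)."""
    if left_count < min_instances_per_node or right_count < min_instances_per_node:
        return False
    return info_gain >= min_info_gain

# If no candidate split is valid, the node becomes a leaf; its predict
# value is still computed, independently of any split's info-gain stats.
```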
      
      CC: mengxr manishamde jkbradley, please help me review this, thanks.
      
      Author: qiping.lqp <qiping.lqp@alibaba-inc.com>
      Author: chouqin <liqiping1991@gmail.com>
      
      Closes #2332 from chouqin/dt-preprune and squashes the following commits:
      
      f1d11d1 [chouqin] fix typo
      c7ebaf1 [chouqin] fix typo
      39f9b60 [chouqin] change edge `minInstancesPerNode` to 2 and add one more test
      0278a11 [chouqin] remove `noSplit` and set `Predict` private to tree
      d593ec7 [chouqin] fix docs and change minInstancesPerNode to 1
      efcc736 [qiping.lqp] fix bug
      10b8012 [qiping.lqp] fix style
      6728fad [qiping.lqp] minor fix: remove empty lines
      bb465ca [qiping.lqp] Merge branch 'master' of https://github.com/apache/spark into dt-preprune
      cadd569 [qiping.lqp] add api docs
      46b891f [qiping.lqp] fix bug
      e72c7e4 [qiping.lqp] add comments
      845c6fa [qiping.lqp] fix style
      f195e83 [qiping.lqp] fix style
      987cbf4 [qiping.lqp] fix bug
      ff34845 [qiping.lqp] separate calculation of predict of node from calculation of info gain
      ac42378 [qiping.lqp] add min info gain and min instances per node parameters in decision tree
      79cdb9b6
    • WangTaoTheTonic's avatar
      [SPARK-3411] Improve load-balancing of concurrently-submitted drivers across workers · 558962a8
      WangTaoTheTonic authored
      If the waiting driver array is too big, the drivers in it will be dispatched to the first worker we get (if it has enough resources), with or without randomization.
      
      We should do randomization every time we dispatch a driver, in order to better balance drivers.
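      The fix amounts to re-randomizing the worker order for every driver dispatched, instead of once per scheduling pass (a hypothetical Python sketch of the idea, not the Master's actual Scala code):

```python
import random

def dispatch_drivers(drivers, workers, rng=random):
    """Assign each waiting driver to a worker, shuffling the worker list
    anew for every driver so load spreads out instead of piling onto the
    first worker that has enough resources."""
    assignments = {}
    for driver in drivers:
        candidates = list(workers)
        rng.shuffle(candidates)          # fresh shuffle per driver
        for w in candidates:
            if w["free_cores"] >= driver["cores"]:
                w["free_cores"] -= driver["cores"]
                assignments[driver["id"]] = w["id"]
                break
    return assignments
```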
      
      Author: WangTaoTheTonic <barneystinson@aliyun.com>
      Author: WangTao <barneystinson@aliyun.com>
      
      Closes #1106 from WangTaoTheTonic/fixBalanceDrivers and squashes the following commits:
      
      d1a928b [WangTaoTheTonic] Minor adjustment
      b6560cf [WangTaoTheTonic] solve the shuffle problem for HashSet
      f674e59 [WangTaoTheTonic] add comment and minor fix
      2835929 [WangTao] solve the failed test and avoid filtering
      2ca3091 [WangTao] fix checkstyle
      bc91bb1 [WangTao] Avoid shuffle every time we schedule the driver using round robin
      bbc7087 [WangTaoTheTonic] Optimize the schedule in Master
      558962a8
    • Wenchen Fan's avatar
      [SPARK-2096][SQL] Correctly parse dot notations · e4f4886d
      Wenchen Fan authored
      First let me write down the current `projections` grammar of spark sql:
      
          expression                : orExpression
          orExpression              : andExpression {"or" andExpression}
          andExpression             : comparisonExpression {"and" comparisonExpression}
          comparisonExpression      : termExpression | termExpression "=" termExpression | termExpression ">" termExpression | ...
          termExpression            : productExpression {"+"|"-" productExpression}
          productExpression         : baseExpression {"*"|"/"|"%" baseExpression}
          baseExpression            : expression "[" expression "]" | ... | ident | ...
          ident                     : identChar {identChar | digit} | delimiters | ...
          identChar                 : letter | "_" | "."
          delimiters                : "," | ";" | "(" | ")" | "[" | "]" | ...
          projection                : expression [["AS"] ident]
          projections               : projection { "," projection}
      
      For something like `a.b.c[1]`, it will be parsed as:
      [parse tree diagram: `a.b.c[1]` under the current grammar]
      But for something like `a[1].b`, the current grammar can't parse it correctly.
      A simple solution is written in `ParquetQuerySuite#NestedSqlParser`, changed grammars are:
      
          delimiters                : "." | "," | ";" | "(" | ")" | "[" | "]" | ...
          identChar                 : letter | "_"
          baseExpression            : expression "[" expression "]" | expression "." ident | ... | ident | ...
      This works well, but can't cover some corner cases like `select t.a.b from table as t`:
      [parse tree diagram: `select t.a.b from table as t` under the new grammar]
      `t.a.b` parsed as `GetField(GetField(UnResolved("t"), "a"), "b")` instead of `GetField(UnResolved("t.a"), "b")` using this new grammar.
      However, we can't resolve `t` as it's not a field but the whole table. (If we could do this, then `select t from table as t` would be legal, which is unexpected.)
      My solution is:
      
          dotExpressionHeader       : ident "." ident
          baseExpression            : expression "[" expression "]" | expression "." ident | ... | dotExpressionHeader  | ident | ...
      I passed all test cases under sql locally and added a more complex case.
      "arrayOfStruct.field1 to access all values of field1" is not supported yet. Since this PR has changed a lot of code, I will open another PR for it.
      I'm not familiar with the latter optimize phase, please correct me if I missed something.
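      The `dotExpressionHeader` idea can be modeled in a toy parser (hypothetical Python, not the actual Scala parser combinators): consume a leading `ident "." ident` pair as one unit, then treat the remaining `.name` and `[i]` pieces as postfix accessors.

```python
import re

TOKEN = re.compile(r"[A-Za-z_]\w*|\[\d+\]|\.")

def parse(expr):
    """Parse strings like 't.a.b' or 'a[1].b' into nested access nodes:
    ('Field', base, name) for '.name', ('Item', base, i) for '[i]'.
    A leading ident '.' ident pair becomes a single ('Header', t, a)
    unit, which an analyzer could later resolve as either column 'a'
    of table 't' or field 'a' of a column named 't'."""
    toks = TOKEN.findall(expr)
    # dotExpressionHeader: ident "." ident consumed as one unit
    if len(toks) >= 3 and toks[1] == "." and toks[2][0] not in ".[":
        node, rest = ("Header", toks[0], toks[2]), toks[3:]
    else:
        node, rest = ("Unresolved", toks[0]), toks[1:]
    while rest:
        if rest[0] == ".":
            node, rest = ("Field", node, rest[1]), rest[2:]
        else:  # an '[i]' index access
            node, rest = ("Item", node, int(rest[0][1:-1])), rest[1:]
    return node
```

      With this shape, `a[1].b` parses as field access on an array element, while `t.a.b` keeps `t.a` intact for resolution.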
      
      Author: Wenchen Fan <cloud0fan@163.com>
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #2230 from cloud-fan/dot and squashes the following commits:
      
      e1a8898 [Wenchen Fan] remove support for arbitrary nested arrays
      ee8a724 [Wenchen Fan] rollback LogicalPlan, support dot operation on nested array type
      a58df40 [Michael Armbrust] add regression test for doubly nested data
      16bc4c6 [Wenchen Fan] some enhance
      95d733f [Wenchen Fan] split long line
      dc31698 [Wenchen Fan] SPARK-2096 Correctly parse dot notations
      e4f4886d
    • Sandy Ryza's avatar
      SPARK-1713. Use a thread pool for launching executors. · 1f4a648d
      Sandy Ryza authored
      This patch copies the approach used in the MapReduce application master for launching containers.
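      The approach can be sketched with a thread pool (hypothetical Python; the actual change uses a thread pool in the YARN allocator's Scala code):

```python
from concurrent.futures import ThreadPoolExecutor

def launch_executors(containers, launch_fn, max_threads=25):
    """Launch executor containers concurrently instead of serially, so one
    slow node-manager RPC does not stall every other container launch."""
    with ThreadPoolExecutor(max_workers=max_threads) as pool:
        futures = [pool.submit(launch_fn, c) for c in containers]
        return [f.result() for f in futures]
```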
      
      Author: Sandy Ryza <sandy@cloudera.com>
      
      Closes #663 from sryza/sandy-spark-1713 and squashes the following commits:
      
      036550d [Sandy Ryza] SPARK-1713. [YARN] Use a threadpool for launching executor containers
      1f4a648d