  1. Jul 29, 2014
    • Michael Armbrust's avatar
      [SPARK-2054][SQL] Code Generation for Expression Evaluation · 84467468
      Michael Armbrust authored
      Adds a new method for evaluating expressions using code that is generated through Scala reflection.  This functionality is configured by the SQLConf option `spark.sql.codegen` and is currently turned off by default.
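
      For reference, enabling the feature is just a SQLConf setting. A minimal sketch, assuming a `SQLContext` with the string-based `setConf(key, value)` API (as used for SQLConf elsewhere in this log, e.g. SPARK-2631):

      ```scala
      import org.apache.spark.{SparkConf, SparkContext}
      import org.apache.spark.sql.SQLContext

      val sc = new SparkContext(new SparkConf().setMaster("local").setAppName("codegen-demo"))
      val sqlContext = new SQLContext(sc)

      // Opt in to generated expression evaluation (off by default).
      sqlContext.setConf("spark.sql.codegen", "true")
      ```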
      
      Evaluation can be done in several specialized ways:
       - *Projection* - Given an input row, produce a new row from a set of expressions that define each column in terms of the input row.  This can either produce a new Row object or perform the projection in-place on an existing Row (MutableProjection).
       - *Ordering* - Compares two rows based on a list of `SortOrder` expressions
       - *Condition* - Returns `true` or `false` given an input row.
      
      For each of the above operations there is both a Generated and Interpreted version.  When generation for a given expression type is undefined, the code generator falls back on calling the `eval` function of the expression class.  Even without custom code, there is still a potential speed up, as loops are unrolled and code can still be inlined by the JIT.
      
      This PR also contains a new type of Aggregation operator, `GeneratedAggregate`, that performs aggregation by using generated `Projection` code.  Currently the required expression rewriting only works for simple aggregations like `SUM` and `COUNT`.  This functionality will be extended in a future PR.
      
      This PR also performs several clean ups that simplified the implementation:
       - The notion of `Binding` all expressions in a tree automatically before query execution has been removed.  Instead, it is the responsibility of an operator to provide the input schema when creating one of the specialized evaluators defined above.  In cases when the standard eval method is going to be called, binding can still be done manually using `BindReferences`.  There are a few reasons for this change:  First, there were many operators where it just didn't work before.  For example, operators with more than one child, and operators like aggregation that do significant rewriting of the expression. Second, the semantics of equality with `BoundReferences` are broken.  Specifically, we have had a few bugs where partitioning breaks because of the binding.
       - A copy of the current `SQLContext` is automatically propagated to all `SparkPlan` nodes by the query planner.  Before this was done ad-hoc for the nodes that needed this.  However, this required a lot of boilerplate as one had to always remember to make it `transient` and also had to modify the `otherCopyArgs`.
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #993 from marmbrus/newCodeGen and squashes the following commits:
      
      96ef82c [Michael Armbrust] Merge remote-tracking branch 'apache/master' into newCodeGen
      f34122d [Michael Armbrust] Merge remote-tracking branch 'apache/master' into newCodeGen
      67b1c48 [Michael Armbrust] Use conf variable in SQLConf object
      4bdc42c [Michael Armbrust] Merge remote-tracking branch 'origin/master' into newCodeGen
      41a40c9 [Michael Armbrust] Merge remote-tracking branch 'origin/master' into newCodeGen
      de22aac [Michael Armbrust] Merge remote-tracking branch 'origin/master' into newCodeGen
      fed3634 [Michael Armbrust] Inspectors are not serializable.
      ef8d42b [Michael Armbrust] comments
      533fdfd [Michael Armbrust] More logging of expression rewriting for GeneratedAggregate.
      3cd773e [Michael Armbrust] Allow codegen for Generate.
      64b2ee1 [Michael Armbrust] Implement copy
      3587460 [Michael Armbrust] Drop unused string builder function.
      9cce346 [Michael Armbrust] Merge remote-tracking branch 'origin/master' into newCodeGen
      1a61293 [Michael Armbrust] Address review comments.
      0672e8a [Michael Armbrust] Address comments.
      1ec2d6e [Michael Armbrust] Address comments
      033abc6 [Michael Armbrust] off by default
      4771fab [Michael Armbrust] Docs, more test coverage.
      d30fee2 [Michael Armbrust] Merge remote-tracking branch 'origin/master' into newCodeGen
      d2ad5c5 [Michael Armbrust] Refactor putting SQLContext into SparkPlan. Fix ordering, other test cases.
      be2cd6b [Michael Armbrust] WIP: Remove old method for reference binding, more work on configuration.
      bc88ecd [Michael Armbrust] Style
      6cc97ca [Michael Armbrust] Merge remote-tracking branch 'origin/master' into newCodeGen
      4220f1e [Michael Armbrust] Better config, docs, etc.
      ca6cc6b [Michael Armbrust] WIP
      9d67d85 [Michael Armbrust] Fix hive planner
      fc522d5 [Michael Armbrust] Hook generated aggregation in to the planner.
      e742640 [Michael Armbrust] Remove unneeded changes and code.
      675e679 [Michael Armbrust] Upgrade paradise.
      0093376 [Michael Armbrust] Comment / indenting cleanup.
      d81f998 [Michael Armbrust] include schema for binding.
      0e889e8 [Michael Armbrust] Use typeOf instead tq
      f623ffd [Michael Armbrust] Quiet logging from test suite.
      efad14f [Michael Armbrust] Remove some half finished functions.
      92e74a4 [Michael Armbrust] add overrides
      a2b5408 [Michael Armbrust] WIP: Code generation with scala reflection.
      84467468
    • Josh Rosen's avatar
      [SPARK-2305] [PySpark] Update Py4J to version 0.8.2.1 · 22649b6c
      Josh Rosen authored
      Author: Josh Rosen <joshrosen@apache.org>
      
      Closes #1626 from JoshRosen/SPARK-2305 and squashes the following commits:
      
      03fb283 [Josh Rosen] Update Py4J to version 0.8.2.1.
      22649b6c
    • Michael Armbrust's avatar
      [SPARK-2631][SQL] Use SQLConf to configure in-memory columnar caching · 86534d0f
      Michael Armbrust authored
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #1638 from marmbrus/cachedConfig and squashes the following commits:
      
      2362082 [Michael Armbrust] Use SQLConf to configure in-memory columnar caching
      86534d0f
    • Michael Armbrust's avatar
      [SPARK-2716][SQL] Don't check resolved for having filters. · 39b81931
      Michael Armbrust authored
      For queries like `... HAVING COUNT(*) > 9` the expression is always resolved since it contains no attributes.  This was causing us to avoid doing the Having clause aggregation rewrite.
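
      A minimal sketch of the kind of query affected, assuming a `HiveContext`-style `hql` call as used elsewhere in this log (`src` is a hypothetical table; the HAVING predicate references no attributes, so it is immediately "resolved"):

      ```scala
      // Previously the aggregation rewrite was skipped because this filter was already resolved.
      hiveContext.hql("SELECT key, COUNT(*) FROM src GROUP BY key HAVING COUNT(*) > 9")
      ```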
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #1640 from marmbrus/havingNoRef and squashes the following commits:
      
      92d3901 [Michael Armbrust] Don't check resolved for having filters.
      39b81931
    • Patrick Wendell's avatar
      MAINTENANCE: Automated closing of pull requests. · 2c356665
      Patrick Wendell authored
      This commit exists to close the following pull requests on Github:
      
      Closes #740 (close requested by 'rxin')
      Closes #647 (close requested by 'rxin')
      Closes #1383 (close requested by 'rxin')
      Closes #1485 (close requested by 'pwendell')
      Closes #693 (close requested by 'rxin')
      Closes #478 (close requested by 'JoshRosen')
      2c356665
    • Zongheng Yang's avatar
      [SPARK-2393][SQL] Cost estimation optimization framework for Catalyst logical plans & sample usage. · c7db274b
      Zongheng Yang authored
      The idea is that every Catalyst logical plan gets hold of a Statistics object, which provides useful estimates of various quantities. See the implementation in `MetastoreRelation`.
      
      This patch also includes several usages of the estimation interface in the planner. For instance, we now use physical table sizes from the estimate interface to convert an equi-join to a broadcast join (when doing so is beneficial, as determined by a size threshold).
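
      A rough sketch of that planner-side check (the names below are illustrative, not the exact ones in the patch):

      ```scala
      // Illustrative only: a size estimate carried by a logical plan, and the
      // check the planner could use to turn an equi-join into a broadcast join.
      case class Statistics(sizeInBytes: BigInt)

      def canBroadcast(stats: Statistics, thresholdBytes: Long): Boolean =
        stats.sizeInBytes <= thresholdBytes

      // e.g. broadcast the smaller side if its estimated size is under 10 MB
      canBroadcast(Statistics(sizeInBytes = BigInt(5L * 1024 * 1024)), 10L * 1024 * 1024)
      ```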
      
      Finally, there are a couple minor accompanying changes including:
      - Remove the not-in-use `BaseRelation`.
      - Make SparkLogicalPlan take a `SQLContext` in the second param list.
      
      Author: Zongheng Yang <zongheng.y@gmail.com>
      
      Closes #1238 from concretevitamin/estimates and squashes the following commits:
      
      329071d [Zongheng Yang] Address review comments; turn config name from string to field in SQLConf.
      8663e84 [Zongheng Yang] Use BigInt for stat; for logical leaves, by default throw an exception.
      2f2fb89 [Zongheng Yang] Fix statistics for SparkLogicalPlan.
      9951305 [Zongheng Yang] Remove childrenStats.
      16fc60a [Zongheng Yang] Avoid calling statistics on plans if auto join conversion is disabled.
      8bd2816 [Zongheng Yang] Add a note on performance of statistics.
      6e594b8 [Zongheng Yang] Get size info from metastore for MetastoreRelation.
      01b7a3e [Zongheng Yang] Update scaladoc for a field and move it to @param section.
      549061c [Zongheng Yang] Remove numTuples in Statistics for now.
      729a8e2 [Zongheng Yang] Update docs to be more explicit.
      573e644 [Zongheng Yang] Remove singleton SQLConf and move back `settings` to the trait.
      2d99eb5 [Zongheng Yang] {Cleanup, use synchronized in, enrich} StatisticsSuite.
      ca5b825 [Zongheng Yang] Inject SQLContext into SparkLogicalPlan, removing SQLConf mixin from it.
      43d38a6 [Zongheng Yang] Revert optimization for BroadcastNestedLoopJoin (this fixes tests).
      0ef9e5b [Zongheng Yang] Use multiplication instead of sum for default estimates.
      4ef0d26 [Zongheng Yang] Make Statistics a case class.
      3ba8f3e [Zongheng Yang] Add comment.
      e5bcf5b [Zongheng Yang] Fix optimization conditions & update scala docs to explain.
      7d9216a [Zongheng Yang] Apply estimation to planning ShuffleHashJoin & BroadcastNestedLoopJoin.
      73cde01 [Zongheng Yang] Move SQLConf back. Assign default sizeInBytes to SparkLogicalPlan.
      73412be [Zongheng Yang] Move SQLConf to Catalyst & add default val for sizeInBytes.
      7a60ab7 [Zongheng Yang] s/Estimates/Statistics, s/cardinality/numTuples.
      de3ae13 [Zongheng Yang] Add parquetAfter() properly in test.
      dcff9bd [Zongheng Yang] Cleanups.
      84301a4 [Zongheng Yang] Refactors.
      5bf5586 [Zongheng Yang] Typo.
      56a8e6e [Zongheng Yang] Prototype impl of estimations for Catalyst logical plans.
      c7db274b
    • Doris Xin's avatar
      [SPARK-2082] stratified sampling in PairRDDFunctions that guarantees exact sample size · dc965364
      Doris Xin authored
      Implemented stratified sampling that guarantees an exact sample size using ScaSRS, with two passes over the RDD for sampling without replacement and three passes for sampling with replacement.
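
      A minimal usage sketch. The method name below is the one that ended up in the public pair-RDD API (`sampleByKeyExact`); the exact name and signature in this patch may differ:

      ```scala
      import org.apache.spark.SparkContext._  // pair-RDD functions in Spark 1.x

      val data = sc.parallelize(Seq(("a", 1), ("a", 2), ("b", 3), ("b", 4), ("b", 5)))
      val fractions = Map("a" -> 0.5, "b" -> 0.4)  // per-stratum sampling rates

      // Exact stratified sample without replacement (two passes over the data).
      val sample = data.sampleByKeyExact(withReplacement = false, fractions, seed = 42L)
      ```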
      
      Author: Doris Xin <doris.s.xin@gmail.com>
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #1025 from dorx/stratified and squashes the following commits:
      
      245439e [Doris Xin] moved minSamplingRate to getUpperBound
      eaf5771 [Doris Xin] bug fixes.
      17a381b [Doris Xin] fixed a merge issue and a failed unit
      ea7d27f [Doris Xin] merge master
      b223529 [Xiangrui Meng] use approx bounds for poisson fix poisson mean for waitlisting add unit tests for Java
      b3013a4 [Xiangrui Meng] move math3 back to test scope
      eecee5f [Doris Xin] Merge branch 'master' into stratified
      f4c21f3 [Doris Xin] Reviewer comments
      a10e68d [Doris Xin] style fix
      a2bf756 [Doris Xin] Merge branch 'master' into stratified
      680b677 [Doris Xin] use mapPartitionWithIndex instead
      9884a9f [Doris Xin] style fix
      bbfb8c9 [Doris Xin] Merge branch 'master' into stratified
      ee9d260 [Doris Xin] addressed reviewer comments
      6b5b10b [Doris Xin] Merge branch 'master' into stratified
      254e03c [Doris Xin] minor fixes and Java API.
      4ad516b [Doris Xin] remove unused imports from PairRDDFunctions
      bd9dc6e [Doris Xin] unit bug and style violation fixed
      1fe1cff [Doris Xin] Changed fractionByKey to a map to enable arg check
      944a10c [Doris Xin] [SPARK-2145] Add lower bound on sampling rate
      0214a76 [Doris Xin] cleanUp
      90d94c0 [Doris Xin] merge master
      9e74ab5 [Doris Xin] Separated out most of the logic in sampleByKey
      7327611 [Doris Xin] merge master
      50581fc [Doris Xin] added a TODO for logging in python
      46f6c8c [Doris Xin] fixed the NPE caused by closures being cleaned before being passed into the aggregate function
      7e1a481 [Doris Xin] changed the permission on SamplingUtil
      1d413ce [Doris Xin] fixed checkstyle issues
      9ee94ee [Doris Xin] [SPARK-2082] stratified sampling in PairRDDFunctions that guarantees exact sample size
      e3fd6a6 [Doris Xin] Merge branch 'master' into takeSample
      7cab53a [Doris Xin] fixed import bug in rdd.py
      ffea61a [Doris Xin] SPARK-1939: Refactor takeSample method in RDD
      1441977 [Doris Xin] SPARK-1939 Refactor takeSample method in RDD to use ScaSRS
      dc965364
    • Davies Liu's avatar
      [SPARK-2674] [SQL] [PySpark] support datetime type for SchemaRDD · f0d880e2
      Davies Liu authored
      Datetime and time values in Python will be converted into java.util.Calendar after serialization, and then into java.sql.Timestamp during inferSchema().
      
      In javaToPython(), Timestamp will be converted into Calendar, then be converted into datetime in Python after pickling.
      
      Author: Davies Liu <davies.liu@gmail.com>
      
      Closes #1601 from davies/date and squashes the following commits:
      
      f0599b0 [Davies Liu] remove tests for sets and tuple in sql, fix list of list
      c9d607a [Davies Liu] convert datetype for runtime
      709d40d [Davies Liu] remove brackets
      96db384 [Davies Liu] support datetime type for SchemaRDD
      f0d880e2
    • Yin Huai's avatar
      [SPARK-2730][SQL] When retrieving a value from a Map, GetItem evaluates key twice · e3643485
      Yin Huai authored
      JIRA: https://issues.apache.org/jira/browse/SPARK-2730
      
      Author: Yin Huai <huai@cse.ohio-state.edu>
      
      Closes #1637 from yhuai/SPARK-2730 and squashes the following commits:
      
      1a9f24e [Yin Huai] Remove unnecessary key evaluation.
      e3643485
    • Daoyuan's avatar
      [SQL]change some test lists · 0c5c6a63
      Daoyuan authored
      1. there is no `hook_context.q` in the query folder, only a `hook_context_cs.q`
      2. there is no `compute_stats_table.q` in the query folder
      3. there is no `having1.q` in the query folder
      4. `udf_E` and `udf_PI` appear twice in the white list
      
      Author: Daoyuan <daoyuan.wang@intel.com>
      
      Closes #1634 from adrian-wang/testcases and squashes the following commits:
      
      d7482ce [Daoyuan] change some test lists
      0c5c6a63
    • Hari Shreedharan's avatar
      [STREAMING] SPARK-1729. Make Flume pull data from source, rather than the current push model · 800ecff4
      Hari Shreedharan authored
      Currently Spark uses Flume's internal Avro Protocol to ingest data from Flume. If the executor running the
      receiver fails, it currently has to be restarted on the same node to be able to receive data.
      
      This commit adds a new Sink which can be deployed to a Flume agent. This sink can be polled by a new
      DStream that is also included in this commit. This model ensures that data can be pulled into Spark from
      Flume even if the receiver is restarted on a new node. This also allows the receiver to receive data on
      multiple threads for better performance.
      
      Author: Hari Shreedharan <harishreedharan@gmail.com>
      Author: Hari Shreedharan <hshreedharan@apache.org>
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      Author: harishreedharan <hshreedharan@cloudera.com>
      
      Closes #807 from harishreedharan/master and squashes the following commits:
      
      e7f70a3 [Hari Shreedharan] Merge remote-tracking branch 'asf-git/master'
      96cfb6f [Hari Shreedharan] Merge remote-tracking branch 'asf/master'
      e48d785 [Hari Shreedharan] Documenting flume-sink being ignored for Mima checks.
      5f212ce [Hari Shreedharan] Ignore Spark Sink from mima.
      981bf62 [Hari Shreedharan] Merge remote-tracking branch 'asf/master'
      7a1bc6e [Hari Shreedharan] Fix SparkBuild.scala
      a082eb3 [Hari Shreedharan] Merge remote-tracking branch 'asf/master'
      1f47364 [Hari Shreedharan] Minor fixes.
      73d6f6d [Hari Shreedharan] Cleaned up tests a bit. Added some docs in multiple places.
      65b76b4 [Hari Shreedharan] Fixing the unit test.
      e59cc20 [Hari Shreedharan] Use SparkFlumeEvent instead of the new type. Also, Flume Polling Receiver now uses the store(ArrayBuffer) method.
      f3c99d1 [Hari Shreedharan] Merge remote-tracking branch 'asf/master'
      3572180 [Hari Shreedharan] Adding a license header, making Jenkins happy.
      799509f [Hari Shreedharan] Fix a compile issue.
      3c5194c [Hari Shreedharan] Merge remote-tracking branch 'asf/master'
      d248d22 [harishreedharan] Merge pull request #1 from tdas/flume-polling
      10b6214 [Tathagata Das] Changed public API, changed sink package, and added java unit test to make sure Java API is callable from Java.
      1edc806 [Hari Shreedharan] SPARK-1729. Update logging in Spark Sink.
      8c00289 [Hari Shreedharan] More debug messages
      393bd94 [Hari Shreedharan] SPARK-1729. Use LinkedBlockingQueue instead of ArrayBuffer to keep track of connections.
      120e2a1 [Hari Shreedharan] SPARK-1729. Some test changes and changes to utils classes.
      9fd0da7 [Hari Shreedharan] SPARK-1729. Use foreach instead of map for all Options.
      8136aa6 [Hari Shreedharan] Adding TransactionProcessor to map on returning batch of data
      86aa274 [Hari Shreedharan] Merge remote-tracking branch 'asf/master'
      205034d [Hari Shreedharan] Merging master in
      4b0c7fc [Hari Shreedharan] FLUME-1729. New Flume-Spark integration.
      bda01fc [Hari Shreedharan] FLUME-1729. Flume-Spark integration.
      0d69604 [Hari Shreedharan] FLUME-1729. Better Flume-Spark integration.
      3c23c18 [Hari Shreedharan] SPARK-1729. New Spark-Flume integration.
      70bcc2a [Hari Shreedharan] SPARK-1729. New Flume-Spark integration.
      d6fa3aa [Hari Shreedharan] SPARK-1729. New Flume-Spark integration.
      e7da512 [Hari Shreedharan] SPARK-1729. Fixing import order
      9741683 [Hari Shreedharan] SPARK-1729. Fixes based on review.
      c604a3c [Hari Shreedharan] SPARK-1729. Optimize imports.
      0f10788 [Hari Shreedharan] SPARK-1729. Make Flume pull data from source, rather than the current push model
      87775aa [Hari Shreedharan] SPARK-1729. Make Flume pull data from source, rather than the current push model
      8df37e4 [Hari Shreedharan] SPARK-1729. Make Flume pull data from source, rather than the current push model
      03d6c1c [Hari Shreedharan] SPARK-1729. Make Flume pull data from source, rather than the current push model
      08176ad [Hari Shreedharan] SPARK-1729. Make Flume pull data from source, rather than the current push model
      d24d9d4 [Hari Shreedharan] SPARK-1729. Make Flume pull data from source, rather than the current push model
      6d6776a [Hari Shreedharan] SPARK-1729. Make Flume pull data from source, rather than the current push model
      800ecff4
    • Aaron Staple's avatar
      Minor indentation and comment typo fixes. · fc4d0570
      Aaron Staple authored
      Author: Aaron Staple <astaple@gmail.com>
      
      Closes #1630 from staple/minor and squashes the following commits:
      
      6f295a2 [Aaron Staple] Fix typos in comment about ExprId.
      8566467 [Aaron Staple] Fix off by one column indentation in SqlParser.
      fc4d0570
    • Xiangrui Meng's avatar
      [SPARK-2174][MLLIB] treeReduce and treeAggregate · 20424dad
      Xiangrui Meng authored
      In `reduce` and `aggregate`, the driver node spends time linear in the number of partitions. This becomes a bottleneck when there are many partitions and the data from each partition is big.
      
      SPARK-1485 (#506) tracks the progress of implementing AllReduce on Spark. I tried several implementations, including butterfly, reduce + broadcast, and treeReduce + broadcast; treeReduce + BT broadcast seems to be the right way to go for Spark. Using a binary tree may introduce some communication overhead, because the driver still needs to coordinate the data shuffling. In my experiments, n -> sqrt(n) -> 1 gives the best performance in general, which is why I set "depth = 2" in the MLlib algorithms. But it certainly needs more testing.
      
      I left `treeReduce` and `treeAggregate` public for easy testing. Some numbers from a test on a 32-node m3.2xlarge cluster.
      
      code:
      
      ~~~
      import breeze.linalg._
      import org.apache.log4j._
      
      Logger.getRootLogger.setLevel(Level.OFF)
      
      for (n <- Seq(1, 10, 100, 1000, 10000, 100000, 1000000)) {
        val vv = sc.parallelize(0 until 1024, 1024).map(i => DenseVector.zeros[Double](n))
        var start = System.nanoTime(); vv.treeReduce(_ + _, 2); println((System.nanoTime() - start) / 1e9)
        start = System.nanoTime(); vv.reduce(_ + _); println((System.nanoTime() - start) / 1e9)
      }
      ~~~
      
      out:
      
      | n | treeReduce(_ + _, 2) | reduce |
      |---|---------------------|-----------|
      | 10 | 0.215538731 | 0.204206899 |
      | 100 | 0.278405907 | 0.205732582 |
      | 1000 | 0.208972182 | 0.214298272 |
      | 10000 | 0.194792071 | 0.349353687 |
      | 100000 | 0.347683285 | 6.086671892 |
      | 1000000 | 2.589350682 | 66.572906702 |
      
      CC: @pwendell
      
      This is clearly more scalable than the default implementation. My question is whether we should use this implementation in `reduce` and `aggregate` or put them as separate methods. The concern is that users may use `reduce` and `aggregate` as collect, where having multiple stages doesn't reduce the data size. However, in this case, `collect` is more appropriate.
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #1110 from mengxr/tree and squashes the following commits:
      
      c6cd267 [Xiangrui Meng] make depth default to 2
      b04b96a [Xiangrui Meng] address comments
      9bcc5d3 [Xiangrui Meng] add depth for readability
      7495681 [Xiangrui Meng] fix compile error
      142a857 [Xiangrui Meng] merge master
      d58a087 [Xiangrui Meng] move treeReduce and treeAggregate to mllib
      8a2a59c [Xiangrui Meng] Merge branch 'master' into tree
      be6a88a [Xiangrui Meng] use treeAggregate in mllib
      0f94490 [Xiangrui Meng] add docs
      eb71c33 [Xiangrui Meng] add treeReduce
      fe42a5e [Xiangrui Meng] add treeAggregate
      20424dad
    • Reynold Xin's avatar
      [SPARK-2726] and [SPARK-2727] Remove SortOrder and do in-place sort. · 96ba04bb
      Reynold Xin authored
      The pull request includes two changes:
      
      1. Removes the SortOrder introduced by SPARK-2125. The key ordering already includes the sort-direction information, since an Ordering can be reversed; this is similar to Java's Comparator interface, and an API rarely accepts both a Comparator and a SortOrder (see the sketch after this list).
      
      2. Replaces the sortWith call in HashShuffleReader with an in-place quick sort.
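
      As noted in (1), reversing an `Ordering` already captures the sort direction, so a separate SortOrder adds nothing (a quick, generic sketch):

      ```scala
      val asc  = Ordering.Int            // ascending
      val desc = Ordering.Int.reverse    // descending; no separate SortOrder needed

      Seq(3, 1, 2).sorted(asc)           // List(1, 2, 3)
      Seq(3, 1, 2).sorted(desc)          // List(3, 2, 1)
      ```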
      
      Author: Reynold Xin <rxin@apache.org>
      
      Closes #1631 from rxin/sortOrder and squashes the following commits:
      
      c9d37e1 [Reynold Xin] [SPARK-2726] and [SPARK-2727] Remove SortOrder and do in-place sort.
      96ba04bb
    • Davies Liu's avatar
      [SPARK-791] [PySpark] fix pickle itemgetter with cloudpickle · 92ef0262
      Davies Liu authored
      fix the problem with pickle operator.itemgetter with multiple index.
      
      Author: Davies Liu <davies.liu@gmail.com>
      
      Closes #1627 from davies/itemgetter and squashes the following commits:
      
      aabd7fa [Davies Liu] fix pickle itemgetter with cloudpickle
      92ef0262
    • Davies Liu's avatar
      [SPARK-2580] [PySpark] keep silent in worker if JVM close the socket · ccd5ab5f
      Davies Liu authored
      During rdd.take(n), the JVM will close the socket once it has received enough data, and the Python worker should keep silent in this case.
      
      At the same time, the worker should not print the traceback to stderr if it sends the traceback to the JVM successfully.
      
      Author: Davies Liu <davies.liu@gmail.com>
      
      Closes #1625 from davies/error and squashes the following commits:
      
      4fbcc6d [Davies Liu] disable log4j during testing when exception is expected.
      cc14202 [Davies Liu] keep silent in worker if JVM close the socket
      ccd5ab5f
  2. Jul 28, 2014
    • Yadong Qi's avatar
      Excess judgment · 16ef4d11
      Yadong Qi authored
      Author: Yadong Qi <qiyadong2010@gmail.com>
      
      Closes #1629 from watermen/bug-fix2 and squashes the following commits:
      
      59b7237 [Yadong Qi] Update HiveQl.scala
      16ef4d11
    • Aaron Davidson's avatar
      Use commons-lang3 in SignalLogger rather than commons-lang · 39ab87b9
      Aaron Davidson authored
      Spark only transitively depends on the latter, based on the Hadoop version.
      
      Author: Aaron Davidson <aaron@databricks.com>
      
      Closes #1621 from aarondav/lang3 and squashes the following commits:
      
      93c93bf [Aaron Davidson] Use commons-lang3 in SignalLogger rather than commons-lang
      39ab87b9
    • Cheng Lian's avatar
      [SPARK-2410][SQL] Merging Hive Thrift/JDBC server (with Maven profile fix) · a7a9d144
      Cheng Lian authored
      JIRA issue: [SPARK-2410](https://issues.apache.org/jira/browse/SPARK-2410)
      
      Another try for #1399 & #1600. Those two PRs broke Jenkins builds because we made a separate `hive-thriftserver` profile in the `assembly` sub-project, but the `hive-thriftserver` module is defined outside that profile. As a result, every pull request, even one that doesn't touch SQL code, also executes the test suites defined in `hive-thriftserver`, and those tests fail because the related .class files are not included in the assembly jar.
      
      In the most recent commit, the `hive-thriftserver` module has been moved into its own profile to fix this problem. All previous commits have been squashed for clarity.
      
      Author: Cheng Lian <lian.cs.zju@gmail.com>
      
      Closes #1620 from liancheng/jdbc-with-maven-fix and squashes the following commits:
      
      629988e [Cheng Lian] Moved hive-thriftserver module definition into its own profile
      ec3c7a7 [Cheng Lian] Cherry picked the Hive Thrift server
      a7a9d144
    • DB Tsai's avatar
      [SPARK-2479][MLlib] Comparing floating-point numbers using relative error in UnitTests · 255b56f9
      DB Tsai authored
      Floating point math is not exact, and most floating-point numbers end up being slightly imprecise due to rounding errors.
      
      Simple values like 0.1 cannot be precisely represented using binary floating point numbers, and the limited precision of floating point numbers means that slight changes in the order of operations or the precision of intermediates can change the result.
      
      That means that comparing two floats to see if they are equal is usually not what we want. As long as this imprecision stays small, it can usually be ignored.
      
      Based on discussion in the community, we have implemented two different APIs: one for relative tolerance and one for absolute tolerance. It makes sense for test writers to know which one they need, depending on their circumstances.
      
      Developers also need to specify the eps explicitly; there is no default value, since a default would sometimes cause confusion.
      
      When comparing against zero using relative tolerance, an exception is raised to warn users that the comparison is meaningless.
      
      For relative tolerance, users can now write
      
          assert(23.1 ~== 23.52 relTol 0.02)
          assert(23.1 ~== 22.74 relTol 0.02)
          assert(23.1 ~= 23.52 relTol 0.02)
          assert(23.1 ~= 22.74 relTol 0.02)
          assert(!(23.1 !~= 23.52 relTol 0.02))
          assert(!(23.1 !~= 22.74 relTol 0.02))
      
          // This will throw exception with the following message.
          // "Did not expect 23.1 and 23.52 to be within 0.02 using relative tolerance."
          assert(23.1 !~== 23.52 relTol 0.02)
      
          // "Expected 23.1 and 22.34 to be within 0.02 using relative tolerance."
          assert(23.1 ~== 22.34 relTol 0.02)
      
      For absolute error,
      
          assert(17.8 ~== 17.99 absTol 0.2)
          assert(17.8 ~== 17.61 absTol 0.2)
          assert(17.8 ~= 17.99 absTol 0.2)
          assert(17.8 ~= 17.61 absTol 0.2)
          assert(!(17.8 !~= 17.99 absTol 0.2))
          assert(!(17.8 !~= 17.61 absTol 0.2))
      
          // This will throw exception with the following message.
          // "Did not expect 17.8 and 17.99 to be within 0.2 using absolute error."
          assert(17.8 !~== 17.99 absTol 0.2)
      
          // "Expected 17.8 and 17.59 to be within 0.2 using absolute error."
          assert(17.8 ~== 17.59 absTol 0.2)
      
      Authors:
        DB Tsai <dbtsai@alpinenow.com>
        Marek Kolodziej <marek@alpinenow.com>
      
      Author: DB Tsai <dbtsai@alpinenow.com>
      
      Closes #1425 from dbtsai/SPARK-2479_comparing_floating_point and squashes the following commits:
      
      8c7cbcc [DB Tsai] Alpine Data Labs
      255b56f9
    • Cheng Hao's avatar
      [SPARK-2523] [SQL] Hadoop table scan bug fixing · 2b8d89e3
      Cheng Hao authored
      In HiveTableScan.scala, the ObjectInspector was created once for all of the partition-based records, which can cause a ClassCastException if the object inspector is not identical between the table and its partitions.
      
      This is the follow up with:
      https://github.com/apache/spark/pull/1408
      https://github.com/apache/spark/pull/1390
      
      I've run a micro benchmark locally with 15,000,000 records in total, and got the results below:
      
      With This Patch  |  Partition-Based Table  |  Non-Partition-Based Table
      ------------ | ------------- | -------------
      No  |  1927 ms  |  1885 ms
      Yes  | 1541 ms  |  1524 ms
      
      The results show that this patch also improves performance.
      
      PS: the benchmark code is also attached (thanks liancheng).
      ```
      package org.apache.spark.sql.hive
      
      import org.apache.spark.SparkContext
      import org.apache.spark.SparkConf
      import org.apache.spark.sql._
      
      object HiveTableScanPrepare extends App {
        case class Record(key: String, value: String)
      
        val sparkContext = new SparkContext(
          new SparkConf()
            .setMaster("local")
            .setAppName(getClass.getSimpleName.stripSuffix("$")))
      
        val hiveContext = new LocalHiveContext(sparkContext)
      
        val rdd = sparkContext.parallelize((1 to 3000000).map(i => Record(s"$i", s"val_$i")))
      
        import hiveContext._
      
        hql("SHOW TABLES")
        hql("DROP TABLE if exists part_scan_test")
        hql("DROP TABLE if exists scan_test")
        hql("DROP TABLE if exists records")
        rdd.registerAsTable("records")
      
        hql("""CREATE TABLE part_scan_test (key STRING, value STRING) PARTITIONED BY (part1 string, part2 STRING)
                       | ROW FORMAT SERDE
                       | 'org.apache.hadoop.hive.serde2.columnar.LazyBinaryColumnarSerDe'
                       | STORED AS RCFILE
                     """.stripMargin)
        hql("""CREATE TABLE scan_test (key STRING, value STRING)
                       | ROW FORMAT SERDE
                       | 'org.apache.hadoop.hive.serde2.columnar.LazyBinaryColumnarSerDe'
                       | STORED AS RCFILE
                     """.stripMargin)
      
        for (part1 <- 2000 until 2001) {
          for (part2 <- 1 to 5) {
            hql(s"""from records
                       | insert into table part_scan_test PARTITION (part1='$part1', part2='2010-01-$part2')
                       | select key, value
                     """.stripMargin)
            hql(s"""from records
                       | insert into table scan_test select key, value
                     """.stripMargin)
          }
        }
      }
      
      object HiveTableScanTest extends App {
        val sparkContext = new SparkContext(
          new SparkConf()
            .setMaster("local")
            .setAppName(getClass.getSimpleName.stripSuffix("$")))
      
        val hiveContext = new LocalHiveContext(sparkContext)
      
        import hiveContext._
      
        hql("SHOW TABLES")
        val part_scan_test = hql("select key, value from part_scan_test")
        val scan_test = hql("select key, value from scan_test")
      
        val r_part_scan_test = (0 to 5).map(i => benchmark(part_scan_test))
        val r_scan_test = (0 to 5).map(i => benchmark(scan_test))
        println("Scanning Partition-Based Table")
        r_part_scan_test.foreach(printResult)
        println("Scanning Non-Partition-Based Table")
        r_scan_test.foreach(printResult)
      
        def printResult(result: (Long, Long)) {
          println(s"Duration: ${result._1} ms Result: ${result._2}")
        }
      
        def benchmark(srdd: SchemaRDD) = {
          val begin = System.currentTimeMillis()
          val result = srdd.count()
          val end = System.currentTimeMillis()
          ((end - begin), result)
        }
      }
      ```
      
      Author: Cheng Hao <hao.cheng@intel.com>
      
      Closes #1439 from chenghao-intel/hadoop_table_scan and squashes the following commits:
      
      888968f [Cheng Hao] Fix issues in code style
      27540ba [Cheng Hao] Fix the TableScan Bug while partition serde differs
      40a24a7 [Cheng Hao] Add Unit Test
      2b8d89e3
    • Josh Rosen's avatar
      [SPARK-1550] [PySpark] Allow SparkContext creation after failed attempts · a7d145e9
      Josh Rosen authored
      This addresses a PySpark issue where a failed attempt to construct SparkContext would prevent any future SparkContext creation.
      
      Author: Josh Rosen <joshrosen@apache.org>
      
      Closes #1606 from JoshRosen/SPARK-1550 and squashes the following commits:
      
      ec7fadc [Josh Rosen] [SPARK-1550] [PySpark] Allow SparkContext creation after failed attempts
      a7d145e9
  3. Jul 27, 2014
    • Rahul Singhal's avatar
      SPARK-2651: Add maven scalastyle plugin · d7eac4c3
      Rahul Singhal authored
      Can be run as: "mvn scalastyle:check"
      
      Author: Rahul Singhal <rahul.singhal@guavus.com>
      
      Closes #1550 from rahulsinghaliitd/SPARK-2651 and squashes the following commits:
      
      53748dd [Rahul Singhal] SPARK-2651: Add maven scalastyle plugin
      d7eac4c3
    • Patrick Wendell's avatar
      Revert "[SPARK-2410][SQL] Merging Hive Thrift/JDBC server" · e5bbce9a
      Patrick Wendell authored
      This reverts commit f6ff2a61.
      e5bbce9a
    • Doris Xin's avatar
      [SPARK-2514] [mllib] Random RDD generator · 81fcdd22
      Doris Xin authored
      Utilities for generating random RDDs.
      
      RandomRDD and RandomVectorRDD are created instead of using `sc.parallelize(range:Range)` because `Range` objects in Scala can only have `size <= Int.MaxValue`.
      
      The object `RandomRDDGenerators` can be transformed into a generator class to reduce the number of auxiliary methods for optional arguments.
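
      A usage sketch, assuming the factory object exposes helpers along the lines of `normalRDD(sc, size, numPartitions, seed)` (the object was later renamed `RandomRDDs`, so treat the import and names as illustrative):

      ```scala
      import org.apache.spark.mllib.random.RandomRDDs._  // `RandomRDDGenerators` in this patch

      // One million i.i.d. standard-normal doubles spread over 8 partitions.
      val normals = normalRDD(sc, 1000000L, numPartitions = 8, seed = 11L)
      ```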
      
      Author: Doris Xin <doris.s.xin@gmail.com>
      
      Closes #1520 from dorx/randomRDD and squashes the following commits:
      
      01121ac [Doris Xin] reviewer comments
      6bf27d8 [Doris Xin] Merge branch 'master' into randomRDD
      a8ea92d [Doris Xin] Reviewer comments
      063ea0b [Doris Xin] Merge branch 'master' into randomRDD
      aec68eb [Doris Xin] newline
      bc90234 [Doris Xin] units passed.
      d56cacb [Doris Xin] impl with RandomRDD
      92d6f1c [Doris Xin] solution for Cloneable
      df5bcff [Doris Xin] Merge branch 'generator' into randomRDD
      f46d928 [Doris Xin] WIP
      49ed20d [Doris Xin] alternative poisson distribution generator
      7cb0e40 [Doris Xin] fix for data inconsistency
      8881444 [Doris Xin] RandomRDDGenerator: initial design
      81fcdd22
    • Andrew Or's avatar
      [SPARK-1777] Prevent OOMs from single partitions · ecf30ee7
      Andrew Or authored
      **Problem.** When caching, we currently unroll the entire RDD partition before making sure we have enough free memory. This is a common cause for OOMs especially when (1) the BlockManager has little free space left in memory, and (2) the partition is large.
      
      **Solution.** We maintain a global memory pool of `M` bytes shared across all threads, similar to the way we currently manage memory for shuffle aggregation. Then, while we unroll each partition, periodically check if there is enough space to continue. If not, drop enough RDD blocks to ensure we have at least `M` bytes to work with, then try again. If we still don't have enough space to unroll the partition, give up and drop the block to disk directly if applicable.
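
      A rough sketch of the unroll loop described above (illustrative pseudologic only, not the actual `MemoryStore` code; element count stands in for real size estimation):

      ```scala
      import scala.collection.mutable.ArrayBuffer
      import scala.reflect.ClassTag

      def unrollSafely[T: ClassTag](values: Iterator[T],
                                    hasEnoughMemory: Long => Boolean,
                                    reserveMore: Long => Boolean): Either[Array[T], Iterator[T]] = {
        val buffer = new ArrayBuffer[T]
        val checkPeriod = 16          // check memory every N elements, not every element
        var count = 0L
        while (values.hasNext) {
          buffer += values.next()
          count += 1
          if (count % checkPeriod == 0 && !hasEnoughMemory(count)) {
            if (!reserveMore(count)) {
              // Could not free enough space: give up and hand back what we have
              // plus the rest of the iterator (the caller may spill to disk).
              return Right(buffer.iterator ++ values)
            }
          }
        }
        Left(buffer.toArray)          // fully unrolled; safe to cache in memory
      }
      ```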
      
      **New configurations.**
      - `spark.storage.bufferFraction` - the value of `M` as a fraction of the storage memory. (default: 0.2)
      - `spark.storage.safetyFraction` - a margin of safety in case size estimation is slightly off. This is the equivalent of the existing `spark.shuffle.safetyFraction`. (default 0.9)
      
      For more detail, see the [design document](https://issues.apache.org/jira/secure/attachment/12651793/spark-1777-design-doc.pdf). Tests pending for performance and memory usage patterns.
      
      Author: Andrew Or <andrewor14@gmail.com>
      
      Closes #1165 from andrewor14/them-rdd-memories and squashes the following commits:
      
      e77f451 [Andrew Or] Merge branch 'master' of github.com:apache/spark into them-rdd-memories
      c7c8832 [Andrew Or] Simplify logic + update a few comments
      269d07b [Andrew Or] Very minor changes to tests
      6645a8a [Andrew Or] Merge branch 'master' of github.com:apache/spark into them-rdd-memories
      b7e165c [Andrew Or] Add new tests for unrolling blocks
      f12916d [Andrew Or] Slightly clean up tests
      71672a7 [Andrew Or] Update unrollSafely tests
      369ad07 [Andrew Or] Correct ensureFreeSpace and requestMemory behavior
      f4d035c [Andrew Or] Allow one thread to unroll multiple blocks
      a66fbd2 [Andrew Or] Rename a few things + update comments
      68730b3 [Andrew Or] Fix weird scalatest behavior
      e40c60d [Andrew Or] Fix MIMA excludes
      ff77aa1 [Andrew Or] Fix tests
      1a43c06 [Andrew Or] Merge branch 'master' of github.com:apache/spark into them-rdd-memories
      b9a6eee [Andrew Or] Simplify locking behavior on unrollMemoryMap
      ed6cda4 [Andrew Or] Formatting fix (super minor)
      f9ff82e [Andrew Or] putValues -> putIterator + putArray
      beb368f [Andrew Or] Merge branch 'master' of github.com:apache/spark into them-rdd-memories
      8448c9b [Andrew Or] Fix tests
      a49ba4d [Andrew Or] Do not expose unroll memory check period
      69bc0a5 [Andrew Or] Always synchronize on putLock before unrollMemoryMap
      3f5a083 [Andrew Or] Simplify signature of ensureFreeSpace
      dce55c8 [Andrew Or] Merge branch 'master' of github.com:apache/spark into them-rdd-memories
      8288228 [Andrew Or] Synchronize put and unroll properly
      4f18a3d [Andrew Or] bufferFraction -> unrollFraction
      28edfa3 [Andrew Or] Update a few comments / log messages
      728323b [Andrew Or] Do not synchronize every 1000 elements
      5ab2329 [Andrew Or] Merge branch 'master' of github.com:apache/spark into them-rdd-memories
      129c441 [Andrew Or] Fix bug: Use toArray rather than array
      9a65245 [Andrew Or] Update a few comments + minor control flow changes
      57f8d85 [Andrew Or] Merge branch 'master' of github.com:apache/spark into them-rdd-memories
      abeae4f [Andrew Or] Add comment clarifying the MEMORY_AND_DISK case
      3dd96aa [Andrew Or] AppendOnlyBuffer -> Vector (+ a few small changes)
      f920531 [Andrew Or] Merge branch 'master' of github.com:apache/spark into them-rdd-memories
      0871835 [Andrew Or] Add an effective storage level interface to BlockManager
      64e7d4c [Andrew Or] Add/modify a few comments (minor)
      8af2f35 [Andrew Or] Merge branch 'master' of github.com:apache/spark into them-rdd-memories
      4f4834e [Andrew Or] Use original storage level for blocks dropped to disk
      ecc8c2d [Andrew Or] Fix binary incompatibility
      24185ea [Andrew Or] Avoid dropping a block back to disk if reading from disk
      2b7ee66 [Andrew Or] Fix bug in SizeTracking*
      9b9a273 [Andrew Or] Fix tests
      20eb3e5 [Andrew Or] Merge branch 'master' of github.com:apache/spark into them-rdd-memories
      649bdb3 [Andrew Or] Document spark.storage.bufferFraction
      a10b0e7 [Andrew Or] Add initial memory request threshold + rename a few things
      e9c3cb0 [Andrew Or] cacheMemoryMap -> unrollMemoryMap
      198e374 [Andrew Or] Unfold -> unroll
      0d50155 [Andrew Or] Merge branch 'master' of github.com:apache/spark into them-rdd-memories
      d9d02a8 [Andrew Or] Remove unused param in unfoldSafely
      ec728d8 [Andrew Or] Add tests for safe unfolding of blocks
      22b2209 [Andrew Or] Merge branch 'master' of github.com:apache/spark into them-rdd-memories
      078eb83 [Andrew Or] Add check for hasNext in PrimitiveVector.iterator
      0871535 [Andrew Or] Fix tests in BlockManagerSuite
      d68f31e [Andrew Or] Safely unfold blocks for all memory puts
      5961f50 [Andrew Or] Fix tests
      195abd7 [Andrew Or] Refactor: move unfold logic to MemoryStore
      1e82d00 [Andrew Or] Merge branch 'master' of github.com:apache/spark into them-rdd-memories
      3ce413e [Andrew Or] Merge branch 'master' of github.com:apache/spark into them-rdd-memories
      d5dd3b4 [Andrew Or] Free buffer memory in finally
      ea02eec [Andrew Or] Fix tests
      b8e1d9c [Andrew Or] Merge branch 'master' of github.com:apache/spark into them-rdd-memories
      a8704c1 [Andrew Or] Merge branch 'master' of github.com:apache/spark into them-rdd-memories
      e1b8b25 [Andrew Or] Merge branch 'master' of github.com:apache/spark into them-rdd-memories
      87aa75c [Andrew Or] Fix mima excludes again (typo)
      11eb921 [Andrew Or] Clarify comment (minor)
      50cae44 [Andrew Or] Remove now duplicate mima exclude
      7de5ef9 [Andrew Or] Merge branch 'master' of github.com:apache/spark into them-rdd-memories
      df47265 [Andrew Or] Fix binary incompatibility
      6d05a81 [Andrew Or] Merge branch 'master' of github.com:apache/spark into them-rdd-memories
      f94f5af [Andrew Or] Update a few comments (minor)
      776aec9 [Andrew Or] Prevent OOM if a single RDD partition is too large
      bbd3eea [Andrew Or] Fix CacheManagerSuite to use Array
      97ea499 [Andrew Or] Change BlockManager interface to use Arrays
      c12f093 [Andrew Or] Add SizeTrackingAppendOnlyBuffer and tests
      ecf30ee7
    • Cheng Lian's avatar
      [SPARK-2410][SQL] Merging Hive Thrift/JDBC server · f6ff2a61
      Cheng Lian authored
      (This is a replacement of #1399, trying to fix potential `HiveThriftServer2` port collision between parallel builds. Please refer to [these comments](https://github.com/apache/spark/pull/1399#issuecomment-50212572) for details.)
      
      JIRA issue: [SPARK-2410](https://issues.apache.org/jira/browse/SPARK-2410)
      
      Merging the Hive Thrift/JDBC server from [branch-1.0-jdbc](https://github.com/apache/spark/tree/branch-1.0-jdbc).
      
      Thanks chenghao-intel for his initial contribution of the Spark SQL CLI.
      
      Author: Cheng Lian <lian.cs.zju@gmail.com>
      
      Closes #1600 from liancheng/jdbc and squashes the following commits:
      
      ac4618b [Cheng Lian] Uses random port for HiveThriftServer2 to avoid collision with parallel builds
      090beea [Cheng Lian] Revert changes related to SPARK-2678, decided to move them to another PR
      21c6cf4 [Cheng Lian] Updated Spark SQL programming guide docs
      fe0af31 [Cheng Lian] Reordered spark-submit options in spark-shell[.cmd]
      199e3fb [Cheng Lian] Disabled MIMA for hive-thriftserver
      1083e9d [Cheng Lian] Fixed failed test suites
      7db82a1 [Cheng Lian] Fixed spark-submit application options handling logic
      9cc0f06 [Cheng Lian] Starts beeline with spark-submit
      cfcf461 [Cheng Lian] Updated documents and build scripts for the newly added hive-thriftserver profile
      061880f [Cheng Lian] Addressed all comments by @pwendell
      7755062 [Cheng Lian] Adapts test suites to spark-submit settings
      40bafef [Cheng Lian] Fixed more license header issues
      e214aab [Cheng Lian] Added missing license headers
      b8905ba [Cheng Lian] Fixed minor issues in spark-sql and start-thriftserver.sh
      f975d22 [Cheng Lian] Updated docs for Hive compatibility and Shark migration guide draft
      3ad4e75 [Cheng Lian] Starts spark-sql shell with spark-submit
      a5310d1 [Cheng Lian] Make HiveThriftServer2 play well with spark-submit
      61f39f4 [Cheng Lian] Starts Hive Thrift server via spark-submit
      2c4c539 [Cheng Lian] Cherry picked the Hive Thrift server
      f6ff2a61
    • Cheng Lian's avatar
      [SPARK-2705][CORE] Fixed stage description in stage info page · 2bbf2353
      Cheng Lian authored
      Stage description should be a `String`, but was changed to an `Option[String]` by mistake:
      
      ![stage-desc-small](https://cloud.githubusercontent.com/assets/230655/3655611/f6d0b0f6-117b-11e4-83ed-71000dcd5009.png)
      
      Author: Cheng Lian <lian.cs.zju@gmail.com>
      
      Closes #1524 from liancheng/fix-stage-desc and squashes the following commits:
      
      3c69327 [Cheng Lian] Fixed stage description object type in Web UI stage table
      2bbf2353
    • Matei Zaharia's avatar
      SPARK-2684: Update ExternalAppendOnlyMap to take an iterator as input · 98570530
      Matei Zaharia authored
      This will decrease object allocation from the "update" closure used in map.changeValue.
      
      Author: Matei Zaharia <matei@databricks.com>
      
      Closes #1607 from mateiz/spark-2684 and squashes the following commits:
      
      b7d89e6 [Matei Zaharia] Add insertAll for Iterables too, and fix some code style
      561fc97 [Matei Zaharia] Update ExternalAppendOnlyMap to take an iterator as input
      98570530
    • Doris Xin's avatar
      [SPARK-2679] [MLLib] Ser/De for Double · 3a69c72e
      Doris Xin authored
      Added a set of serializer/deserializer for Double in _common.py and PythonMLLibAPI in MLLib.
      
      Author: Doris Xin <doris.s.xin@gmail.com>
      
      Closes #1581 from dorx/doubleSerDe and squashes the following commits:
      
      86a85b3 [Doris Xin] Merge branch 'master' into doubleSerDe
      2bfe7a4 [Doris Xin] Removed magic byte
      ad4d0d9 [Doris Xin] removed a space in unit
      a9020bc [Doris Xin] units passed
      7dad9af [Doris Xin] WIP
      3a69c72e
    • Xiangrui Meng's avatar
      [SPARK-2361][MLLIB] Use broadcast instead of serializing data directly into task closure · aaf2b735
      Xiangrui Meng authored
      We saw task serialization problems with large feature dimensions, which can be avoided if we don't serialize data directly into the task closure but use broadcast variables instead. This PR uses broadcast in both training and prediction and adds tests to make sure the task size stays small.
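
      The pattern is roughly the following (a generic sketch, not the actual MLlib diff): broadcast the weights once instead of capturing them in every task closure.

      ```scala
      // Before: `weights` is captured by the map closure and serialized with every task.
      // After: broadcast it once and read it via .value inside the task.
      val weights: Array[Double] = Array.fill(1000)(0.1)
      val bcWeights = sc.broadcast(weights)

      val data = sc.parallelize(Seq(Array.fill(1000)(1.0), Array.fill(1000)(2.0)))
      val predictions = data.map { features =>
        val w = bcWeights.value
        features.zip(w).map { case (x, wi) => x * wi }.sum   // dot product as a stand-in
      }
      ```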
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #1427 from mengxr/broadcast-new and squashes the following commits:
      
      b9a1228 [Xiangrui Meng] style update
      b97c184 [Xiangrui Meng] minimal change to LBFGS
      9ebadcc [Xiangrui Meng] add task size test to RowMatrix
      9427bf0 [Xiangrui Meng] add task size tests to linear methods
      e0a5cf2 [Xiangrui Meng] add task size test to GD
      28a8411 [Xiangrui Meng] add test for NaiveBayes
      380778c [Xiangrui Meng] update KMeans test
      bccab92 [Xiangrui Meng] add task size test to LBFGS
      02103ba [Xiangrui Meng] remove print
      e73d68e [Xiangrui Meng] update tests for k-means
      174cb15 [Xiangrui Meng] use local-cluster for test with a small akka.frameSize
      1928a5a [Xiangrui Meng] add test for KMeans task size
      e00c2da [Xiangrui Meng] use broadcast in GD, KMeans
      010d076 [Xiangrui Meng] modify NaiveBayesModel and GLM to use broadcast
      aaf2b735
    • Matei Zaharia's avatar
      SPARK-2680: Lower spark.shuffle.memoryFraction to 0.2 by default · b547f69b
      Matei Zaharia authored
      Author: Matei Zaharia <matei@databricks.com>
      
      Closes #1593 from mateiz/spark-2680 and squashes the following commits:
      
      3c949c4 [Matei Zaharia] Lower spark.shuffle.memoryFraction to 0.2 by default
      b547f69b
  4. Jul 26, 2014
    • Josh Rosen's avatar
      [SPARK-2601] [PySpark] Fix Py4J error when transforming pickleFiles · ba46bbed
      Josh Rosen authored
      Similar to SPARK-1034, the problem was that Py4J didn’t cope well with the fake ClassTags used in the Java API.  It doesn’t look like there’s any reason why PythonRDD needs to take a ClassTag, since it just ignores the type of the previous RDD, so I removed the type parameter and we no longer pass ClassTags from Python.
      
      Author: Josh Rosen <joshrosen@apache.org>
      
      Closes #1605 from JoshRosen/spark-2601 and squashes the following commits:
      
      b68e118 [Josh Rosen] Fix Py4J error when transforming pickleFiles [SPARK-2601]
      ba46bbed
    • Reynold Xin's avatar
      [SPARK-2704] Name threads in ConnectionManager and mark them as daemon. · 12901643
      Reynold Xin authored
      handleMessageExecutor, handleReadWriteExecutor, and handleConnectExecutor are not marked as daemon and not named. I think there exists some condition in which Spark programs won't terminate because of this.
      
      Stack dump attached in https://issues.apache.org/jira/browse/SPARK-2704
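
      The usual way to get named daemon threads (a generic sketch, not the exact ConnectionManager change):

      ```scala
      import java.util.concurrent.{Executors, ThreadFactory}
      import java.util.concurrent.atomic.AtomicInteger

      // A ThreadFactory that names its threads and marks them as daemon,
      // so the pools never keep the JVM alive on their own.
      def namedDaemonFactory(prefix: String): ThreadFactory = new ThreadFactory {
        private val counter = new AtomicInteger(0)
        override def newThread(r: Runnable): Thread = {
          val t = new Thread(r, s"$prefix-${counter.incrementAndGet()}")
          t.setDaemon(true)
          t
        }
      }

      val handleMessageExecutor =
        Executors.newFixedThreadPool(4, namedDaemonFactory("handle-message-executor"))
      ```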
      
      Author: Reynold Xin <rxin@apache.org>
      
      Closes #1604 from rxin/daemon and squashes the following commits:
      
      98d6a6c [Reynold Xin] [SPARK-2704] Name threads in ConnectionManager and mark them as daemon.
      12901643
    • bpaulin's avatar
      [SPARK-2279] Added emptyRDD method to Java API · c183b92c
      bpaulin authored
      Added emptyRDD method to Java API with tests.
      
      Author: bpaulin <bob@bobpaulin.com>
      
      Closes #1597 from bobpaulin/SPARK-2279 and squashes the following commits:
      
      5ad57c2 [bpaulin] [SPARK-2279] Added emptyRDD method to Java API
      c183b92c
    • Davies Liu's avatar
      [SPARK-2652] [PySpark] Tuning some default configs for PySpark · 75663b57
      Davies Liu authored
      Add several default configs for PySpark, related to serialization in the JVM.
      
      spark.serializer = org.apache.spark.serializer.KryoSerializer
      spark.serializer.objectStreamReset = 100
      spark.rdd.compress = True
      
      This will help to reduce the memory usage during RDD.partitionBy()
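
      For reference, the same settings can be applied explicitly through `SparkConf` (a sketch; these are the defaults the patch wires up on the PySpark side):

      ```scala
      import org.apache.spark.SparkConf

      val conf = new SparkConf()
        .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
        .set("spark.serializer.objectStreamReset", "100")
        .set("spark.rdd.compress", "true")
      ```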
      
      Author: Davies Liu <davies.liu@gmail.com>
      
      Closes #1568 from davies/conf and squashes the following commits:
      
      cd316f1 [Davies Liu] remove duplicated line
      f71a355 [Davies Liu] rebase to master, add spark.rdd.compress = True
      8f63f45 [Davies Liu] Merge branch 'master' into conf
      8bc9f08 [Davies Liu] fix unittest
      c04a83d [Davies Liu] some default configs for PySpark
      75663b57
    • Hossein's avatar
      [SPARK-2696] Reduce default value of spark.serializer.objectStreamReset · 66f26a46
      Hossein authored
      The current default value of spark.serializer.objectStreamReset is 10,000.
      When trying to re-partition a large file (e.g., 500 MB) containing 1 MB records into, say, 64 partitions, the serializer will cache 10,000 x 1 MB x 64 ~= 640 GB, which will cause out-of-memory errors.
      
      This patch sets the default value to a more reasonable default value (100).
      
      Author: Hossein <hossein@databricks.com>
      
      Closes #1595 from falaki/objectStreamReset and squashes the following commits:
      
      650a935 [Hossein] Updated documentation
      1aa0df8 [Hossein] Reduce default value of spark.serializer.objectStreamReset
      66f26a46
    • Josh Rosen's avatar
      [SPARK-1458] [PySpark] Expose sc.version in Java and PySpark · cf3e9fd8
      Josh Rosen authored
      Author: Josh Rosen <joshrosen@apache.org>
      
      Closes #1596 from JoshRosen/spark-1458 and squashes the following commits:
      
      fdbb0bf [Josh Rosen] Add SparkContext.version to Python & Java [SPARK-1458]
      cf3e9fd8
  5. Jul 25, 2014
    • Michael Armbrust's avatar
      [SPARK-2659][SQL] Fix division semantics for hive · 89047912
      Michael Armbrust authored
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #1557 from marmbrus/fixDivision and squashes the following commits:
      
      b85077f [Michael Armbrust] Fix unit tests.
      af98f29 [Michael Armbrust] Change DIV to long type
      0c29ae8 [Michael Armbrust] Fix division semantics for hive
      89047912
    • Reynold Xin's avatar
      Part of [SPARK-2456] Removed some HashMaps from DAGScheduler by storing information in Stage. · 9d8666ca
      Reynold Xin authored
      This is part of the scheduler cleanup/refactoring effort to make the scheduler code easier to maintain.
      
      @kayousterhout @markhamstra please take a look ...
      
      Author: Reynold Xin <rxin@apache.org>
      
      Closes #1561 from rxin/dagSchedulerHashMaps and squashes the following commits:
      
      1c44e15 [Reynold Xin] Clear pending tasks in submitMissingTasks.
      620a0d1 [Reynold Xin] Use filterKeys.
      5b54404 [Reynold Xin] Code review feedback.
      c1e9a1c [Reynold Xin] Removed some HashMaps from DAGScheduler by storing information in Stage.
      9d8666ca