  1. Jul 23, 2015
    • [SPARK-9082] [SQL] [FOLLOW-UP] use `partition` in `PushPredicateThroughProject` · 52ef76de
      Wenchen Fan authored
      a follow up of https://github.com/apache/spark/pull/7446
      
      Author: Wenchen Fan <cloud0fan@outlook.com>
      
      Closes #7607 from cloud-fan/tmp and squashes the following commits:
      
      7106989 [Wenchen Fan] use `partition` in `PushPredicateThroughProject`
      52ef76de
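      A minimal sketch of the refactor's idea in plain Scala (not the actual Catalyst rule; `Pred` is a stand-in for `Expression`): `partition` splits a predicate list by determinism in one pass, where complementary `filter`/`filterNot` calls would traverse it twice.

      ```scala
      // Stand-in for Catalyst's Expression, for illustration only.
      case class Pred(sql: String, deterministic: Boolean)

      val predicates = Seq(
        Pred("a > 1", deterministic = true),
        Pred("rand() < 0.5", deterministic = false))

      // One traversal instead of filter + filterNot.
      val (pushDown, keep) = predicates.partition(_.deterministic)
      // pushDown == Seq(Pred("a > 1", true)); keep == Seq(Pred("rand() < 0.5", false))
      ```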
    • [SPARK-9212] [CORE] upgrade Netty version to 4.0.29.Final · 26ed22ae
      Zhang, Liye authored
      related JIRA: [SPARK-9212](https://issues.apache.org/jira/browse/SPARK-9212) and [SPARK-8101](https://issues.apache.org/jira/browse/SPARK-8101)
      
      Author: Zhang, Liye <liye.zhang@intel.com>
      
      Closes #7562 from liyezhang556520/SPARK-9212 and squashes the following commits:
      
      1917729 [Zhang, Liye] SPARK-9212 upgrade Netty version to 4.0.29.Final
      26ed22ae
    • Revert "[SPARK-8579] [SQL] support arbitrary object in UnsafeRow" · fb36397b
      Reynold Xin authored
      Reverts ObjectPool. As it stands, it has a few problems:
      
      1. ObjectPool doesn't work with spilling and memory accounting.
      2. I don't think an object pool is what we want to support in the long run, since it essentially goes back to unmanaged memory, creates GC pressure, and makes it hard to account for the total in-memory size.
      3. The ObjectPool patch removed the specialized getters for strings and binary, and as a result actually introduced branches when reading non-primitive data types.
      
      If we do want to support arbitrary user defined types in the future, I think we can just add an object array in UnsafeRow, rather than relying on indirect memory addressing through a pool. We also need to pick execution strategies that are optimized for those, rather than keeping a lot of unserialized JVM objects in memory during aggregation.
      
      This is probably the hardest thing I had to revert in Spark, due to recent patches that also change the same part of the code. Would be great to get a careful look.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #7591 from rxin/revert-object-pool and squashes the following commits:
      
      01db0bc [Reynold Xin] Scala style.
      eda89fc [Reynold Xin] Fixed describe.
      2967118 [Reynold Xin] Fixed accessor for JoinedRow.
      e3294eb [Reynold Xin] Merge branch 'master' into revert-object-pool
      657855f [Reynold Xin] Temp commit.
      c20f2c8 [Reynold Xin] Style fix.
      fe37079 [Reynold Xin] Revert "[SPARK-8579] [SQL] support arbitrary object in UnsafeRow"
      fb36397b
    • [SPARK-9266] Prevent "managed memory leak detected" exception from masking original exception · ac3ae0f2
      Josh Rosen authored
      When a task fails with an exception and also fails to properly clean up its managed memory, the `spark.unsafe.exceptionOnMemoryLeak` memory leak detection mechanism's exceptions will mask the original exception that caused the task to fail. We should throw the memory leak exception only if no other exception occurred.
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #7603 from JoshRosen/SPARK-9266 and squashes the following commits:
      
      c268cb5 [Josh Rosen] Merge remote-tracking branch 'origin/master' into SPARK-9266
      c1f0167 [Josh Rosen] Fix the error masking problem
      448eae8 [Josh Rosen] Add regression test
      ac3ae0f2
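      A rough sketch of the pattern described above (plain Scala, not the actual Executor code; `leakedBytes` is a hypothetical accounting hook): the leak check is only escalated to an exception when the task body itself succeeded, so an original failure is never masked.

      ```scala
      def runWithLeakCheck(body: => Unit)(leakedBytes: () => Long, throwOnLeak: Boolean): Unit = {
        var originalFailure: Throwable = null
        try {
          body
        } catch {
          case t: Throwable =>
            originalFailure = t
            throw t
        } finally {
          val leaked = leakedBytes()
          if (leaked > 0) {
            if (throwOnLeak && originalFailure == null) {
              // Nothing else went wrong, so it is safe to surface the leak.
              throw new IllegalStateException(s"Managed memory leak detected; size = $leaked bytes")
            } else {
              // A task failure is already propagating; log the leak instead of masking it.
              System.err.println(s"WARN: managed memory leak detected; size = $leaked bytes")
            }
          }
        }
      }
      ```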
    • [SPARK-8695] [CORE] [MLLIB] TreeAggregation shouldn't be triggered when it... · b983d493
      Perinkulam I. Ganesh authored
      [SPARK-8695] [CORE] [MLLIB] TreeAggregation shouldn't be triggered when it doesn't save wall-clock time.
      
      Author: Perinkulam I. Ganesh <gip@us.ibm.com>
      
      Closes #7397 from piganesh/SPARK-8695 and squashes the following commits:
      
      041620c [Perinkulam I. Ganesh] [SPARK-8695][CORE][MLlib] TreeAggregation shouldn't be triggered when it doesn't save wall-clock time.
      9ad067c [Perinkulam I. Ganesh] [SPARK-8695] [core] [WIP] TreeAggregation shouldn't be triggered for 5 partitions
      a6fed07 [Perinkulam I. Ganesh] [SPARK-8695] [core] [WIP] TreeAggregation shouldn't be triggered for 5 partitions
      b983d493
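      For context, `RDD.treeAggregate` merges partial results through intermediate stages instead of sending every partition's result straight to the driver; the change above is about skipping that extra stage when the partition count is too small for it to save wall-clock time. A typical call (`data` is assumed to be an `RDD[Double]`):

      ```scala
      val total = data.treeAggregate(0.0)(
        seqOp = (acc, x) => acc + x,   // fold values within a partition
        combOp = (a, b) => a + b,      // merge partial sums, tree-style
        depth = 2)                     // levels in the combine tree
      ```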
    • [SPARK-8935] [SQL] Implement code generation for all casts · 6d0d8b40
      Yijie Shen authored
      JIRA: https://issues.apache.org/jira/browse/SPARK-8935
      
      Author: Yijie Shen <henry.yijieshen@gmail.com>
      
      Closes #7365 from yjshen/cast_codegen and squashes the following commits:
      
      ef6e8b5 [Yijie Shen] getColumn and setColumn in struct cast, autounboxing in array and map
      eaece18 [Yijie Shen] remove null case in cast code gen
      fd7eba4 [Yijie Shen] resolve comments
      80378a5 [Yijie Shen] the missing self cast
      611d66e [Yijie Shen] Bug fix: NullType & primitive object unboxing
      6d5c0fe [Yijie Shen] rebase and add Interval codegen
      9424b65 [Yijie Shen] tiny style fix
      4a1c801 [Yijie Shen] remove CodeHolder class, use function instead.
      3f5df88 [Yijie Shen] CodeHolder for complex dataTypes
      c286f13 [Yijie Shen] moved all the cast code into class body
      4edfd76 [Yijie Shen] [WIP] finished primitive part
      6d0d8b40
    • [SPARK-7254] [MLLIB] Run PowerIterationClustering directly on graph · 825ab1e4
      Liang-Chi Hsieh authored
      JIRA: https://issues.apache.org/jira/browse/SPARK-7254
      
      Author: Liang-Chi Hsieh <viirya@appier.com>
      Author: Liang-Chi Hsieh <viirya@gmail.com>
      
      Closes #6054 from viirya/pic_on_graph and squashes the following commits:
      
      8b87b81 [Liang-Chi Hsieh] Fix scala style.
      a22fb8b [Liang-Chi Hsieh] For comment.
      ef565a0 [Liang-Chi Hsieh] Fix indentation.
      d249aa1 [Liang-Chi Hsieh] Merge remote-tracking branch 'upstream/master' into pic_on_graph
      82d7351 [Liang-Chi Hsieh] Run PowerIterationClustering directly on graph.
      825ab1e4
    • [SPARK-9268] [ML] Removed varargs annotation from Params.setDefault taking multiple params · 410dd41c
      Joseph K. Bradley authored
      Removed varargs annotation from Params.setDefault taking multiple params.
      
      Though varargs is technically correct, it often forces developers to run a clean assembly rather than an incremental (non-clean) one, which is a nuisance during development.
      
      CC: mengxr
      
      Author: Joseph K. Bradley <joseph@databricks.com>
      
      Closes #7604 from jkbradley/params-setdefault-varargs and squashes the following commits:
      
      6016dc6 [Joseph K. Bradley] removed varargs annotation from Params.setDefault taking multiple params
      410dd41c
  2. Jul 22, 2015
    • [SPARK-8364] [SPARKR] Add crosstab to SparkR DataFrames · 2f5cbd86
      Xiangrui Meng authored
      Add `crosstab` to SparkR DataFrames, which takes two column names and returns a local R data.frame. This is similar to `table` in R. However, `table` in SparkR is used for loading SQL tables as DataFrames. The return type is data.frame instead of table so that `crosstab` stays compatible with Scala/Python.

      I couldn't run the R tests successfully on my local machine; many unit tests failed. So let's try Jenkins.
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #7318 from mengxr/SPARK-8364 and squashes the following commits:
      
      d75e894 [Xiangrui Meng] fix tests
      53f6ddd [Xiangrui Meng] fix tests
      f1348d6 [Xiangrui Meng] update test
      47cb088 [Xiangrui Meng] Merge remote-tracking branch 'apache/master' into SPARK-8364
      5621262 [Xiangrui Meng] first version without test
      2f5cbd86
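      The R function mirrors the existing Scala/Python API on `DataFrameStatFunctions`; a quick Scala sketch (`df` and the column names are placeholders):

      ```scala
      // One row per distinct "department", one column per distinct "gender", cells are co-occurrence counts.
      val counts = df.stat.crosstab("department", "gender")
      counts.show()
      ```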
    • [SPARK-9144] Remove DAGScheduler.runLocallyWithinThread and spark.localExecution.enabled · b217230f
      Josh Rosen authored
      Spark has an option called spark.localExecution.enabled; according to the docs:
      
      > Enables Spark to run certain jobs, such as first() or take() on the driver, without sending tasks to the cluster. This can make certain jobs execute very quickly, but may require shipping a whole partition of data to the driver.
      
      This feature ends up adding quite a bit of complexity to DAGScheduler, especially in the runLocallyWithinThread method, but as far as I know nobody uses this feature (I searched the mailing list and haven't seen any recent mentions of the configuration nor stacktraces including the runLocally method). As a step towards scheduler complexity reduction, I propose that we remove this feature and all code related to it for Spark 1.5.
      
      This pull request simply brings #7484 up to date.
      
      Author: Josh Rosen <joshrosen@databricks.com>
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #7585 from rxin/remove-local-exec and squashes the following commits:
      
      84bd10e [Reynold Xin] Python fix.
      1d9739a [Reynold Xin] Merge pull request #7484 from JoshRosen/remove-localexecution
      eec39fa [Josh Rosen] Remove allowLocal(); deprecate user-facing uses of it.
      b0835dc [Josh Rosen] Remove local execution code in DAGScheduler
      8975d96 [Josh Rosen] Remove local execution tests.
      ffa8c9b [Josh Rosen] Remove documentation for configuration
      b217230f
    • [SPARK-9262][build] Treat Scala compiler warnings as errors · d71a13f4
      Reynold Xin authored
      I've seen a few cases in the past few weeks where the compiler throws warnings that are caused by legitimate bugs. This patch upgrades warnings to errors, except deprecation warnings.
      
      Note that ideally we should be able to mark deprecation warnings as errors as well. However, due to the lack of ability to suppress individual warning messages in the Scala compiler, we cannot do that (since we do need to access deprecated APIs in Hadoop).
      
      Most of the work was done by ericl.
      
      Author: Reynold Xin <rxin@databricks.com>
      Author: Eric Liang <ekl@databricks.com>
      
      Closes #7598 from rxin/warnings and squashes the following commits:
      
      beb311b [Reynold Xin] Fixed tests.
      542c031 [Reynold Xin] Fixed one more warning.
      87c354a [Reynold Xin] Fixed all non-deprecation warnings.
      78660ac [Eric Liang] first effort to fix warnings
      d71a13f4
    • [SPARK-8484] [ML] Added TrainValidationSplit for hyper-parameter tuning. · a721ee52
      martinzapletal authored
      - [X] Added TrainValidationSplit for hyper-parameter tuning. It randomly splits the input dataset into train and validation sets and uses an evaluation metric on the validation set to select the best model. It is similar to CrossValidator, but simpler and less expensive.
      - [X] Simplified replacement of https://github.com/apache/spark/pull/6996
      
      Author: martinzapletal <zapletal-martin@email.cz>
      
      Closes #7337 from zapletal-martin/SPARK-8484-TrainValidationSplit and squashes the following commits:
      
      cafc949 [martinzapletal] Review comments https://github.com/apache/spark/pull/7337.
      511b398 [martinzapletal] Merge remote-tracking branch 'upstream/master' into SPARK-8484-TrainValidationSplit
      f4fc9c4 [martinzapletal] SPARK-8484 Resolved feedback to https://github.com/apache/spark/pull/7337
      00c4f5a [martinzapletal] SPARK-8484. Styling.
      d699506 [martinzapletal] SPARK-8484. Styling.
      93ed2ee [martinzapletal] Styling.
      3bc1853 [martinzapletal] SPARK-8484. Styling.
      2aa6f43 [martinzapletal] SPARK-8484. Added TrainValidationSplit for hyper-parameter tuning. It randomly splits the input dataset into train and validation and use evaluation metric on the validation set to select the best model.
      21662eb [martinzapletal] SPARK-8484. Added TrainValidationSplit for hyper-parameter tuning. It randomly splits the input dataset into train and validation and use evaluation metric on the validation set to select the best model.
      a721ee52
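      A sketch of how the new API is intended to be used, following the `ml.tuning` conventions (the estimator, evaluator, grid, and the `training` DataFrame are placeholders chosen for illustration):

      ```scala
      import org.apache.spark.ml.evaluation.RegressionEvaluator
      import org.apache.spark.ml.regression.LinearRegression
      import org.apache.spark.ml.tuning.{ParamGridBuilder, TrainValidationSplit}

      val lr = new LinearRegression()
      val paramGrid = new ParamGridBuilder()
        .addGrid(lr.regParam, Array(0.01, 0.1))
        .addGrid(lr.elasticNetParam, Array(0.0, 0.5, 1.0))
        .build()

      val tvs = new TrainValidationSplit()
        .setEstimator(lr)
        .setEvaluator(new RegressionEvaluator)
        .setEstimatorParamMaps(paramGrid)
        .setTrainRatio(0.8)            // one 80/20 split, unlike CrossValidator's k folds

      val model = tvs.fit(training)    // training: DataFrame with "features" and "label" columns
      ```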
    • [SPARK-9223] [PYSPARK] [MLLIB] Support model save/load in LDA · 5307c9d3
      MechCoder authored
      Since save / load has been merged in LDA, it takes no time to write the wrappers in Python as well.
      
      Author: MechCoder <manojkumarsivaraj334@gmail.com>
      
      Closes #7587 from MechCoder/python_lda_save_load and squashes the following commits:
      
      c8e4ea7 [MechCoder] [SPARK-9223] [PySpark] Support model save/load in LDA
      5307c9d3
    • [SPARK-9180] fix spark-shell to accept --name option · 430cd781
      Kenichi Maehashi authored
      This patch fixes [[SPARK-9180]](https://issues.apache.org/jira/browse/SPARK-9180).
      Users can now set the app name of spark-shell using `spark-shell --name "whatever"`.
      
      Author: Kenichi Maehashi <webmaster@kenichimaehashi.com>
      
      Closes #7512 from kmaehashi/fix-spark-shell-app-name and squashes the following commits:
      
      e24991a [Kenichi Maehashi] use setIfMissing instead of setAppName
      18aa4ad [Kenichi Maehashi] fix spark-shell to accept --name option
      430cd781
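      A minimal sketch of the `setIfMissing` idea mentioned in the commits (illustrative, not the literal repl change): the shell's default app name should only apply when the user has not already supplied one via `--name`.

      ```scala
      import org.apache.spark.SparkConf

      val conf = new SparkConf()                            // picks up spark.app.name if --name set it
      conf.setIfMissing("spark.app.name", "Spark shell")    // fall back to the default only when unset
      ```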
    • [SPARK-8975] [STREAMING] Adds a mechanism to send a new rate from the driver to the block generator · 798dff7b
      Iulian Dragos authored
      First step for [SPARK-7398](https://issues.apache.org/jira/browse/SPARK-7398).
      
      tdas huitseeker
      
      Author: Iulian Dragos <jaguarul@gmail.com>
      Author: François Garillot <francois@garillot.net>
      
      Closes #7471 from dragos/topic/streaming-bp/dynamic-rate and squashes the following commits:
      
      8941cf9 [Iulian Dragos] Renames and other nitpicks.
      162d9e5 [Iulian Dragos] Use Reflection for accessing truly private `executor` method and use the listener bus to know when receivers have registered (`onStart` is called before receivers have registered, leading to flaky behavior).
      210f495 [Iulian Dragos] Revert "Added a few tests that measure the receiver’s rate."
      0c51959 [Iulian Dragos] Added a few tests that measure the receiver’s rate.
      261a051 [Iulian Dragos] - removed field to hold the current rate limit in rate limiter - made rate limit a Long and default to Long.MaxValue (consequence of the above) - removed custom `waitUntil` and replaced it by `eventually`
      cd1397d [Iulian Dragos] Add a test for the propagation of a new rate limit from driver to receivers.
      6369b30 [Iulian Dragos] Merge pull request #15 from huitseeker/SPARK-8975
      d15de42 [François Garillot] [SPARK-8975][Streaming] Adds Ratelimiter unit tests w.r.t. spark.streaming.receiver.maxRate
      4721c7d [François Garillot] [SPARK-8975][Streaming] Add a mechanism to send a new rate from the driver to the block generator
      798dff7b
    • [SPARK-9244] Increase some memory defaults · fe26584a
      Matei Zaharia authored
      There are a few memory limits that people hit often and that we could
      make higher, especially now that memory sizes have grown.
      
      - spark.akka.frameSize: This defaults to 10 but is often hit for map
        output statuses in large shuffles. This memory is not fully allocated
        up-front, so we can just make it larger and still not affect jobs
        that never send a status that large. We increase it to 128.

      - spark.executor.memory: Defaults to 512m, which is really small. We
        increase it to 1g.
      
      Author: Matei Zaharia <matei@databricks.com>
      
      Closes #7586 from mateiz/configs and squashes the following commits:
      
      ce0038a [Matei Zaharia] [SPARK-9244] Increase some memory defaults
      fe26584a
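      Jobs that need different values can still set these configs explicitly; a small sketch (the values shown are the new defaults):

      ```scala
      import org.apache.spark.SparkConf

      val conf = new SparkConf()
        .set("spark.akka.frameSize", "128")    // MB; previously defaulted to 10
        .set("spark.executor.memory", "1g")    // previously defaulted to 512m
      ```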
    • [SPARK-8536] [MLLIB] Generalize OnlineLDAOptimizer to asymmetric document-topic Dirichlet priors · 1aca9c13
      Feynman Liang authored
      Modify `LDA` to take asymmetric document-topic prior distributions and `OnlineLDAOptimizer` to use the asymmetric prior during variational inference.
      
      This PR only generalizes `OnlineLDAOptimizer` and the associated `LocalLDAModel`; `EMLDAOptimizer` and `DistributedLDAModel` still only support symmetric `alpha` (checked during `EMLDAOptimizer.initialize`).
      
      Author: Feynman Liang <fliang@databricks.com>
      
      Closes #7575 from feynmanliang/SPARK-8536-LDA-asymmetric-priors and squashes the following commits:
      
      af8fbb7 [Feynman Liang] Fix merge errors
      ef5821d [Feynman Liang] Merge remote-tracking branch 'apache/master' into SPARK-8536-LDA-asymmetric-priors
      58f1d7b [Feynman Liang] Fix from review feedback
      a6dcf70 [Feynman Liang] Change docConcentration interface and move LDAOptimizer validation to initialize, add sad path tests
      72038ff [Feynman Liang] Add tests referenced against gensim
      d4284fa [Feynman Liang] Generalize OnlineLDA to asymmetric priors, no tests
      1aca9c13
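      A hedged sketch of the generalized API, assuming the `Vector`-valued `setDocConcentration` described above (one alpha entry per topic) together with the online optimizer; `corpus` is a placeholder:

      ```scala
      import org.apache.spark.mllib.clustering.{LDA, OnlineLDAOptimizer}
      import org.apache.spark.mllib.linalg.Vectors

      val lda = new LDA()
        .setK(3)
        .setOptimizer(new OnlineLDAOptimizer())
        .setDocConcentration(Vectors.dense(1.0, 0.5, 2.0))  // asymmetric document-topic prior, length == k
      val model = lda.run(corpus)   // corpus: RDD[(Long, Vector)] of (docId, termCounts)
      ```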
    • [SPARK-4366] [SQL] [Follow-up] Fix SqlParser compiling warning. · cf21d05f
      Yin Huai authored
      Author: Yin Huai <yhuai@databricks.com>
      
      Closes #7588 from yhuai/SPARK-4366-update1 and squashes the following commits:
      
      25f5f36 [Yin Huai] Fix SqlParser Warning.
      cf21d05f
    • [SPARK-9224] [MLLIB] OnlineLDA Performance Improvements · 8486cd85
      Feynman Liang authored
      In-place updates, fewer transposes, and vectorized operations in the OnlineLDA implementation.
      
      Author: Feynman Liang <fliang@databricks.com>
      
      Closes #7454 from feynmanliang/OnlineLDA-perf-improvements and squashes the following commits:
      
      78b0f5a [Feynman Liang] Make in-place variables vals, fix BLAS error
      7f62a55 [Feynman Liang] --amend
      c62cb1e [Feynman Liang] Outer product for stats, revert Range slicing
      aead650 [Feynman Liang] Range slice, in-place update, reduce transposes
      8486cd85
    • [SPARK-9024] Unsafe HashJoin/HashOuterJoin/HashSemiJoin · e0b7ba59
      Davies Liu authored
      This PR introduces unsafe versions (using UnsafeRow) of HashJoin, HashOuterJoin, and HashSemiJoin, covering both the broadcast and shuffle variants (except FullOuterJoin, which is better implemented using SortMergeJoin).

      It uses a HashMap to store UnsafeRows right now; this will change to BytesToBytesMap for better performance (in another PR).
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #7480 from davies/unsafe_join and squashes the following commits:
      
      6294b1e [Davies Liu] fix projection
      10583f1 [Davies Liu] Merge branch 'master' of github.com:apache/spark into unsafe_join
      dede020 [Davies Liu] fix test
      84c9807 [Davies Liu] address comments
      a05b4f6 [Davies Liu] support UnsafeRow in LeftSemiJoinBNL and BroadcastNestedLoopJoin
      611d2ed [Davies Liu] Merge branch 'master' of github.com:apache/spark into unsafe_join
      9481ae8 [Davies Liu] return UnsafeRow after join()
      ca2b40f [Davies Liu] revert unrelated change
      68f5cd9 [Davies Liu] Merge branch 'master' of github.com:apache/spark into unsafe_join
      0f4380d [Davies Liu] ada a comment
      69e38f5 [Davies Liu] Merge branch 'master' of github.com:apache/spark into unsafe_join
      1a40f02 [Davies Liu] refactor
      ab1690f [Davies Liu] address comments
      60371f2 [Davies Liu] use UnsafeRow in SemiJoin
      a6c0b7d [Davies Liu] Merge branch 'master' of github.com:apache/spark into unsafe_join
      184b852 [Davies Liu] fix style
      6acbb11 [Davies Liu] fix tests
      95d0762 [Davies Liu] remove println
      bea4a50 [Davies Liu] Unsafe HashJoin
      e0b7ba59
    • [SPARK-9165] [SQL] codegen for CreateArray, CreateStruct and CreateNamedStruct · 86f80e2b
      Yijie Shen authored
      JIRA: https://issues.apache.org/jira/browse/SPARK-9165
      
      Author: Yijie Shen <henry.yijieshen@gmail.com>
      
      Closes #7537 from yjshen/array_struct_codegen and squashes the following commits:
      
      3a6dce6 [Yijie Shen] use infix notion in createArray test
      5e90f0a [Yijie Shen] resolve comments: classOf
      39cefb8 [Yijie Shen] codegen for createArray createStruct & createNamedStruct
      86f80e2b
    • [SPARK-9082] [SQL] Filter using non-deterministic expressions should not be pushed down · 76520955
      Wenchen Fan authored
      Author: Wenchen Fan <cloud0fan@outlook.com>
      
      Closes #7446 from cloud-fan/filter and squashes the following commits:
      
      330021e [Wenchen Fan] add exists to tree node
      2cab68c [Wenchen Fan] more enhance
      949be07 [Wenchen Fan] push down part of predicate if possible
      3912f84 [Wenchen Fan] address comments
      8ce15ca [Wenchen Fan] fix bug
      557158e [Wenchen Fan] Filter using non-deterministic expressions should not be pushed down
      76520955
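      A small illustration of why the rule has to stop at non-deterministic predicates: moving a predicate like `rand() < 0.5` changes where and how often the expression is evaluated, which can change the query's result. (`df` and the column names are placeholders.)

      ```scala
      import org.apache.spark.sql.functions.{col, rand}

      val sampled = df
        .select(col("id"), (col("price") * 1.1).as("adjusted"))
        .filter(rand(42) < 0.5)   // must stay above the Project; only deterministic predicates may be pushed down
      ```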
    • [SPARK-9254] [BUILD] [HOTFIX] sbt-launch-lib.bash should support HTTP/HTTPS redirection · b55a36bc
      Cheng Lian authored
      Target file(s) can be hosted on CDN nodes. HTTP/HTTPS redirection must be supported to download these files.
      
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #7597 from liancheng/spark-9254 and squashes the following commits:
      
      fd266ca [Cheng Lian] Uses `--fail' to make curl return non-zero value and remove garbage output when the download fails
      a7cbfb3 [Cheng Lian] Supports HTTP/HTTPS redirection
      b55a36bc
    • [SPARK-4233] [SPARK-4367] [SPARK-3947] [SPARK-3056] [SQL] Aggregation Improvement · c03299a1
      Yin Huai authored
      This is the first PR for the aggregation improvement, which is tracked by https://issues.apache.org/jira/browse/SPARK-4366 (umbrella JIRA). This PR contains work for its subtasks, SPARK-3056, SPARK-3947, SPARK-4233, and SPARK-4367.
      
      This PR introduces a new code path for evaluating aggregate functions. This code path is guarded by `spark.sql.useAggregate2` and by default the value of this flag is true.
      
      This new code path contains:
      * A new aggregate function interface (`AggregateFunction2`) and 7 built-in aggregate functions based on this new interface (`AVG`, `COUNT`, `FIRST`, `LAST`, `MAX`, `MIN`, `SUM`)
      * A UDAF interface (`UserDefinedAggregateFunction`) based on the new code path and two example UDAFs (`MyDoubleAvg` and `MyDoubleSum`).
      * A sort-based aggregate operator (`Aggregate2Sort`) for the new aggregate function interface.
      * A sort-based aggregate operator (`FinalAndCompleteAggregate2Sort`) for distinct aggregations (for distinct aggregations the query plan will use `Aggregate2Sort` and `FinalAndCompleteAggregate2Sort` together).
      
      With this change, when `spark.sql.useAggregate2` is `true`, the flow of compiling an aggregation query is:
      1. Our analyzer looks up functions and returns aggregate functions built on the old aggregate function interface.
      2. When our planner is compiling the physical plan, it tries to convert all aggregate functions to ones built on the new interface. The planner will fall back to the old code path if any of the following conditions is true:
      * code-gen is disabled.
      * there is any function that cannot be converted (right now, Hive UDAFs).
      * the schema of grouping expressions contain any complex data type.
      * There are multiple distinct columns.
      
      Right now, the new code path handles a single distinct column in the query (you can have multiple aggregate functions using that distinct column). For a query that has an aggregate function with DISTINCT plus regular aggregate functions, the generated plan will do partial aggregations for those regular aggregate functions.
      
      Thanks chenghao-intel for his initial work on it.
      
      Author: Yin Huai <yhuai@databricks.com>
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #7458 from yhuai/UDAF and squashes the following commits:
      
      7865f5e [Yin Huai] Put the catalyst expression in the comment of the generated code for it.
      b04d6c8 [Yin Huai] Remove unnecessary change.
      f1d5901 [Yin Huai] Merge remote-tracking branch 'upstream/master' into UDAF
      35b0520 [Yin Huai] Use semanticEquals to replace grouping expressions in the output of the aggregate operator.
      3b43b24 [Yin Huai] bug fix.
      00eb298 [Yin Huai] Make it compile.
      a3ca551 [Yin Huai] Merge remote-tracking branch 'upstream/master' into UDAF
      e0afca3 [Yin Huai] Gracefully fallback to old aggregation code path.
      8a8ac4a [Yin Huai] Merge remote-tracking branch 'upstream/master' into UDAF
      88c7d4d [Yin Huai] Enable spark.sql.useAggregate2 by default for testing purpose.
      dc96fd1 [Yin Huai] Many updates:
      85c9c4b [Yin Huai] newline.
      43de3de [Yin Huai] Merge remote-tracking branch 'upstream/master' into UDAF
      c3614d7 [Yin Huai] Handle single distinct column.
      68b8ee9 [Yin Huai] Support single distinct column set. WIP
      3013579 [Yin Huai] Format.
      d678aee [Yin Huai] Remove AggregateExpressionSuite.scala since our built-in aggregate functions will be based on AlgebraicAggregate and we need to have another way to test it.
      e243ca6 [Yin Huai] Add aggregation iterators.
      a101960 [Yin Huai] Change MyJavaUDAF to MyDoubleSum.
      594cdf5 [Yin Huai] Change existing AggregateExpression to AggregateExpression1 and add an AggregateExpression as the common interface for both AggregateExpression1 and AggregateExpression2.
      380880f [Yin Huai] Merge remote-tracking branch 'upstream/master' into UDAF
      0a827b3 [Yin Huai] Add comments and doc. Move some classes to the right places.
      a19fea6 [Yin Huai] Add UDAF interface.
      262d4c4 [Yin Huai] Make it compile.
      b2e358e [Yin Huai] Merge remote-tracking branch 'upstream/master' into UDAF
      6edb5ac [Yin Huai] Format update.
      70b169c [Yin Huai] Remove groupOrdering.
      4721936 [Yin Huai] Add CheckAggregateFunction to extendedCheckRules.
      d821a34 [Yin Huai] Cleanup.
      32aea9c [Yin Huai] Merge remote-tracking branch 'upstream/master' into UDAF
      5b46d41 [Yin Huai] Bug fix.
      aff9534 [Yin Huai] Make Aggregate2Sort work with both algebraic AggregateFunctions and non-algebraic AggregateFunctions.
      2857b55 [Yin Huai] Merge remote-tracking branch 'upstream/master' into UDAF
      4435f20 [Yin Huai] Add ConvertAggregateFunction to HiveContext's analyzer.
      1b490ed [Michael Armbrust] make hive test
      8cfa6a9 [Michael Armbrust] add test
      1b0bb3f [Yin Huai] Do not bind references in AlgebraicAggregate and use code gen for all places.
      072209f [Yin Huai] Bug fix: Handle expressions in grouping columns that are not attribute references.
      f7d9e54 [Michael Armbrust] Merge remote-tracking branch 'apache/master' into UDAF
      39ee975 [Yin Huai] Code cleanup: Remove unnecesary AttributeReferences.
      b7720ba [Yin Huai] Add an analysis rule to convert aggregate function to the new version.
      5c00f3f [Michael Armbrust] First draft of codegen
      6bbc6ba [Michael Armbrust] now with correct answers\!
      f7996d0 [Michael Armbrust] Add AlgebraicAggregate
      dded1c5 [Yin Huai] wip
      c03299a1
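      A minimal sketch of the new UDAF interface in the spirit of the `MyDoubleSum` example, assuming the `org.apache.spark.sql.expressions` package where `UserDefinedAggregateFunction` is documented for 1.5:

      ```scala
      import org.apache.spark.sql.Row
      import org.apache.spark.sql.expressions.{MutableAggregationBuffer, UserDefinedAggregateFunction}
      import org.apache.spark.sql.types._

      class MyDoubleSum extends UserDefinedAggregateFunction {
        def inputSchema: StructType = StructType(StructField("value", DoubleType) :: Nil)
        def bufferSchema: StructType = StructType(StructField("sum", DoubleType) :: Nil)
        def dataType: DataType = DoubleType
        def deterministic: Boolean = true

        def initialize(buffer: MutableAggregationBuffer): Unit = { buffer(0) = 0.0 }

        def update(buffer: MutableAggregationBuffer, input: Row): Unit = {
          if (!input.isNullAt(0)) buffer(0) = buffer.getDouble(0) + input.getDouble(0)
        }

        def merge(buffer1: MutableAggregationBuffer, buffer2: Row): Unit = {
          buffer1(0) = buffer1.getDouble(0) + buffer2.getDouble(0)
        }

        def evaluate(buffer: Row): Any = buffer.getDouble(0)
      }

      // Usage sketch: df.groupBy("key").agg(new MyDoubleSum()(df("value")))
      ```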
    • [SPARK-9232] [SQL] Duplicate code in JSONRelation · f4785f5b
      Andrew Or authored
      Author: Andrew Or <andrew@databricks.com>
      
      Closes #7576 from andrewor14/clean-up-json-relation and squashes the following commits:
      
      ea80803 [Andrew Or] Clean up duplicate code
      f4785f5b
    • [SPARK-9121] [SPARKR] Get rid of the warnings about `no visible global... · 63f4bcc7
      Yu ISHIKAWA authored
      [SPARK-9121] [SPARKR] Get rid of the warnings about `no visible global function definition` in SparkR
      
      [[SPARK-9121] Get rid of the warnings about `no visible global function definition` in SparkR - ASF JIRA](https://issues.apache.org/jira/browse/SPARK-9121)
      
      ## The Result of `dev/lint-r`
      [The result of lint-r for SPARK-9121 at the revision:1ddd0f2f when I have sent a PR](https://gist.github.com/yu-iskw/6f55953425901725edf6)
      
      Author: Yu ISHIKAWA <yuu.ishikawa@gmail.com>
      
      Closes #7567 from yu-iskw/SPARK-9121 and squashes the following commits:
      
      c8cfd63 [Yu ISHIKAWA] Fix the typo
      b1f19ed [Yu ISHIKAWA] Add a validate statement for local SparkR
      1a03987 [Yu ISHIKAWA] Load the `testthat` package in `dev/lint-r.R`, instead of using the full path of function.
      3a5e0ab [Yu ISHIKAWA] [SPARK-9121][SparkR] Get rid of the warnings about `no visible global function definition` in SparkR
      63f4bcc7
  3. Jul 21, 2015
    • [SPARK-9154][SQL] Rename formatString to format_string. · a4c83cb1
      Reynold Xin authored
      Also make format_string the canonical form, rather than printf.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #7579 from rxin/format_strings and squashes the following commits:
      
      53ee54f [Reynold Xin] Fixed unit tests.
      52357e1 [Reynold Xin] Add format_string alias.
      b40a42a [Reynold Xin] [SPARK-9154][SQL] Rename formatString to format_string.
      a4c83cb1
    • [SPARK-9154] [SQL] codegen StringFormat · d4c7a7a3
      Tarek Auel authored
      Jira: https://issues.apache.org/jira/browse/SPARK-9154
      
      Fixes a bug from #7546.

      marmbrus I can't reopen the other PR because I didn't close it. Can you trigger Jenkins?
      
      Author: Tarek Auel <tarek.auel@googlemail.com>
      
      Closes #7571 from tarekauel/SPARK-9154 and squashes the following commits:
      
      dcae272 [Tarek Auel] [SPARK-9154][SQL] build fix
      1487602 [Tarek Auel] Merge remote-tracking branch 'upstream/master' into SPARK-9154
      f512c5f [Tarek Auel] [SPARK-9154][SQL] build fix
      a943d3e [Tarek Auel] [SPARK-9154] implicit input cast, added tests for null, support for null primitives
      10b4de8 [Tarek Auel] [SPARK-9154][SQL] codegen removed fallback trait
      cd8322b [Tarek Auel] [SPARK-9154][SQL] codegen string format
      086caba [Tarek Auel] [SPARK-9154][SQL] codegen string format
      d4c7a7a3
    • [SPARK-9206] [SQL] Fix HiveContext classloading for GCS connector. · c07838b5
      Dennis Huo authored
      IsolatedClientLoader.isSharedClass includes all of com.google.\*, presumably
      for Guava, protobuf, and/or other shared Google libraries, but needs to
      count com.google.cloud.\* as "hive classes" when determining which ClassLoader
      to use. Otherwise, things like HiveContext.parquetFile will throw a
      ClassCastException when fs.defaultFS is set to a Google Cloud Storage (gs://)
      path. On StackOverflow: http://stackoverflow.com/questions/31478955
      
      EDIT: Adding yhuai who worked on the relevant classloading isolation pieces.
      
      Author: Dennis Huo <dhuo@google.com>
      
      Closes #7549 from dennishuo/dhuo-fix-hivecontext-gcs and squashes the following commits:
      
      1f8db07 [Dennis Huo] Fix HiveContext classloading for GCS connector.
      c07838b5
    • [SPARK-8906][SQL] Move all internal data source classes into execution.datasources. · 60c0ce13
      Reynold Xin authored
      This way, the sources package contains only public facing interfaces.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #7565 from rxin/move-ds and squashes the following commits:
      
      7661aff [Reynold Xin] Mima
      9d5196a [Reynold Xin] Rearranged imports.
      3dd7174 [Reynold Xin] [SPARK-8906][SQL] Move all internal data source classes into execution.datasources.
      60c0ce13
    • [SPARK-8357] Fix unsafe memory leak on empty inputs in GeneratedAggregate · 9ba7c64d
      navis.ryu authored
      This patch fixes a managed memory leak in GeneratedAggregate.  The leak occurs when the unsafe aggregation path is used to perform grouped aggregation on an empty input; in this case, GeneratedAggregate allocates an UnsafeFixedWidthAggregationMap that is never cleaned up because `next()` is never called on the aggregate result iterator.
      
      This patch fixes this by short-circuiting on empty inputs.
      
      This patch is an updated version of #6810.
      
      Closes #6810.
      
      Author: navis.ryu <navis@apache.org>
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #7560 from JoshRosen/SPARK-8357 and squashes the following commits:
      
      3486ce4 [Josh Rosen] Some minor cleanup
      c649310 [Josh Rosen] Revert SparkPlan change:
      3c7db0f [Josh Rosen] Merge remote-tracking branch 'origin/master' into SPARK-8357
      adc8239 [Josh Rosen] Back out Projection changes.
      c5419b3 [navis.ryu] addressed comments
      143e1ef [navis.ryu] fixed format & added test for CCE case
      735972f [navis.ryu] used new conf apis
      1a02a55 [navis.ryu] Rolled-back test-conf cleanup & fixed possible CCE & added more tests
      51178e8 [navis.ryu] addressed comments
      4d326b9 [navis.ryu] fixed test fails
      15c5afc [navis.ryu] added a test as suggested by JoshRosen
      d396589 [navis.ryu] added comments
      1b07556 [navis.ryu] [SPARK-8357] [SQL] Memory leakage on unsafe aggregation path with empty input
      9ba7c64d
    • Revert "[SPARK-9154] [SQL] codegen StringFormat" · 87d890cc
      Michael Armbrust authored
      This reverts commit 7f072c3d.
      
      Revert #7546
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #7570 from marmbrus/revert9154 and squashes the following commits:
      
      ed2c32a [Michael Armbrust] Revert "[SPARK-9154] [SQL] codegen StringFormat"
      87d890cc
    • [SPARK-5989] [MLLIB] Model save/load for LDA · 89db3c0b
      MechCoder authored
      Add support for saving and loading LDA models, both the local and distributed versions.
      
      Author: MechCoder <manojkumarsivaraj334@gmail.com>
      
      Closes #6948 from MechCoder/lda_save_load and squashes the following commits:
      
      49bcdce [MechCoder] minor style fixes
      cc14054 [MechCoder] minor
      4587d1d [MechCoder] Minor changes
      c753122 [MechCoder] Load and save the model in private methods
      2782326 [MechCoder] [SPARK-5989] Model save/load for LDA
      89db3c0b
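      A hedged sketch of the round-trip this adds (the path, `sc`, and `corpus` are placeholders; the default EM optimizer produces a `DistributedLDAModel`):

      ```scala
      import org.apache.spark.mllib.clustering.{DistributedLDAModel, LDA}

      val model = new LDA().setK(10).run(corpus).asInstanceOf[DistributedLDAModel]  // corpus: RDD[(Long, Vector)]
      model.save(sc, "hdfs:///models/lda-k10")
      val restored = DistributedLDAModel.load(sc, "hdfs:///models/lda-k10")
      ```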
    • [SPARK-9154] [SQL] codegen StringFormat · 7f072c3d
      Tarek Auel authored
      Jira: https://issues.apache.org/jira/browse/SPARK-9154
      
      Author: Tarek Auel <tarek.auel@googlemail.com>
      
      Closes #7546 from tarekauel/SPARK-9154 and squashes the following commits:
      
      a943d3e [Tarek Auel] [SPARK-9154] implicit input cast, added tests for null, support for null primitives
      10b4de8 [Tarek Auel] [SPARK-9154][SQL] codegen removed fallback trait
      cd8322b [Tarek Auel] [SPARK-9154][SQL] codegen string format
      086caba [Tarek Auel] [SPARK-9154][SQL] codegen string format
      7f072c3d
    • [SPARK-5423] [CORE] Register a TaskCompletionListener to make sure release all resources · d45355ee
      zsxwing authored
      Make `DiskMapIterator.cleanup` idempotent and register a TaskCompletionListener to make sure `cleanup` is called.
      
      Author: zsxwing <zsxwing@gmail.com>
      
      Closes #7529 from zsxwing/SPARK-5423 and squashes the following commits:
      
      3e3c413 [zsxwing] Remove TODO
      9556c78 [zsxwing] Fix NullPointerException for tests
      3d574d9 [zsxwing] Register a TaskCompletionListener to make sure release all resources
      d45355ee
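      The general pattern the fix relies on, sketched with hypothetical `openResource`/`process` helpers: register a completion listener so per-task resources are released even when the iterator is never fully consumed, and keep the cleanup idempotent because it may run from more than one path.

      ```scala
      import org.apache.spark.TaskContext

      rdd.mapPartitions { iter =>
        val resource = openResource()                     // hypothetical, e.g. a spill-file handle
        TaskContext.get().addTaskCompletionListener { _ =>
          resource.close()                                // must be safe to call more than once
        }
        iter.map(record => process(resource, record))     // hypothetical per-record work
      }
      ```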
    • [SPARK-4598] [WEBUI] Task table pagination for the Stage page · 4f7f1ee3
      zsxwing authored
      This PR adds pagination for the task table to solve the scalability issue of the stage page. Here is the initial screenshot:
      <img width="1347" alt="pagination" src="https://cloud.githubusercontent.com/assets/1000778/8679669/9e63863c-2a8e-11e5-94e4-994febcd6717.png">
      The task table only shows 100 tasks. There is a page navigation above the table. Users can click the page navigation or type the page number to jump to another page. The table can be sorted by clicking the headers. However, unlike the previous implementation, the sorting is now done on the server, so clicking a table column to sort refreshes the web page.
      
      Author: zsxwing <zsxwing@gmail.com>
      
      Closes #7399 from zsxwing/task-table-pagination and squashes the following commits:
      
      144f513 [zsxwing] Display the page navigation when the page number is out of range
      a3eee22 [zsxwing] Add extra space for the error message
      54c5b84 [zsxwing] Reset page to 1 if the user changes the page size
      c2f7f39 [zsxwing] Add a text field to let users fill the page size
      bad52eb [zsxwing] Display user-friendly error messages
      410586b [zsxwing] Scroll down to the tasks table if the url contains any sort column
      a0746d1 [zsxwing] Use expand-dag-viz-arrow-job and expand-dag-viz-arrow-stage instead of expand-dag-viz-arrow-true and expand-dag-viz-arrow-false
      b123f67 [zsxwing] Use localStorage to remember the user's actions and replay them when loading the page
      894a342 [zsxwing] Show the link cursor when hovering for headers and page links and other minor fix
      4d4fecf [zsxwing] Address Carson's comments
      d9285f0 [zsxwing] Add comments and fix the style
      74285fa [zsxwing] Merge branch 'master' into task-table-pagination
      db6c859 [zsxwing] Task table pagination for the Stage page
      4f7f1ee3
    • [SPARK-7171] Added a method to retrieve metrics sources in TaskContext · 31954910
      Jacek Lewandowski authored
      Author: Jacek Lewandowski <lewandowski.jacek@gmail.com>
      
      Closes #5805 from jacek-lewandowski/SPARK-7171 and squashes the following commits:
      
      ed20bda [Jacek Lewandowski] SPARK-7171: Added a method to retrieve metrics sources in TaskContext
      31954910
    • [SPARK-9128] [CORE] Get outerclasses and objects with only one method calling in ClosureCleaner · 9a4fd875
      Liang-Chi Hsieh authored
      JIRA: https://issues.apache.org/jira/browse/SPARK-9128
      
      Currently, in `ClosureCleaner`, the outerclasses and objects are retrieved using two different methods. However, the logic of the two methods is the same, so we can get both the outerclasses and objects with a single method call.
      
      Author: Liang-Chi Hsieh <viirya@appier.com>
      
      Closes #7459 from viirya/remove_extra_closurecleaner and squashes the following commits:
      
      7c9858d [Liang-Chi Hsieh] For comments.
      a096941 [Liang-Chi Hsieh] Merge remote-tracking branch 'upstream/master' into remove_extra_closurecleaner
      2ec5ce1 [Liang-Chi Hsieh] Remove unnecessary methods.
      4df5a51 [Liang-Chi Hsieh] Merge remote-tracking branch 'upstream/master' into remove_extra_closurecleaner
      dc110d1 [Liang-Chi Hsieh] Add method to get outerclasses and objects at the same time.
      9a4fd875
    • [SPARK-9036] [CORE] SparkListenerExecutorMetricsUpdate messages not included in JsonProtocol · f67da43c
      Ben authored
      This PR implements a JSON serializer and deserializer in the JsonProtocol to handle the (de)serialization of SparkListenerExecutorMetricsUpdate events. It also includes a unit test in the JsonProtocolSuite file. This was implemented to satisfy the improvement request in the JIRA issue SPARK-9036.
      
      Author: Ben <benjaminpiering@gmail.com>
      
      Closes #7555 from NamelessAnalyst/master and squashes the following commits:
      
      fb4e3cc [Ben] Update JSON Protocol and tests
      aa69517 [Ben] Update JSON Protocol and tests --Corrected Stage Attempt to Stage Attempt ID
      33e5774 [Ben] Update JSON Protocol Tests
      3f237e7 [Ben] Update JSON Protocol Tests
      84ca798 [Ben] Update JSON Protocol Tests
      cde57a0 [Ben] Update JSON Protocol Tests
      8049600 [Ben] Update JSON Protocol Tests
      c5bc061 [Ben] Update JSON Protocol Tests
      6f25785 [Ben] Merge remote-tracking branch 'origin/master'
      df2a609 [Ben] Update JSON Protocol
      dcda80b [Ben] Update JSON Protocol
      f67da43c
    • [SPARK-9193] Avoid assigning tasks to "lost" executor(s) · 6592a605
      Grace authored
      Currently, when some executors are killed by dynamic allocation, tasks are sometimes mis-assigned to those lost executors. Such mis-assignment causes task failures, or even job failure if the error repeats 4 times.

      The root cause is that ***killExecutors*** doesn't remove the executors being killed right away; it relies on the ***OnDisassociated*** event to refresh the active executor list later. The delay depends on your cluster status (from several milliseconds to sub-minute). When new tasks are scheduled during that window, they can be assigned to those "active" but "being killed" executors and then fail with "executor lost". A better way is to exclude the executors being killed in makeOffers(), so that no tasks are allocated to executors that are about to be lost.
      
      Author: Grace <jie.huang@intel.com>
      
      Closes #7528 from GraceH/AssignToLostExecutor and squashes the following commits:
      
      ecc1da6 [Grace] scala style fix
      6e2ed96 [Grace] Re-word makeOffers by more readable lines
      b5546ce [Grace] Add comments about the fix
      30a9ad0 [Grace] Avoid assigning tasks to lost executors
      6592a605
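      A minimal sketch of the makeOffers() change described above (types and names are illustrative, not the scheduler's): filter out executors that have been asked to die before building resource offers.

      ```scala
      case class ExecutorInfo(host: String, freeCores: Int)

      def makeOffers(
          executors: Map[String, ExecutorInfo],
          pendingToRemove: Set[String]): Seq[(String, ExecutorInfo)] = {
        // Skip executors that are being killed; their "lost" event may not have arrived yet.
        executors.collect {
          case (id, info) if !pendingToRemove.contains(id) => (id, info)
        }.toSeq
      }
      ```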