  1. Jul 16, 2015
    • Jan Prach's avatar
      [SPARK-9015] [BUILD] Clean project import in scala ide · b536d5dc
      Jan Prach authored
      Clean up Maven for a clean import in Scala IDE / Eclipse.
      
      * remove the groovy plugin, which is really not needed at all
      * drop add-source from build-helper-maven-plugin, since recent versions of scala-maven-plugin add the source directories automatically
      * add the lifecycle-mapping plugin to hide a few useless warnings from the IDE
      
      Author: Jan Prach <jendap@gmail.com>
      
      Closes #7375 from jendap/clean-project-import-in-scala-ide and squashes the following commits:
      
      c4b4c0f [Jan Prach] fix whitespaces
      5a83e07 [Jan Prach] Revert "remove java compiler warnings from java tests"
      312007e [Jan Prach] scala-maven-plugin itself add scala sources by default
      f47d856 [Jan Prach] remove spark-1.4-staging repository
      c8a54db [Jan Prach] remove java compiler warnings from java tests
      999a068 [Jan Prach] remove some maven warnings in scala ide
      80fbdc5 [Jan Prach] remove groovy and gmavenplus plugin
      b536d5dc
    • Tarek Auel's avatar
      [SPARK-8995] [SQL] cast date strings like '2015-01-01 12:15:31' to date · 4ea6480a
      Tarek Auel authored
      Jira https://issues.apache.org/jira/browse/SPARK-8995
      
      In PR #6981 we noticed that we cannot cast date strings that contain a time, like '2015-03-18 12:39:40', to date. Besides, it's not possible to cast a string like '18:03:20' to a timestamp.
      
      If a time is passed without a date, today's date is inferred.
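The intended semantics can be sketched in plain Python (the helper below is illustrative, not Spark's implementation):

```python
from datetime import date, datetime

def cast_to_date(s: str) -> date:
    """Take the date part of a 'yyyy-MM-dd[ HH:mm:ss]' string; a bare
    time string has no date part, so today's date is inferred."""
    s = s.strip()
    head = s.split(" ")[0]
    try:
        return datetime.strptime(head, "%Y-%m-%d").date()
    except ValueError:
        # No date part (e.g. '18:03:20'): validate it as a time, infer today.
        datetime.strptime(s, "%H:%M:%S")
        return date.today()
```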
      
      Author: Tarek Auel <tarek.auel@googlemail.com>
      Author: Tarek Auel <tarek.auel@gmail.com>
      
      Closes #7353 from tarekauel/SPARK-8995 and squashes the following commits:
      
      14f333b [Tarek Auel] [SPARK-8995] added tests for daylight saving time
      ca1ae69 [Tarek Auel] [SPARK-8995] style fix
      d20b8b4 [Tarek Auel] [SPARK-8995] bug fix: distinguish between 0 and null
      ef05753 [Tarek Auel] [SPARK-8995] added check for year >= 1000
      01c9ff3 [Tarek Auel] [SPARK-8995] support for time strings
      34ec573 [Tarek Auel] fixed style
      71622c0 [Tarek Auel] improved timestamp and date parsing
      0e30c0a [Tarek Auel] Hive compatibility
      cfbaed7 [Tarek Auel] fixed wrong checks
      71f89c1 [Tarek Auel] [SPARK-8995] minor style fix
      f7452fa [Tarek Auel] [SPARK-8995] removed old timestamp parsing
      30e5aec [Tarek Auel] [SPARK-8995] date and timestamp cast
      c1083fb [Tarek Auel] [SPARK-8995] cast date strings like '2015-01-01 12:15:31' to date or timestamp
      4ea6480a
    • Daniel Darabos's avatar
      [SPARK-8893] Add runtime checks against non-positive number of partitions · 01155162
      Daniel Darabos authored
      https://issues.apache.org/jira/browse/SPARK-8893
      
      > What does `sc.parallelize(1 to 3).repartition(p).collect` return? I would expect `Array(1, 2, 3)` regardless of `p`. But if `p` < 1, it returns `Array()`. I think instead it should throw an `IllegalArgumentException`.
      
      > I think the case is pretty clear for `p` < 0. But the behavior for `p` = 0 is also error prone. In fact that's how I found this strange behavior. I used `rdd.repartition(a/b)` with positive `a` and `b`, but `a/b` was rounded down to zero and the results surprised me. I'd prefer an exception instead of unexpected (corrupt) results.
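The fix boils down to a `require`-style runtime check; a toy Python model of the new behavior:

```python
def repartition(data, num_partitions):
    """Toy model of the runtime check: a positive partition count is
    required, instead of silently returning an empty result."""
    if num_partitions <= 0:
        raise ValueError(
            f"Number of partitions ({num_partitions}) must be positive.")
    # Naive round-robin partitioning, flattened back as collect() would.
    parts = [data[i::num_partitions] for i in range(num_partitions)]
    return [x for part in parts for x in part]
```

With this guard, `repartition([1, 2, 3], 0)` raises instead of returning an empty (corrupt) result.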
      
      Author: Daniel Darabos <darabos.daniel@gmail.com>
      
      Closes #7285 from darabos/patch-1 and squashes the following commits:
      
      decba82 [Daniel Darabos] Allow repartitioning empty RDDs to zero partitions.
      97de852 [Daniel Darabos] Allow zero partition count in HashPartitioner
      f6ba5fb [Daniel Darabos] Use require() for simpler syntax.
      d5e3df8 [Daniel Darabos] Require positive number of partitions in HashPartitioner
      897c628 [Daniel Darabos] Require positive maxPartitions in CoalescedRDD
      01155162
    • Liang-Chi Hsieh's avatar
      [SPARK-8807] [SPARKR] Add between operator in SparkR · 0a795336
      Liang-Chi Hsieh authored
      JIRA: https://issues.apache.org/jira/browse/SPARK-8807
      
      Add between operator in SparkR.
      
      Author: Liang-Chi Hsieh <viirya@appier.com>
      
      Closes #7356 from viirya/add_r_between and squashes the following commits:
      
      7f51b44 [Liang-Chi Hsieh] Add test for non-numeric column.
      c6a25c5 [Liang-Chi Hsieh] Add between function.
      0a795336
    • Cheng Hao's avatar
      [SPARK-8972] [SQL] Incorrect result for rollup · e2721231
      Cheng Hao authored
      We don't support complex expression keys in rollup/cube, and we don't even report an error when the GROUP BY keys are complex, which can produce very confusing/incorrect results.
      
      e.g. `SELECT key%100 FROM src GROUP BY key %100 with ROLLUP`
      
      This PR adds an additional projection during analysis for complex GROUP BY keys; that projection becomes the child of `Expand`, so from `Expand`'s perspective the GROUP BY keys are always simple keys (attribute references).
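A toy Python model of the rewrite: project the complex key expression to a simple column first, then roll up on that simple key (names are illustrative):

```python
from itertools import groupby

def rollup_count(rows, key_expr):
    """Toy model: first project the complex key expression (e.g. key % 100)
    to a simple column -- the extra Project below Expand -- then roll up
    on it, adding the grand-total row under the None key."""
    projected = [(key_expr(r), r) for r in rows]   # key is now simple
    projected.sort(key=lambda kr: kr[0])
    per_key = {k: len(list(g))
               for k, g in groupby(projected, key=lambda kr: kr[0])}
    per_key[None] = len(rows)                      # rollup grand total
    return per_key
```

For example, `rollup_count(range(250), lambda k: k % 100)` counts rows per `key % 100` value plus one grand-total row.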
      
      Author: Cheng Hao <hao.cheng@intel.com>
      
      Closes #7343 from chenghao-intel/expand and squashes the following commits:
      
      1ebbb59 [Cheng Hao] update the comment
      827873f [Cheng Hao] update as feedback
      34def69 [Cheng Hao] Add more unit test and comments
      c695760 [Cheng Hao] fix bug of incorrect result for rollup
      e2721231
    • Wenchen Fan's avatar
      [SPARK-9068][SQL] refactor the implicit type cast code · ba330968
      Wenchen Fan authored
      based on https://github.com/apache/spark/pull/7348
      
      Author: Wenchen Fan <cloud0fan@outlook.com>
      
      Closes #7420 from cloud-fan/type-check and squashes the following commits:
      
      7633fa9 [Wenchen Fan] revert
      fe169b0 [Wenchen Fan] improve test
      03b70da [Wenchen Fan] enhance implicit type cast
      ba330968
  2. Jul 15, 2015
    • Cheng Hao's avatar
      [SPARK-8245][SQL] FormatNumber/Length Support for Expression · 42dea3ac
      Cheng Hao authored
      - `BinaryType` for `Length`
      - `FormatNumber`
      
      Author: Cheng Hao <hao.cheng@intel.com>
      
      Closes #7034 from chenghao-intel/expression and squashes the following commits:
      
      e534b87 [Cheng Hao] python api style issue
      601bbf5 [Cheng Hao] add python API support
      3ebe288 [Cheng Hao] update as feedback
      52274f7 [Cheng Hao] add support for udf_format_number and length for binary
      42dea3ac
    • Yin Huai's avatar
      [SPARK-9060] [SQL] Revert SPARK-8359, SPARK-8800, and SPARK-8677 · 9c64a75b
      Yin Huai authored
      JIRA: https://issues.apache.org/jira/browse/SPARK-9060
      
      This PR reverts:
      * https://github.com/apache/spark/commit/31bd30687bc29c0e457c37308d489ae2b6e5b72a (SPARK-8359)
      * https://github.com/apache/spark/commit/24fda7381171738cbbbacb5965393b660763e562 (SPARK-8677)
      * https://github.com/apache/spark/commit/4b5cfc988f23988c2334882a255d494fc93d252e (SPARK-8800)
      
      Author: Yin Huai <yhuai@databricks.com>
      
      Closes #7426 from yhuai/SPARK-9060 and squashes the following commits:
      
      651264d [Yin Huai] Revert "[SPARK-8359] [SQL] Fix incorrect decimal precision after multiplication"
      cfda7e4 [Yin Huai] Revert "[SPARK-8677] [SQL] Fix non-terminating decimal expansion for decimal divide operation"
      2de9afe [Yin Huai] Revert "[SPARK-8800] [SQL] Fix inaccurate precision/scale of Decimal division operation"
      9c64a75b
    • Xiangrui Meng's avatar
      [SPARK-9018] [MLLIB] add stopwatches · 73d92b00
      Xiangrui Meng authored
      Add stopwatches for easy instrumentation of MLlib algorithms. This is based on the `TimeTracker` used in decision trees. The distributed version uses a Spark accumulator. jkbradley
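A minimal local sketch of such a stopwatch (the distributed version in the PR aggregates times with a Spark accumulator; this toy version is single-process only):

```python
import time

class Stopwatch:
    """Minimal sketch of a stopwatch for instrumenting algorithm phases;
    accumulates elapsed time across repeated start/stop cycles."""
    def __init__(self, name):
        self.name = name
        self.elapsed = 0.0
        self._start = None

    def start(self):
        self._start = time.monotonic()

    def stop(self):
        self.elapsed += time.monotonic() - self._start
        self._start = None
```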
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #7415 from mengxr/SPARK-9018 and squashes the following commits:
      
      40b4347 [Xiangrui Meng] == -> ===
      c477745 [Xiangrui Meng] address Joseph's comments
      f981a49 [Xiangrui Meng] add stopwatches
      73d92b00
    • Eric Liang's avatar
      [SPARK-8774] [ML] Add R model formula with basic support as a transformer · 6960a793
      Eric Liang authored
      This implements minimal R formula support as a feature transformer. Both numeric and string labels are supported, but features must be numeric for now.
      
      cc mengxr
      
      Author: Eric Liang <ekl@databricks.com>
      
      Closes #7381 from ericl/spark-8774-1 and squashes the following commits:
      
      d1959d2 [Eric Liang] clarify comment
      2db68aa [Eric Liang] second round of comments
      dc3c943 [Eric Liang] address comments
      5765ec6 [Eric Liang] fix style checks
      1f361b0 [Eric Liang] doc
      fb0826b [Eric Liang] [SPARK-8774] Add R model formula with basic support as a transformer
      6960a793
    • Reynold Xin's avatar
      [SPARK-9086][SQL] Remove BinaryNode from TreeNode. · b0645195
      Reynold Xin authored
      These traits are not super useful, and yet cause problems with toString in expressions due to the order in which they are mixed in.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #7433 from rxin/remove-binary-node and squashes the following commits:
      
      1881f78 [Reynold Xin] [SPARK-9086][SQL] Remove BinaryNode from TreeNode.
      b0645195
    • Reynold Xin's avatar
      [SPARK-9071][SQL] MonotonicallyIncreasingID and SparkPartitionID should be... · affbe329
      Reynold Xin authored
      [SPARK-9071][SQL] MonotonicallyIncreasingID and SparkPartitionID should be marked as nondeterministic.
      
      I also took the chance to more explicitly define the semantics of deterministic.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #7428 from rxin/non-deterministic and squashes the following commits:
      
      a760827 [Reynold Xin] [SPARK-9071][SQL] MonotonicallyIncreasingID and SparkPartitionID should be marked as nondeterministic.
      affbe329
    • KaiXinXiaoLei's avatar
      [SPARK-8974] Catch exceptions in allocation schedule task. · 674eb2a4
      KaiXinXiaoLei authored
      I ran into a problem. When I submit some tasks, the spark-dynamic-executor-allocation thread should send the "requestTotalExecutors" message and a new executor should start, but the thread hit a problem like:
      
      2015-07-14 19:02:17,461 | WARN  | [spark-dynamic-executor-allocation] | Error sending message [message = RequestExecutors(1)] in 1 attempts
      java.util.concurrent.TimeoutException: Futures timed out after [120 seconds]
              at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
              at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
              at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
              at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
              at scala.concurrent.Await$.result(package.scala:107)
              at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:102)
              at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:78)
              at org.apache.spark.scheduler.cluster.YarnSchedulerBackend.doRequestTotalExecutors(YarnSchedulerBackend.scala:57)
              at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.requestTotalExecutors(CoarseGrainedSchedulerBackend.scala:351)
              at org.apache.spark.SparkContext.requestTotalExecutors(SparkContext.scala:1382)
              at org.apache.spark.ExecutorAllocationManager.addExecutors(ExecutorAllocationManager.scala:343)
              at org.apache.spark.ExecutorAllocationManager.updateAndSyncNumExecutorsTarget(ExecutorAllocationManager.scala:295)
              at org.apache.spark.ExecutorAllocationManager.org$apache$spark$ExecutorAllocationManager$$schedule(ExecutorAllocationManager.scala:248)
      
      After some minutes, I found that a new ApplicationMaster had started and the submitted tasks began to run and completed. But after a long time (e.g. ten minutes), the number of executors did not reduce to zero, even though I use the default value of "spark.dynamicAllocation.minExecutors".
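The fix follows the common pattern of catching exceptions inside the scheduled task, so that one failed RPC (such as the timeout above) doesn't kill the allocation thread; a sketch in Python (names are illustrative):

```python
import logging

def safe_schedule(task):
    """Wrap a periodic task so that a failure (e.g. an RPC timeout) is
    logged as a warning instead of killing the scheduling thread."""
    def wrapped():
        try:
            task()
        except Exception:
            logging.warning(
                "Uncaught exception in allocation schedule task",
                exc_info=True)
    return wrapped
```

The scheduler then keeps invoking the wrapped task on every tick, even after a failure.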
      
      Author: KaiXinXiaoLei <huleilei1@huawei.com>
      
      Closes #7352 from KaiXinXiaoLei/dym and squashes the following commits:
      
      3603631 [KaiXinXiaoLei] change logError to logWarning
      efc4f24 [KaiXinXiaoLei] change file
      674eb2a4
    • zsxwing's avatar
      [SPARK-6602][Core]Replace Akka Serialization with Spark Serializer · b9a922e2
      zsxwing authored
      Replace Akka Serialization with Spark Serializer and add unit tests.
      
      Author: zsxwing <zsxwing@gmail.com>
      
      Closes #7159 from zsxwing/remove-akka-serialization and squashes the following commits:
      
      fc0fca3 [zsxwing] Merge branch 'master' into remove-akka-serialization
      cf81a58 [zsxwing] Fix the code style
      73251c6 [zsxwing] Add test scope
      9ef4af9 [zsxwing] Add AkkaRpcEndpointRef.hashCode
      433115c [zsxwing] Remove final
      be3edb0 [zsxwing] Support deserializing RpcEndpointRef
      ecec410 [zsxwing] Replace Akka Serialization with Spark Serializer
      b9a922e2
    • Feynman Liang's avatar
      [SPARK-9005] [MLLIB] Fix RegressionMetrics computation of explainedVariance · 536533ca
      Feynman Liang authored
      Fixes implementation of `explainedVariance` and `r2` to be consistent with their definitions as described in [SPARK-9005](https://issues.apache.org/jira/browse/SPARK-9005).
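For reference, the standard textbook definitions these metrics are being aligned with can be written as follows (a sketch using population variance; see the JIRA for Spark's exact formulas):

```python
def regression_metrics(y, y_hat):
    """Common definitions: r2 = 1 - SS_res / SS_tot, and explained
    variance = 1 - Var(y - y_hat) / Var(y). The two differ when the
    predictor is biased (nonzero mean residual)."""
    n = len(y)
    mean_y = sum(y) / n
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))
    resid = [yi - fi for yi, fi in zip(y, y_hat)]
    mean_r = sum(resid) / n
    var_res = sum((r - mean_r) ** 2 for r in resid) / n
    var_y = ss_tot / n
    return 1 - ss_res / ss_tot, 1 - var_res / var_y
```

A uniformly biased predictor (`y_hat = y + 1`) illustrates the difference: its explained variance is still 1.0, but its r2 is negative.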
      
      Author: Feynman Liang <fliang@databricks.com>
      
      Closes #7361 from feynmanliang/SPARK-9005-RegressionMetrics-bugs and squashes the following commits:
      
      f1112fc [Feynman Liang] Add explainedVariance formula
      1a3d098 [Feynman Liang] SROwen code review comments
      08a0e1b [Feynman Liang] Fix pyspark tests
      db8605a [Feynman Liang] Style fix
      bde9761 [Feynman Liang] Fix RegressionMetrics tests, relax assumption predictor is unbiased
      c235de0 [Feynman Liang] Fix RegressionMetrics tests
      4c4e56f [Feynman Liang] Fix RegressionMetrics computation of explainedVariance and r2
      536533ca
    • Steve Loughran's avatar
      SPARK-9070 JavaDataFrameSuite teardown NPEs if setup failed · ec9b6216
      Steve Loughran authored
      Fix teardown to skip the table delete if the Hive context is null.
      
      Author: Steve Loughran <stevel@hortonworks.com>
      
      Closes #7425 from steveloughran/stevel/patches/SPARK-9070-JavaDataFrameSuite-NPE and squashes the following commits:
      
      1982d38 [Steve Loughran] SPARK-9070 JavaDataFrameSuite teardown NPEs if setup failed
      ec9b6216
    • Shuo Xiang's avatar
      [SPARK-7555] [DOCS] Add doc for elastic net in ml-guide and mllib-guide · 303c1201
      Shuo Xiang authored
      jkbradley I put elastic net under the **Algorithm guide** section. Also added the formula for elastic net in `mllib-linear-methods#regularizers`.
      
      dbtsai I left the code tab for you to add example code. Do you think it is the right place?
      
      Author: Shuo Xiang <shuoxiangpub@gmail.com>
      
      Closes #6504 from coderxiang/elasticnet and squashes the following commits:
      
      f6061ee [Shuo Xiang] typo
      90a7c88 [Shuo Xiang] Merge remote-tracking branch 'upstream/master' into elasticnet
      0610a36 [Shuo Xiang] move out the elastic net to ml-linear-methods
      8747190 [Shuo Xiang] merge master
      706d3f7 [Shuo Xiang] add python code
      9bc2b4c [Shuo Xiang] typo
      db32a60 [Shuo Xiang] java code sample
      aab3b3a [Shuo Xiang] Merge remote-tracking branch 'upstream/master' into elasticnet
      a0dae07 [Shuo Xiang] simplify code
      d8616fd [Shuo Xiang] Update the definition of elastic net. Add scala code; Mention Lasso and Ridge
      df5bd14 [Shuo Xiang] use wikipeida page in ml-linear-methods.md
      78d9366 [Shuo Xiang] address comments
      8ce37c2 [Shuo Xiang] Merge branch 'elasticnet' of github.com:coderxiang/spark into elasticnet
      8f24848 [Shuo Xiang] Merge branch 'elastic-net-doc' of github.com:coderxiang/spark into elastic-net-doc
      998d766 [Shuo Xiang] Merge branch 'elastic-net-doc' of github.com:coderxiang/spark into elastic-net-doc
      89f10e4 [Shuo Xiang] Merge remote-tracking branch 'upstream/master' into elastic-net-doc
      9262a72 [Shuo Xiang] update
      7e07d12 [Shuo Xiang] update
      b32f21a [Shuo Xiang] add doc for elastic net in sparkml
      937eef1 [Shuo Xiang] Merge remote-tracking branch 'upstream/master' into elastic-net-doc
      180b496 [Shuo Xiang] Merge remote-tracking branch 'upstream/master'
      aa0717d [Shuo Xiang] Merge remote-tracking branch 'upstream/master'
      5f109b4 [Shuo Xiang] Merge remote-tracking branch 'upstream/master'
      c5c5bfe [Shuo Xiang] Merge remote-tracking branch 'upstream/master'
      98804c9 [Shuo Xiang] fix bug in topBykey and update test
      303c1201
    • Liang-Chi Hsieh's avatar
      [Minor][SQL] Allow spaces in the beginning and ending of string for Interval · 9716a727
      Liang-Chi Hsieh authored
      This is a minor fix for #7355 to allow spaces at the beginning and end of strings parsed to `Interval`.
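The approach the PR settled on, trimming before matching rather than complicating the regex, looks like this in Python (the regex itself is illustrative, not Spark's actual Interval grammar):

```python
import re

INTERVAL_RE = re.compile(r"interval\s+(\d+)\s+(\w+)")

def parse_interval(s):
    """Trim surrounding whitespace first, instead of baking optional
    leading/trailing spaces into the regex itself."""
    m = INTERVAL_RE.fullmatch(s.strip())
    if m is None:
        raise ValueError(f"invalid interval string: {s!r}")
    return int(m.group(1)), m.group(2)
```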
      
      Author: Liang-Chi Hsieh <viirya@appier.com>
      
      Closes #7390 from viirya/fix_interval_string and squashes the following commits:
      
      9eb6831 [Liang-Chi Hsieh] Use trim instead of modifying regex.
      57861f7 [Liang-Chi Hsieh] Fix scala style.
      815a9cb [Liang-Chi Hsieh] Slightly modify regex to allow spaces in the beginning and ending of string.
      9716a727
    • zhichao.li's avatar
      [SPARK-8221][SQL]Add pmod function · a9385271
      zhichao.li authored
      https://issues.apache.org/jira/browse/SPARK-8221
      
      One concern is that the result will be negative if the divisor is not positive (e.g. pmod(7, -3)), but this behavior is the same as Hive's.
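Hive's pmod is commonly described as `((a % n) + n) % n` with Java remainder semantics (truncation toward zero), which is why the result is negative for a negative divisor; a sketch in Python:

```python
def java_rem(a, n):
    # Java/Scala % truncates toward zero, unlike Python's floored %.
    return a - int(a / n) * n

def pmod(a, n):
    """Hive-style pmod: ((a % n) + n) % n under Java remainder
    semantics, so the result is negative when the divisor is negative."""
    return java_rem(java_rem(a, n) + n, n)
```

So `pmod(-7, 3)` is 2 (a plain Java `%` would give -1), while `pmod(7, -3)` is -2, the negative-divisor case the description mentions.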
      
      Author: zhichao.li <zhichao.li@intel.com>
      
      Closes #6783 from zhichao-li/pmod2 and squashes the following commits:
      
      7083eb9 [zhichao.li] update to the latest type checking
      d26dba7 [zhichao.li] add pmod
      a9385271
    • Wenchen Fan's avatar
      [SPARK-9020][SQL] Support mutable state in code gen expressions · fa4ec360
      Wenchen Fan authored
      We can keep expressions' mutable state in the generated class (like `SpecificProjection`) as member variables, so that we can read and modify it inside code-generated expressions.
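A toy Python analogue of the idea: the "generated class" holds the expression's mutable state as member variables that persist across rows (all names below are illustrative):

```python
class SpecificProjection:
    """Toy analogue of a generated projection class keeping an
    expression's mutable state as member variables, so the expression
    can read and update it across rows."""
    def __init__(self, partition_id):
        # Mutable state, e.g. for a monotonically-increasing-id expression.
        self.count = 0
        self.offset = partition_id << 33

    def apply(self, row):
        result = self.offset + self.count
        self.count += 1          # state survives to the next row
        return (*row, result)
```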
      
      Author: Wenchen Fan <cloud0fan@outlook.com>
      
      Closes #7392 from cloud-fan/mutable-state and squashes the following commits:
      
      eb3a221 [Wenchen Fan] fix order
      73144d8 [Wenchen Fan] naming improvement
      318f41d [Wenchen Fan] address more comments
      d43b65d [Wenchen Fan] address comments
      fd45c7a [Wenchen Fan] Support mutable state in code gen expressions
      fa4ec360
    • Liang-Chi Hsieh's avatar
      [SPARK-8840] [SPARKR] Add float coercion on SparkR · 6f690259
      Liang-Chi Hsieh authored
      JIRA: https://issues.apache.org/jira/browse/SPARK-8840
      
      Currently the type coercion rules don't include float type. This PR simply adds it.
      
      Author: Liang-Chi Hsieh <viirya@appier.com>
      
      Closes #7280 from viirya/add_r_float_coercion and squashes the following commits:
      
      c86dc0e [Liang-Chi Hsieh] For comments.
      dbf0c1b [Liang-Chi Hsieh] Implicitly convert Double to Float based on provided schema.
      733015a [Liang-Chi Hsieh] Add test case for DataFrame with float type.
      30c2a40 [Liang-Chi Hsieh] Update test case.
      52b5294 [Liang-Chi Hsieh] Merge remote-tracking branch 'upstream/master' into add_r_float_coercion
      6f9159d [Liang-Chi Hsieh] Add another test case.
      8db3244 [Liang-Chi Hsieh] schema also needs to support float. add test case.
      0dcc992 [Liang-Chi Hsieh] Add float coercion on SparkR.
      6f690259
    • MechCoder's avatar
      [SPARK-8706] [PYSPARK] [PROJECT INFRA] Add pylint checks to PySpark · 20bb10f8
      MechCoder authored
      This adds Pylint checks to PySpark.
      
      For now this lazily installs pylint using easy_install to /dev/pylint (similar to the pep8 script).
      We still need to figure out which rules should be allowed.
      
      Author: MechCoder <manojkumarsivaraj334@gmail.com>
      
      Closes #7241 from MechCoder/pylint and squashes the following commits:
      
      2fc7291 [MechCoder] Remove pylint test fail
      6d883a2 [MechCoder] Silence warnings and make pylint tests fail to check if it works in jenkins
      f3a5e17 [MechCoder] undefined-variable
      ca8b749 [MechCoder] Minor changes
      71629f8 [MechCoder] remove trailing whitespace
      8498ff9 [MechCoder] Remove blacklisted arguments and pointless statements check
      1dbd094 [MechCoder] Disable all checks for now
      8b8aa8a [MechCoder] Add pylint configuration file
      7871bb1 [MechCoder] [SPARK-8706] [PySpark] [Project infra] Add pylint checks to PySpark
      20bb10f8
    • zsxwing's avatar
      [SPARK-9012] [WEBUI] Escape Accumulators in the task table · adb33d36
      zsxwing authored
      If the following code is run, the task table will be broken because accumulator names aren't escaped.
      ```
      val a = sc.accumulator(1, "<table>")
      sc.parallelize(1 to 10).foreach(i => a += i)
      ```
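The fix amounts to HTML-escaping the user-supplied accumulator name before rendering it into the table; in Python terms:

```python
import html

def render_accumulator_cell(name, value):
    # Escape the user-supplied name so a value like "<table>"
    # cannot break the surrounding page markup.
    return f"<td>{html.escape(name)}: {value}</td>"
```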
      
      Before this fix,
      
      <img width="1348" alt="screen shot 2015-07-13 at 8 02 44 pm" src="https://cloud.githubusercontent.com/assets/1000778/8649295/b17c491e-299b-11e5-97ee-4e6a64074c4f.png">
      
      After this fix,
      
      <img width="1355" alt="screen shot 2015-07-13 at 8 14 32 pm" src="https://cloud.githubusercontent.com/assets/1000778/8649337/f9e9c9ec-299b-11e5-927e-35c0a2f897f5.png">
      
      Author: zsxwing <zsxwing@gmail.com>
      
      Closes #7369 from zsxwing/SPARK-9012 and squashes the following commits:
      
      a83c9b6 [zsxwing] Escape Accumulators in the task table
      adb33d36
    • Reynold Xin's avatar
      [HOTFIX][SQL] Unit test breaking. · 14935d84
      Reynold Xin authored
      14935d84
    • Feynman Liang's avatar
      [SPARK-8997] [MLLIB] Performance improvements in LocalPrefixSpan · 1bb8accb
      Feynman Liang authored
      Improves the performance of LocalPrefixSpan by implementing optimizations proposed in [SPARK-8997](https://issues.apache.org/jira/browse/SPARK-8997)
      
      Author: Feynman Liang <fliang@databricks.com>
      Author: Feynman Liang <feynman.liang@gmail.com>
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #7360 from feynmanliang/SPARK-8997-improve-prefixspan and squashes the following commits:
      
      59db2f5 [Feynman Liang] Merge pull request #1 from mengxr/SPARK-8997
      91e4357 [Xiangrui Meng] update LocalPrefixSpan impl
      9212256 [Feynman Liang] MengXR code review comments
      f055d82 [Feynman Liang] Fix failing scalatest
      2e00cba [Feynman Liang] Depth first projections
      70b93e3 [Feynman Liang] Performance improvements in LocalPrefixSpan, fix tests
      1bb8accb
    • Yijie Shen's avatar
      [SPARK-8279][SQL]Add math function round · f0e12974
      Yijie Shen authored
      JIRA: https://issues.apache.org/jira/browse/SPARK-8279
      
      Author: Yijie Shen <henry.yijieshen@gmail.com>
      
      Closes #6938 from yijieshen/udf_round_3 and squashes the following commits:
      
      07a124c [Yijie Shen] remove useless def children
      392b65b [Yijie Shen] add negative scale test in DecimalSuite
      61760ee [Yijie Shen] address reviews
      302a78a [Yijie Shen] Add dataframe function test
      31dfe7c [Yijie Shen] refactor round to make it readable
      8c7a949 [Yijie Shen] rebase & inputTypes update
      9555e35 [Yijie Shen] tiny style fix
      d10be4a [Yijie Shen] use TypeCollection to specify wanted input and implicit cast
      c3b9839 [Yijie Shen] rely on implict cast to handle string input
      b0bff79 [Yijie Shen] make round's inner method's name more meaningful
      9bd6930 [Yijie Shen] revert accidental change
      e6f44c4 [Yijie Shen] refactor eval and genCode
      1b87540 [Yijie Shen] modify checkInputDataTypes using foldable
      5486b2d [Yijie Shen] DataFrame API modification
      2077888 [Yijie Shen] codegen versioned eval
      6cd9a64 [Yijie Shen] refactor Round's constructor
      9be894e [Yijie Shen] add round functions in o.a.s.sql.functions
      7c83e13 [Yijie Shen] more tests on round
      56db4bb [Yijie Shen] Add decimal support to Round
      7e163ae [Yijie Shen] style fix
      653d047 [Yijie Shen] Add math function round
      f0e12974
    • FlytxtRnD's avatar
      [SPARK-8018] [MLLIB] KMeans should accept initial cluster centers as param · 3f6296fe
      FlytxtRnD authored
      This allows KMeans to be initialized using an existing set of cluster centers provided as a KMeansModel object. This mode of initialization performs a single run.
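A toy Python sketch of the idea: run Lloyd's algorithm once, seeded with the user-provided centers instead of random initialization (illustrative only, not the MLlib implementation):

```python
def kmeans_with_initial_model(points, centers, iterations=10):
    """Single-run Lloyd's algorithm seeded with user-provided centers
    (the initialModel); no random initialization, no multiple runs."""
    centers = [list(c) for c in centers]
    for _ in range(iterations):
        clusters = [[] for _ in centers]
        for p in points:
            j = min(range(len(centers)),
                    key=lambda i: sum((a - b) ** 2
                                      for a, b in zip(p, centers[i])))
            clusters[j].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # keep the seed center if its cluster is empty
                centers[i] = [sum(xs) / len(cl) for xs in zip(*cl)]
    return centers
```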
      
      Author: FlytxtRnD <meethu.mathew@flytxt.com>
      
      Closes #6737 from FlytxtRnD/Kmeans-8018 and squashes the following commits:
      
      94b56df [FlytxtRnD] style correction
      ef95ee2 [FlytxtRnD] style correction
      c446c58 [FlytxtRnD] documentation and numRuns warning change
      06d13ef [FlytxtRnD] numRuns corrected
      d12336e [FlytxtRnD] numRuns variable modifications
      07f8554 [FlytxtRnD] remove setRuns from setIntialModel
      e721dfe [FlytxtRnD] Merge remote-tracking branch 'upstream/master' into Kmeans-8018
      242ead1 [FlytxtRnD] corrected == to === in assert
      714acb5 [FlytxtRnD] added numRuns
      60c8ce2 [FlytxtRnD] ignore runs parameter and initialModel test suite changed
      582e6d9 [FlytxtRnD] Merge remote-tracking branch 'upstream/master' into Kmeans-8018
      3f5fc8e [FlytxtRnD] test case modified and one runs condition added
      cd5dc5c [FlytxtRnD] Merge remote-tracking branch 'upstream/master' into Kmeans-8018
      16f1b53 [FlytxtRnD] Merge branch 'Kmeans-8018', remote-tracking branch 'upstream/master' into Kmeans-8018
      e9c35d7 [FlytxtRnD] Remove getInitialModel and match cluster count criteria
      6959861 [FlytxtRnD] Accept initial cluster centers in KMeans
      3f6296fe
    • Yu ISHIKAWA's avatar
      [SPARK-6259] [MLLIB] Python API for LDA · 46927696
      Yu ISHIKAWA authored
      I implemented the Python API for LDA, but I didn't implement a method for `LDAModel.describeTopics()`, because it's a little hard to implement now. Adding documentation and example code for it would fit better in another issue.
      
      TODO: `LDAModel.describeTopics()` must also be implemented in Python, but that would be better done in another issue. Implementing it is a little hard, since the return value of `describeTopics` in Scala consists of Tuple classes.
      
      Author: Yu ISHIKAWA <yuu.ishikawa@gmail.com>
      
      Closes #6791 from yu-iskw/SPARK-6259 and squashes the following commits:
      
      6855f59 [Yu ISHIKAWA] LDA inherits object
      28bd165 [Yu ISHIKAWA] Change the place of testing code
      d7a332a [Yu ISHIKAWA] Remove the doc comment about the optimizer's default value
      083e226 [Yu ISHIKAWA] Add the comment about the supported values and the default value of `optimizer`
      9f8bed8 [Yu ISHIKAWA] Simplify casting
      faa9764 [Yu ISHIKAWA] Add some comments for the LDA paramters
      98f645a [Yu ISHIKAWA] Remove the interface for `describeTopics`. Because it is not implemented.
      57ac03d [Yu ISHIKAWA] Remove the unnecessary import in Python unit testing
      73412c3 [Yu ISHIKAWA] Fix the typo
      2278829 [Yu ISHIKAWA] Fix the indentation
      39514ec [Yu ISHIKAWA] Modify how to cast the input data
      8117e18 [Yu ISHIKAWA] Fix the validation problems by `lint-scala`
      77fd1b7 [Yu ISHIKAWA] Not use LabeledPoint
      68f0653 [Yu ISHIKAWA] Support some parameters for `ALS.train()` in Python
      25ef2ac [Yu ISHIKAWA] Resolve conflicts with rebasing
      46927696
    • Michael Armbrust's avatar
      Revert SPARK-6910 and SPARK-9027 · c6b1a9e7
      Michael Armbrust authored
      Revert #7216 and #7386. These patches seem to be causing quite a few test failures:
      
      ```
      Caused by: java.lang.reflect.InvocationTargetException
      	at sun.reflect.GeneratedMethodAccessor322.invoke(Unknown Source)
      	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      	at java.lang.reflect.Method.invoke(Method.java:606)
      	at org.apache.spark.sql.hive.client.Shim_v0_13.getPartitionsByFilter(HiveShim.scala:351)
      	at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$getPartitionsByFilter$1.apply(ClientWrapper.scala:320)
      	at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$getPartitionsByFilter$1.apply(ClientWrapper.scala:318)
      	at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$withHiveState$1.apply(ClientWrapper.scala:180)
      	at org.apache.spark.sql.hive.client.ClientWrapper.retryLocked(ClientWrapper.scala:135)
      	at org.apache.spark.sql.hive.client.ClientWrapper.withHiveState(ClientWrapper.scala:172)
      	at org.apache.spark.sql.hive.client.ClientWrapper.getPartitionsByFilter(ClientWrapper.scala:318)
      	at org.apache.spark.sql.hive.client.HiveTable.getPartitions(ClientInterface.scala:78)
      	at org.apache.spark.sql.hive.MetastoreRelation.getHiveQlPartitions(HiveMetastoreCatalog.scala:670)
      	at org.apache.spark.sql.hive.execution.HiveTableScan.doExecute(HiveTableScan.scala:137)
      	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:90)
      	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:90)
      	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
      	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:89)
      	at org.apache.spark.sql.execution.Exchange$$anonfun$doExecute$1.apply(Exchange.scala:164)
      	at org.apache.spark.sql.execution.Exchange$$anonfun$doExecute$1.apply(Exchange.scala:151)
      	at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:48)
      	... 85 more
      Caused by: MetaException(message:Filtering is supported only on partition keys of type string)
      	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$FilterBuilder.setError(ExpressionTree.java:185)
      	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.getJdoFilterPushdownParam(ExpressionTree.java:452)
      	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.generateJDOFilterOverPartitions(ExpressionTree.java:357)
      	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.generateJDOFilter(ExpressionTree.java:279)
      	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$TreeNode.generateJDOFilter(ExpressionTree.java:243)
      	at org.apache.hadoop.hive.metastore.parser.ExpressionTree.generateJDOFilterFragment(ExpressionTree.java:590)
      	at org.apache.hadoop.hive.metastore.ObjectStore.makeQueryFilterString(ObjectStore.java:2417)
      	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsViaOrmFilter(ObjectStore.java:2029)
      	at org.apache.hadoop.hive.metastore.ObjectStore.access$500(ObjectStore.java:146)
      	at org.apache.hadoop.hive.metastore.ObjectStore$4.getJdoResult(ObjectStore.java:2332)
      ```
      https://amplab.cs.berkeley.edu/jenkins/view/Spark-QA-Test/job/Spark-Master-Maven-with-YARN/2945/HADOOP_PROFILE=hadoop-2.4,label=centos/testReport/junit/org.apache.spark.sql.hive.execution/SortMergeCompatibilitySuite/auto_sortmerge_join_16/
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #7409 from marmbrus/revertMetastorePushdown and squashes the following commits:
      
      92fabd3 [Michael Armbrust] Revert SPARK-6910 and SPARK-9027
      5d3bdf2 [Michael Armbrust] Revert "[SPARK-9027] [SQL] Generalize metastore predicate pushdown"
      c6b1a9e7
    • Reynold Xin's avatar
      [SPARK-8993][SQL] More comprehensive type checking in expressions. · f23a721c
      Reynold Xin authored
      This patch makes the following changes:
      
      1. ExpectsInputTypes only defines expected input types, but does not perform any implicit type casting.
      2. ImplicitCastInputTypes is a new trait that defines expected input types and also performs implicit type casting.
      3. BinaryOperator has a new abstract function "inputType", which defines the expected input type for both left/right. Concrete BinaryOperator expressions no longer perform any implicit type casting.
      4. For BinaryOperators, convert NullType (i.e. null literals) into some accepted type so BinaryOperators don't need to handle NullTypes.
      
      TODO: fix unit tests for error reporting.
      
      I'm intentionally not changing anything in aggregate expressions because yhuai is doing a big refactoring on that right now.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #7348 from rxin/typecheck and squashes the following commits:
      
      8fcf814 [Reynold Xin] Fixed ordering of cases.
      3bb63e7 [Reynold Xin] Style fix.
      f45408f [Reynold Xin] Comment update.
      aa7790e [Reynold Xin] Moved RemoveNullTypes into ImplicitTypeCasts.
      438ea07 [Reynold Xin] space
      d55c9e5 [Reynold Xin] Removes NullTypes.
      360d124 [Reynold Xin] Fixed the rule.
      fb66657 [Reynold Xin] Convert NullType into some accepted type for BinaryOperators.
      2e22330 [Reynold Xin] Fixed unit tests.
      4932d57 [Reynold Xin] Style fix.
      d061691 [Reynold Xin] Rename existing ExpectsInputTypes -> ImplicitCastInputTypes.
      e4727cc [Reynold Xin] BinaryOperator should not be doing implicit cast.
      d017861 [Reynold Xin] Improve expression type checking.
      f23a721c
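The ideas in points 3 and 4 above can be sketched as a toy type check: a binary operator declares a single expected input type, and null literals (standing in for NullType) are coerced to that type before checking. This is purely illustrative Java, not Catalyst code; all names here are hypothetical.

```java
// Toy sketch of expected-input-type checking for a binary operator
// (illustrative; not Spark's Catalyst implementation).
public class TypeCheck {
    enum DataType { INT, STRING, NULL }

    // Point 4: treat a null literal as having the operator's expected type.
    static DataType coerce(DataType child, DataType expected) {
        return child == DataType.NULL ? expected : child;
    }

    // Point 3: one expected input type shared by both left and right.
    static boolean checkBinaryOp(DataType left, DataType right, DataType expected) {
        return coerce(left, expected) == expected
            && coerce(right, expected) == expected;
    }

    public static void main(String[] args) {
        System.out.println(checkBinaryOp(DataType.INT, DataType.NULL, DataType.INT));   // true
        System.out.println(checkBinaryOp(DataType.INT, DataType.STRING, DataType.INT)); // false
    }
}
```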
    • Sun Rui's avatar
      [SPARK-8808] [SPARKR] Fix assignments in SparkR. · f650a005
      Sun Rui authored
      Author: Sun Rui <rui.sun@intel.com>
      
      Closes #7395 from sun-rui/SPARK-8808 and squashes the following commits:
      
      ce603bc [Sun Rui] Use '<-' instead of '='.
      88590b1 [Sun Rui] Use '<-' instead of '='.
      f650a005
  3. Jul 14, 2015
    • Patrick Wendell's avatar
      5572fd0c
    • jerryshao's avatar
      [SPARK-5523] [CORE] [STREAMING] Add a cache for hostname in TaskMetrics to... · bb870e72
      jerryshao authored
      [SPARK-5523] [CORE] [STREAMING] Add a cache for hostname in TaskMetrics to decrease the memory usage and GC overhead
      
      Hostnames in TaskMetrics are created through deserialization, and the number of distinct hostnames is typically only on the order of the number of cluster nodes, so adding a cache layer to dedup the objects reduces memory usage and alleviates GC overhead, especially for long-running applications with fast job generation, like Spark Streaming.
      
      Author: jerryshao <saisai.shao@intel.com>
      Author: Saisai Shao <saisai.shao@intel.com>
      
      Closes #5064 from jerryshao/SPARK-5523 and squashes the following commits:
      
      3e2412a [jerryshao] Address the comments
      b092a81 [Saisai Shao] Add a pool to cache the hostname
      bb870e72
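The dedup cache described above can be sketched as a simple intern pool backed by a ConcurrentHashMap. This is a minimal illustration of the technique, not Spark's actual code; the class and method names are hypothetical.

```java
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of deduplicating deserialized hostname strings via an
// intern pool (illustrative; not Spark's actual implementation).
public class HostnamePool {
    private static final ConcurrentHashMap<String, String> pool =
        new ConcurrentHashMap<>();

    // Return a canonical instance for the given hostname so that many
    // deserialized TaskMetrics share a single String object per host.
    public static String intern(String hostname) {
        String existing = pool.putIfAbsent(hostname, hostname);
        return existing == null ? hostname : existing;
    }

    public static void main(String[] args) {
        String a = intern(new String("node-01"));
        String b = intern(new String("node-01"));
        // Both calls yield the same canonical instance.
        System.out.println(a == b); // true
    }
}
```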
    • huangzhaowei's avatar
      [SPARK-8820] [STREAMING] Add a configuration to set checkpoint dir. · f957796c
      huangzhaowei authored
      Add a configuration to set the checkpoint directory, for the user's convenience.
      [Jira Address](https://issues.apache.org/jira/browse/SPARK-8820)
      
      Author: huangzhaowei <carlmartinmax@gmail.com>
      
      Closes #7218 from SaintBacchus/SPARK-8820 and squashes the following commits:
      
      d49fe4b [huangzhaowei] Rename the configuration name
      66ea47c [huangzhaowei] Add the unit test.
      dd0acc1 [huangzhaowei] [SPARK-8820][Streaming] Add a configuration to set checkpoint dir.
      f957796c
    • Josh Rosen's avatar
      [SPARK-9050] [SQL] Remove unused newOrdering argument from Exchange (cleanup after SPARK-8317) · cc57d705
      Josh Rosen authored
      SPARK-8317 changed the SQL Exchange operator so that it no longer pushed sorting into Spark's shuffle layer, a change which allowed more efficient SQL-specific sorters to be used.
      
      This patch performs some leftover cleanup based on those changes:
      
      - Exchange's constructor should no longer accept a `newOrdering` since it's no longer used and no longer works as expected.
      - `addOperatorsIfNecessary` looked at shuffle input's output ordering to decide whether to sort, but this is the wrong node to be examining: it needs to look at whether the post-shuffle node has the right ordering, since shuffling will not preserve row orderings.  Thanks to davies for spotting this.
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #7407 from JoshRosen/SPARK-9050 and squashes the following commits:
      
      e70be50 [Josh Rosen] No need to wrap line
      e866494 [Josh Rosen] Refactor addOperatorsIfNecessary to make code clearer
      2e467da [Josh Rosen] Remove `newOrdering` from Exchange.
      cc57d705
    • Josh Rosen's avatar
      [SPARK-9045] Fix Scala 2.11 build break in UnsafeExternalRowSorter · e965a798
      Josh Rosen authored
      This fixes a compilation break under Scala 2.11:
      
      ```
      [error] /home/jenkins/workspace/Spark-Master-Scala211-Compile/sql/catalyst/src/main/java/org/apache/spark/sql/execution/UnsafeExternalRowSorter.java:135: error: <anonymous org.apache.spark.sql.execution.UnsafeExternalRowSorter$1> is not abstract and does not override abstract method <B>minBy(Function1<InternalRow,B>,Ordering<B>) in TraversableOnce
      [error]       return new AbstractScalaRowIterator() {
      [error]                                             ^
      [error]   where B,A are type-variables:
      [error]     B extends Object declared in method <B>minBy(Function1<A,B>,Ordering<B>)
      [error]     A extends Object declared in interface TraversableOnce
      [error] 1 error
      ```
      
      The workaround for this is to make `AbstractScalaRowIterator` into a concrete class.
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #7405 from JoshRosen/SPARK-9045 and squashes the following commits:
      
      cbcbb4c [Josh Rosen] Forgot that we can't use the ??? operator anymore
      577ba60 [Josh Rosen] [SPARK-9045] Fix Scala 2.11 build break in UnsafeExternalRowSorter.
      e965a798
    • Josh Rosen's avatar
      [SPARK-8962] Add Scalastyle rule to ban direct use of Class.forName; fix existing uses · 11e5c372
      Josh Rosen authored
      This pull request adds a Scalastyle regex rule which fails the style check if `Class.forName` is used directly.  `Class.forName` always loads classes from the default / system classloader, but in a majority of cases, we should be using Spark's own `Utils.classForName` instead, which tries to load classes from the current thread's context classloader and falls back to the classloader which loaded Spark when the context classloader is not defined.
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #7350 from JoshRosen/ban-Class.forName and squashes the following commits:
      
      e3e96f7 [Josh Rosen] Merge remote-tracking branch 'origin/master' into ban-Class.forName
      c0b7885 [Josh Rosen] Hopefully fix the last two cases
      d707ba7 [Josh Rosen] Fix uses of Class.forName that I missed in my first cleanup pass
      046470d [Josh Rosen] Merge remote-tracking branch 'origin/master' into ban-Class.forName
      62882ee [Josh Rosen] Fix uses of Class.forName or add exclusion.
      d9abade [Josh Rosen] Add stylechecker rule to ban uses of Class.forName
      11e5c372
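The lookup strategy described above (context classloader first, falling back to the loader that loaded the class itself) can be sketched like this. The class name here is hypothetical; Spark's actual helper is `Utils.classForName`.

```java
// Minimal sketch of context-classloader-first class loading
// (illustrative; not Spark's actual Utils.classForName).
public class ClassLoading {
    public static Class<?> classForName(String name) throws ClassNotFoundException {
        ClassLoader ctx = Thread.currentThread().getContextClassLoader();
        // Fall back to this class's own loader when no context loader is set.
        ClassLoader loader = ctx != null ? ctx : ClassLoading.class.getClassLoader();
        // Class.forName with an explicit loader, instead of the default
        // system classloader that a bare Class.forName(name) would use.
        return Class.forName(name, true, loader);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(classForName("java.util.ArrayList").getName()); // java.util.ArrayList
    }
}
```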
    • Sean Owen's avatar
      [SPARK-4362] [MLLIB] Make prediction probability available in NaiveBayesModel · 740b034f
      Sean Owen authored
      Add predictProbabilities to Naive Bayes, return class probabilities.
      
      Continues https://github.com/apache/spark/pull/6761
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #7376 from srowen/SPARK-4362 and squashes the following commits:
      
      23d5a76 [Sean Owen] Fix model.labels -> model.theta
      95d91fb [Sean Owen] Check that predicted probabilities sum to 1
      b32d1c8 [Sean Owen] Add predictProbabilities to Naive Bayes, return class probabilities
      740b034f
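The "probabilities sum to 1" check above hinges on normalizing per-class log-likelihoods, conventionally done with the log-sum-exp trick for numerical stability. A minimal sketch (illustrative; not the actual NaiveBayesModel code):

```java
// Turn per-class log-likelihoods into probabilities that sum to 1,
// using the log-sum-exp trick (illustrative sketch, not MLlib code).
public class PosteriorProbs {
    public static double[] normalize(double[] logProb) {
        double max = Double.NEGATIVE_INFINITY;
        for (double lp : logProb) max = Math.max(max, lp);
        double sum = 0.0;
        double[] out = new double[logProb.length];
        for (int i = 0; i < logProb.length; i++) {
            out[i] = Math.exp(logProb[i] - max); // subtract max for stability
            sum += out[i];
        }
        for (int i = 0; i < out.length; i++) out[i] /= sum;
        return out;
    }

    public static void main(String[] args) {
        double[] p = normalize(new double[] {Math.log(0.2), Math.log(0.8)});
        System.out.printf("%.3f %.3f%n", p[0], p[1]); // 0.200 0.800
    }
}
```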
    • Liang-Chi Hsieh's avatar
      [SPARK-8800] [SQL] Fix inaccurate precision/scale of Decimal division operation · 4b5cfc98
      Liang-Chi Hsieh authored
      JIRA: https://issues.apache.org/jira/browse/SPARK-8800
      
      Previously, we used Java BigDecimal's divide with a specified ROUNDING_MODE to avoid the non-terminating decimal expansion problem. However, as JihongMA reported, the division operation yields inaccurate results for some specific values.
      
      Author: Liang-Chi Hsieh <viirya@gmail.com>
      
      Closes #7212 from viirya/fix_decimal4 and squashes the following commits:
      
      4205a0a [Liang-Chi Hsieh] Fix inaccuracy precision/scale of Decimal division operation.
      4b5cfc98
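The underlying issue can be illustrated with plain BigDecimal: a quotient like 1/3 has a non-terminating decimal expansion, so `divide` needs an explicit scale and rounding mode, and the chosen scale then bounds the result's precision. A minimal sketch (not Spark's Decimal code):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

// Sketch of the non-terminating expansion problem: divide() requires an
// explicit scale and rounding mode, and that scale caps the accuracy.
public class DecimalDivide {
    public static void main(String[] args) {
        BigDecimal one = BigDecimal.ONE;
        BigDecimal three = new BigDecimal(3);
        // one.divide(three) with no scale would throw ArithmeticException.
        BigDecimal coarse = one.divide(three, 2, RoundingMode.HALF_UP);
        BigDecimal fine = one.divide(three, 10, RoundingMode.HALF_UP);
        System.out.println(coarse); // 0.33
        System.out.println(fine);   // 0.3333333333
    }
}
```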
    • zsxwing's avatar
      [SPARK-4072] [CORE] Display Streaming blocks in Streaming UI · fb1d06fc
      zsxwing authored
      Replaces #6634
      
      This PR adds `SparkListenerBlockUpdated` to SparkListener so that it can monitor all block update infos that are sent to `BlockManagerMasterEndpoint`, and also adds new tables in the Storage tab to display the stream block infos.
      
      ![screen shot 2015-07-01 at 5 19 46 pm](https://cloud.githubusercontent.com/assets/1000778/8451562/c291a6ec-2016-11e5-890d-0afc174e1f8c.png)
      
      Author: zsxwing <zsxwing@gmail.com>
      
      Closes #6672 from zsxwing/SPARK-4072-2 and squashes the following commits:
      
      df2c1d8 [zsxwing] Use xml query to check the xml elements
      54d54af [zsxwing] Add unit tests for StoragePage
      e29fb53 [zsxwing] Update as per TD's comments
      ccbee07 [zsxwing] Fix the code style
      6dc42b4 [zsxwing] Fix the replication level of blocks
      450fad1 [zsxwing] Merge branch 'master' into SPARK-4072-2
      1e9ef52 [zsxwing] Don't categorize by Executor ID
      ca0ab69 [zsxwing] Fix the code style
      3de2762 [zsxwing] Make object BlockUpdatedInfo private
      e95b594 [zsxwing] Add 'Aggregated Stream Block Metrics by Executor' table
      ba5d0d1 [zsxwing] Refactor the unit test to improve the readability
      4bbe341 [zsxwing] Revert JsonProtocol and don't log SparkListenerBlockUpdated
      b464dd1 [zsxwing] Add onBlockUpdated to EventLoggingListener
      5ba014c [zsxwing] Fix the code style
      0b1e47b [zsxwing] Add a developer api BlockUpdatedInfo
      04838a9 [zsxwing] Fix the code style
      2baa161 [zsxwing] Add unit tests
      80f6c6d [zsxwing] Address comments
      797ee4b [zsxwing] Display Streaming blocks in Streaming UI
      fb1d06fc