  1. Feb 05, 2015
    • [SPARK-5474][Build]curl should support URL redirection in build/mvn · 34147549
      GuoQiang Li authored
      Author: GuoQiang Li <witgo@qq.com>
      
      Closes #4263 from witgo/SPARK-5474 and squashes the following commits:
      
      ef397ff [GuoQiang Li] review commits
      a398324 [GuoQiang Li] curl should support URL redirection in build/mvn
    • [SPARK-5608] Improve SEO of Spark documentation pages · 4d74f060
      Matei Zaharia authored
      - Add meta description tags on some of the most important doc pages
      - Shorten the titles of some pages to have more relevant keywords; for
        example there's no reason to have "Spark SQL Programming Guide - Spark
        1.2.0 documentation", we can just say "Spark SQL - Spark 1.2.0
        documentation".
      
      Author: Matei Zaharia <matei@databricks.com>
      
      Closes #4381 from mateiz/docs-seo and squashes the following commits:
      
      4940563 [Matei Zaharia] [SPARK-5608] Improve SEO of Spark documentation pages
    • SPARK-4687. Add a recursive option to the addFile API · c4b1108c
      Sandy Ryza authored
      This adds a recursive option to the addFile API to satisfy Hive's needs.  It only allows specifying HDFS dirs that will be copied down on every executor.
      
      There are a couple outstanding questions.
      * Should we allow specifying local dirs as well?  The best way to do this would probably be to archive them.  The drawback is that it would require a fair bit of code that I don't know of any current use cases for.
      * The addFiles implementation has a caching component that I don't entirely understand. What events are we caching between? AFAICT it's users calling addFile on the same file in the same app at different times. Do we want/need to add something similar for addDirectory?
      *  The addFiles implementation will check to see if an added file already exists and has the same contents.  I imagine we want the same behavior, so planning to add this unless people think otherwise.
      
      I plan to add some tests if people are OK with the approach.
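      A minimal sketch of how the new option might be used, assuming a spark-shell style session where `sc` is already defined (the HDFS path is made up):
      
      ```scala
      import org.apache.spark.SparkFiles
      
      // Copy an HDFS directory down to every executor (hypothetical path).
      sc.addFile("hdfs:///user/hive/aux-data", recursive = true)
      
      sc.parallelize(1 to 4).foreach { _ =>
        // On each executor, SparkFiles.get resolves the local copy of the added directory.
        println(SparkFiles.get("aux-data"))
      }
      ```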
      
      Author: Sandy Ryza <sandy@cloudera.com>
      
      Closes #3670 from sryza/sandy-spark-4687 and squashes the following commits:
      
      f9fc77f [Sandy Ryza] Josh's comments
      70cd24d [Sandy Ryza] Add another test
      13da824 [Sandy Ryza] Revert executor changes
      38bf94d [Sandy Ryza] Marcelo's comments
      ca83849 [Sandy Ryza] Add addFile test
      1941be3 [Sandy Ryza] Fix test and avoid HTTP server in local mode
      31f15a9 [Sandy Ryza] Use cache recursively and fix some compile errors
      0239c3d [Sandy Ryza] Change addDirectory to addFile with recursive
      46fe70a [Sandy Ryza] SPARK-4687. Add a addDirectory API
    • [HOTFIX] MLlib build break. · 6580929f
      Reynold Xin authored
    • [MLlib] Minor: UDF style update. · c3ba4d4c
      Reynold Xin authored
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #4388 from rxin/mllib-style and squashes the following commits:
      
      61d465b [Reynold Xin] oops
      3364295 [Reynold Xin] Missed one ..
      5e068e3 [Reynold Xin] [MLlib] Minor: UDF style update.
    • [SPARK-5612][SQL] Move DataFrame implicit functions into SQLContext.implicits. · 7d789e11
      Reynold Xin authored
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #4386 from rxin/df-implicits and squashes the following commits:
      
      9d96606 [Reynold Xin] style fix
      edd296b [Reynold Xin] ReplSuite
      1c946ab [Reynold Xin] [SPARK-5612][SQL] Move DataFrame implicit functions into SQLContext.implicits.
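      A minimal sketch of what the relocated implicits look like from user code, written against the Spark 1.3-era API and assuming a spark-shell style `sc` (the case class and data are made up):
      
      ```scala
      import org.apache.spark.sql.SQLContext
      
      val sqlContext = new SQLContext(sc)
      // The DataFrame conversions now live under SQLContext.implicits rather than a global import.
      import sqlContext.implicits._
      
      case class Person(name: String, age: Int)
      val df = sc.parallelize(Seq(Person("alice", 29), Person("bob", 31))).toDF()
      df.show()
      ```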
    • [SPARK-5606][SQL] Support plus sign in HiveContext · 9d3a75ef
      q00251598 authored
      Currently Spark only supports ```SELECT -key FROM DECIMAL_UDF;``` in HiveContext.
      This patch adds support for ```SELECT +key FROM DECIMAL_UDF;``` as well.
      
      Author: q00251598 <qiyadong@huawei.com>
      
      Closes #4378 from watermen/SPARK-5606 and squashes the following commits:
      
      777f132 [q00251598] sql-case22
      74dd368 [q00251598] sql-case22
      1a67410 [q00251598] sql-case22
      c5cd5bc [q00251598] sql-case22
    • [SPARK-5599] Check MLlib public APIs for 1.3 · db346904
      Xiangrui Meng authored
      There are no breaking changes (against 1.2) in this PR. I hid the PythonMLLibAPI, which is only called by Py4J, and renamed `SparseMatrix.diag` to `SparseMatrix.spdiag`. All other changes are documentation and annotations. The `Experimental` tag is removed from `ALS.setAlpha` and `Rating`. One issue not addressed in this PR is the `setCheckpointDir` in `LDA` (https://issues.apache.org/jira/browse/SPARK-5604).
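      For reference, a tiny sketch of the renamed factory method, assuming it mirrors the old `diag` and builds a sparse diagonal matrix from a vector:
      
      ```scala
      import org.apache.spark.mllib.linalg.{SparseMatrix, Vectors}
      
      // 3x3 sparse matrix with the given values on the diagonal.
      val m = SparseMatrix.spdiag(Vectors.dense(1.0, 2.0, 3.0))
      println(m)
      ```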
      
      CC: srowen jkbradley
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #4377 from mengxr/SPARK-5599 and squashes the following commits:
      
      17975dc [Xiangrui Meng] fix tests
      4487f20 [Xiangrui Meng] remove experimental tag from each stat method because Statistics is experimental already
      3cd969a [Xiangrui Meng] remove freeman (sorry~) from StreamLA public doc
      55900f5 [Xiangrui Meng] make IR experimental and update its doc
      9b8eed3 [Xiangrui Meng] graduate Rating and setAlpha in ALS
      b854d28 [Xiangrui Meng] correct iid doc in RandomRDDs
      27f5bdd [Xiangrui Meng] update linalg docs and some new method signatures
      371721b [Xiangrui Meng] mark fpg as experimental and update its doc
      8aca7ee [Xiangrui Meng] change SLR to experimental and update the doc
      ebbb2e9 [Xiangrui Meng] mark PIC experimental and update the doc
      7830d3b [Xiangrui Meng] mark GMM experimental
      a378496 [Xiangrui Meng] use the correct subscript syntax in PIC
      c65c424 [Xiangrui Meng] update LDAModel doc
      a213b0c [Xiangrui Meng] update GMM constructor
      3993054 [Xiangrui Meng] hide algorithm in SLR
      ad6b9ce [Xiangrui Meng] Revert "make ClassificatinModel.predict(JavaRDD) return JavaDoubleRDD"
      0054684 [Xiangrui Meng] add doc to LRModel's constructor
      a89763b [Xiangrui Meng] make ClassificatinModel.predict(JavaRDD) return JavaDoubleRDD
      7c0946c [Xiangrui Meng] hide PythonMLLibAPI
    • [SPARK-5596] [mllib] ML model import/export for GLMs, NaiveBayes · 975bcef4
      Joseph K. Bradley authored
      This is a PR for Parquet-based model import/export.  Please see the design doc on [the JIRA](https://issues.apache.org/jira/browse/SPARK-4587).
      
      Note: This includes only a subset of regression and classification models:
      * NaiveBayes, SVM, LogisticRegression
      * LinearRegression, RidgeRegression, Lasso
      
      Follow-up PRs will cover other models.
      
      Sketch of current contents:
      * New traits: Saveable, Loader
      * Implementations for some algorithms
      * Also: Added LogisticRegressionModel.getThreshold method (so that unit test could check the threshold)
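      A minimal sketch of the intended round trip, assuming a spark-shell style `sc` and the bundled sample data (the output path is made up):
      
      ```scala
      import org.apache.spark.mllib.classification.{LogisticRegressionModel, LogisticRegressionWithLBFGS}
      import org.apache.spark.mllib.util.MLUtils
      
      val data = MLUtils.loadLibSVMFile(sc, "data/mllib/sample_libsvm_data.txt")
      val model = new LogisticRegressionWithLBFGS().run(data)
      
      model.save(sc, "/tmp/lr-model")                                   // Saveable
      val reloaded = LogisticRegressionModel.load(sc, "/tmp/lr-model")  // Loader
      println(reloaded.getThreshold)                                    // getThreshold added in this PR
      ```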
      
      CC: mengxr  selvinsource
      
      Author: Joseph K. Bradley <joseph@databricks.com>
      
      Closes #4233 from jkbradley/ml-import-export and squashes the following commits:
      
      87c4eb8 [Joseph K. Bradley] small cleanups
      12d9059 [Joseph K. Bradley] Many cleanups after code review.  Major changes: Storing numFeatures, numClasses in model metadata. Improvements to unit tests
      b4ee064 [Joseph K. Bradley] Reorganized save/load for regression and classification.  Renamed concepts to Saveable, Loader
      a34aef5 [Joseph K. Bradley] Merge remote-tracking branch 'upstream/master' into ml-import-export
      ee99228 [Joseph K. Bradley] scala style fix
      79675d5 [Joseph K. Bradley] cleanups in LogisticRegression after rebasing after multinomial PR
      d1e5882 [Joseph K. Bradley] organized imports
      2935963 [Joseph K. Bradley] Added save/load and tests for most classification and regression models
      c495dba [Joseph K. Bradley] made version for model import/export local to each model
      1496852 [Joseph K. Bradley] Added save/load for NaiveBayes
      8d46386 [Joseph K. Bradley] Added save/load to NaiveBayes
      1577d70 [Joseph K. Bradley] fixed issues after rebasing on master (DataFrame patch)
      64914a3 [Joseph K. Bradley] added getThreshold to SVMModel
      b1fc5ec [Joseph K. Bradley] small cleanups
      418ba1b [Joseph K. Bradley] Added save, load to mllib.classification.LogisticRegressionModel, plus test suite
    • SPARK-5607: Update to Kryo 2.24.0 to avoid including objenesis 1.2. · c23ac03c
      Patrick Wendell authored
      Our existing Kryo version actually embeds objenesis 1.2 classes in
      its jar, causing dependency conflicts during tests. This updates us to
      Kryo 2.24.0 (which was changed to not embed objenesis) to avoid this
      behavior. See the JIRA for more detail.
      
      Author: Patrick Wendell <patrick@databricks.com>
      
      Closes #4383 from pwendell/SPARK-5607 and squashes the following commits:
      
      c3b8d27 [Patrick Wendell] SPARK-5607: Update to Kryo 2.24.0 to avoid including objenesis 1.2.
  2. Feb 04, 2015
    • [SPARK-5602][SQL] Better support for creating DataFrame from local data collection · 84acd08e
      Reynold Xin authored
      1. Added methods to create DataFrames from Seq[Product]
      2. Added executeTake to avoid running a Spark job on LocalRelations.
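      A minimal sketch of the first point, assuming a spark-shell style `sc` (case class and data are made up):
      
      ```scala
      import org.apache.spark.sql.SQLContext
      
      val sqlContext = new SQLContext(sc)
      
      case class Record(id: Int, label: String)
      // Build a DataFrame directly from a local Seq[Product]; per point 2, executeTake lets
      // simple operations on such local relations avoid launching a Spark job.
      val df = sqlContext.createDataFrame(Seq(Record(1, "a"), Record(2, "b")))
      df.show()
      ```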
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #4372 from rxin/localDataFrame and squashes the following commits:
      
      f696858 [Reynold Xin] style checker.
      839ef7f [Reynold Xin] [SPARK-5602][SQL] Better support for creating DataFrame from local data collection.
    • [SPARK-5538][SQL] Fix flaky CachedTableSuite · 206f9bc3
      Reynold Xin authored
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #4379 from rxin/CachedTableSuite and squashes the following commits:
      
      f2b44ce [Reynold Xin] [SQL] Fix flaky CachedTableSuite.
    • [SQL][DataFrame] Minor cleanup. · 6b4c7f08
      Reynold Xin authored
      1. Removed LocalHiveContext in Python.
      2. Reduced DSL UDF support from 22 arguments to 10 arguments so JavaDoc/ScalaDoc look nicer.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #4374 from rxin/df-style and squashes the following commits:
      
      e493342 [Reynold Xin] [SQL][DataFrame] Minor cleanup.
    • [SPARK-4520] [SQL] This pr fixes the ArrayIndexOutOfBoundsException as raised in SPARK-4520. · dba98bf6
      Sadhan Sood authored
      
      The exception is thrown only for a thrift-generated parquet file. The array element schema name is assumed to be "array" as per ParquetAvro, but for thrift-generated parquet files it is array_name + "_tuple". This leads to a missing child of the array group type, which in turn causes the exception when the parquet rows are materialized.
      
      Author: Sadhan Sood <sadhan@tellapart.com>
      
      Closes #4148 from sadhan/SPARK-4520 and squashes the following commits:
      
      c5ccde8 [Sadhan Sood] [SPARK-4520] [SQL] This pr fixes the ArrayIndexOutOfBoundsException as raised in SPARK-4520.
    • [SPARK-5605][SQL][DF] Allow using String to specify column name in DSL aggregate functions · 1fbd124b
      Reynold Xin authored
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #4376 from rxin/SPARK-5605 and squashes the following commits:
      
      c55f5fa [Reynold Xin] Added a Python test.
      f4b8dbb [Reynold Xin] [SPARK-5605][SQL][DF] Allow using String to specify colum name in DSL aggregate functions.
    • [SPARK-5411] Allow SparkListeners to be specified in SparkConf and loaded when creating SparkContext · 9a7ce70e
      Josh Rosen authored
      
      This patch introduces a new configuration option, `spark.extraListeners`, that allows SparkListeners to be specified in SparkConf and registered before the SparkContext is initialized.  From the configuration documentation:
      
      > A comma-separated list of classes that implement SparkListener; when initializing SparkContext, instances of these classes will be created and registered with Spark's listener bus. If a class has a single-argument constructor that accepts a SparkConf, that constructor will be called; otherwise, a zero-argument constructor will be called. If no valid constructor can be found, the SparkContext creation will fail with an exception.
      
      The motivation for this patch is to allow monitoring code to be easily injected into existing Spark programs without having to modify their code.
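      A minimal sketch of the configuration described above (the listener class and its name in the conf are illustrative):
      
      ```scala
      import org.apache.spark.{SparkConf, SparkContext}
      import org.apache.spark.scheduler.{SparkListener, SparkListenerJobEnd}
      
      // A listener with a single-argument SparkConf constructor, matching the documented lookup order.
      class JobEndLogger(conf: SparkConf) extends SparkListener {
        override def onJobEnd(jobEnd: SparkListenerJobEnd): Unit =
          println(s"Job ${jobEnd.jobId} finished in app ${conf.get("spark.app.name")}")
      }
      
      val conf = new SparkConf()
        .setMaster("local[2]")
        .setAppName("extra-listeners-example")
        .set("spark.extraListeners", "JobEndLogger")  // comma-separated list of listener class names
      val sc = new SparkContext(conf)
      ```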
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #4111 from JoshRosen/SPARK-5190-register-sparklistener-in-sc-constructor and squashes the following commits:
      
      8370839 [Josh Rosen] Two minor fixes after merging with master
      6e0122c [Josh Rosen] Merge remote-tracking branch 'origin/master' into SPARK-5190-register-sparklistener-in-sc-constructor
      1a5b9a0 [Josh Rosen] Remove SPARK_EXTRA_LISTENERS environment variable.
      2daff9b [Josh Rosen] Add a couple of explanatory comments for SPARK_EXTRA_LISTENERS.
      b9973da [Josh Rosen] Add test to ensure that conf and env var settings are merged, not overriden.
      d6f3113 [Josh Rosen] Use getConstructors() instead of try-catch to find right constructor.
      d0d276d [Josh Rosen] Move code into setupAndStartListenerBus() method
      b22b379 [Josh Rosen] Instantiate SparkListeners from classes listed in configurations.
      9c0d8f1 [Josh Rosen] Revert "[SPARK-5190] Allow SparkListeners to be registered before SparkContext starts."
      217ecc0 [Josh Rosen] Revert "Add addSparkListener to JavaSparkContext"
      25988f3 [Josh Rosen] Add addSparkListener to JavaSparkContext
      163ba19 [Josh Rosen] [SPARK-5190] Allow SparkListeners to be registered before SparkContext starts.
    • [SPARK-5577] Python udf for DataFrame · dc101b0e
      Davies Liu authored
      Author: Davies Liu <davies@databricks.com>
      
      Closes #4351 from davies/python_udf and squashes the following commits:
      
      d250692 [Davies Liu] fix conflict
      34234d4 [Davies Liu] Merge branch 'master' of github.com:apache/spark into python_udf
      440f769 [Davies Liu] address comments
      f0a3121 [Davies Liu] track life cycle of broadcast
      f99b2e1 [Davies Liu] address comments
      462b334 [Davies Liu] Merge branch 'master' of github.com:apache/spark into python_udf
      7bccc3b [Davies Liu] python udf
      58dee20 [Davies Liu] clean up
    • [SPARK-5118][SQL] Fix: create table test stored as parquet as select .. · e0490e27
      guowei2 authored
      Author: guowei2 <guowei2@asiainfo.com>
      
      Closes #3921 from guowei2/SPARK-5118 and squashes the following commits:
      
      b1ba3be [guowei2] add table file check in test case
      9da56f8 [guowei2] test case only run in Shim13
      112a0b6 [guowei2] add test case
      187c7d8 [guowei2] Fix: create table test stored as parquet as select ..
    • [SQL] Use HiveContext's sessionState in HiveMetastoreCatalog.hiveDefaultTableFilePath · 548c9c2b
      Yin Huai authored
      `client.getDatabaseCurrent` uses SessionState's thread-local variable, which can be an issue.
      
      Author: Yin Huai <yhuai@databricks.com>
      
      Closes #4355 from yhuai/defaultTablePath and squashes the following commits:
      
      84a29e5 [Yin Huai] Use HiveContext's sessionState instead of using SessionState's thread local variable.
    • [SQL] Correct the default size of TimestampType and expose NumericType · 0d81645f
      Yin Huai authored
      Author: Yin Huai <yhuai@databricks.com>
      
      Closes #4314 from yhuai/minor and squashes the following commits:
      
      d3870a7 [Yin Huai] Update test.
      6e4b0c0 [Yin Huai] Two minor changes.
    • [SQL][Hiveconsole] Bring hive console code up to date and update README.md · b73d5fff
      OopsOutOfMemory authored
      Add `import org.apache.spark.sql.Dsl._` to make DSL queries work.
      Since queryExecution is not available in DataFrame, it is removed.
      
      Author: OopsOutOfMemory <victorshengli@126.com>
      Author: Sheng, Li <OopsOutOfMemory@users.noreply.github.com>
      
      Closes #4330 from OopsOutOfMemory/hiveconsole and squashes the following commits:
      
      46eb790 [Sheng, Li] Update SparkBuild.scala
      d23ee9f [OopsOutOfMemory] minor
      d4dd593 [OopsOutOfMemory] refine hive console
    • [SPARK-5367][SQL] Support star expression in udfs · 417d1118
      wangfei authored
      A follow up for #4163: support  `select array(key, *) from src`
      
      Since  array(key, *)  will not go into this case
      ```
      case Alias(f @ UnresolvedFunction(_, args), name) if containsStar(args) =>
                    val expandedArgs = args.flatMap {
                      case s: Star => s.expand(child.output, resolver)
                      case o => o :: Nil
                    }
      ```
      A case is added here to cover the corner case of array.
      
      /cc liancheng
      
      Author: wangfei <wangfei1@huawei.com>
      Author: scwf <wangfei1@huawei.com>
      
      Closes #4353 from scwf/udf-star1 and squashes the following commits:
      
      4350d17 [wangfei] minor fix
      a7cd191 [wangfei] minor fix
      0942fb1 [wangfei] follow up: support select array(key, *) from src
      6ae00db [wangfei] also fix problem with array
      da1da09 [scwf] minor fix
      f87b5f9 [scwf] added test case
      587bf7e [wangfei] compile fix
      eb93c16 [wangfei] fix star resolve issue in udf
    • [SPARK-5426][SQL] Add SparkSQL Java API helper methods. · 424cb699
      kul authored
      Right now the PR adds a few helper methods for the Java APIs. The issue was opened mainly to get rid of transformations like `.rdd` and `.toJavaRDD` in the Java API while working with `SQLContext` or `HiveContext`.
      
      Author: kul <kuldeep.bora@gmail.com>
      
      Closes #4243 from kul/master and squashes the following commits:
      
      2390fba [kul] [SPARK-5426][SQL] Add SparkSQL Java API helper methods.
    • [SPARK-5587][SQL] Support change database owner · b90dd397
      wangfei authored
      Support changing the database owner. The golden files are not added here since the golden answer depends on the tmp dir path (see https://github.com/scwf/spark/commit/6331e4ac0f982caf70531defcb957be76fe093c7).
      
      Author: wangfei <wangfei1@huawei.com>
      
      Closes #4357 from scwf/db_owner and squashes the following commits:
      
      f761533 [wangfei] remove the alter_db_owner which have added to whitelist
      79413c6 [wangfei] Revert "added golden files"
      6331e4a [wangfei] added golden files
      6f7cacd [wangfei] support change database owner
    • [SPARK-5591][SQL] Fix NoSuchObjectException for CTAS · a9f0db1f
      wangfei authored
      Currently CTAS runs successfully but throws a NoSuchObjectException.
      ```
      create table sc as select *
      from (select '2011-01-11', '2011-01-11+14:18:26' from src tablesample (1 rows)
      union all
      select '2011-01-11', '2011-01-11+15:18:26' from src tablesample (1 rows)
      union all
      select '2011-01-11', '2011-01-11+16:18:26' from src tablesample (1 rows) ) s;
      ```
      Get this exception:
      ERROR Hive: NoSuchObjectException(message:default.sc table not found)
      at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table(HiveMetaStore.java:1560)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:601)
      at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:105)
      at $Proxy8.get_table(Unknown Source)
      at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:997)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:601)
      at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:89)
      at $Proxy9.getTable(Unknown Source)
      at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:976)
      at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:950)
      at org.apache.spark.sql.hive.HiveMetastoreCatalog.tableExists(HiveMetastoreCatalog.scala:152)
      at org.apache.spark.sql.hive.HiveContext$$anon$2.org$apache$spark$sql$catalyst$analysis$OverrideCatalog$$super$tableExists(HiveContext.scala:309)
      at org.apache.spark.sql.catalyst.analysis.OverrideCatalog$class.tableExists(Catalog.scala:121)
      at org.apache.spark.sql.hive.HiveContext$$anon$2.tableExists(HiveContext.scala:309)
      at org.apache.spark.sql.hive.execution.CreateTableAsSelect.run(CreateTableAsSelect.scala:63)
      at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:53)
      
      Author: wangfei <wangfei1@huawei.com>
      
      Closes #4365 from scwf/ctas-exception and squashes the following commits:
      
      c7c67bc [wangfei] no used imports
      f54eb2a [wangfei] fix exception for CTAS
    • [SPARK-4939] move to next locality when no pending tasks · 0a89b156
      Davies Liu authored
      Currently, if there are different locality levels in a task set, the NODE_LOCAL tasks only get scheduled after all the PROCESS_LOCAL tasks are scheduled and time out with spark.locality.wait.process (3 seconds by default). In local mode, the LocalScheduler will never call resourceOffer() again once it fails to get a task with the same locality, so the NODE_LOCAL tasks will never be scheduled.
      
      This bug can be reproduced by running the example python/streaming/stateful_network_wordcount.py; it hangs after finishing a batch with some data.
      
      This patch checks whether there are tasks for the current locality level; if not, it moves to the next locality level without waiting for `spark.locality.wait.process` seconds. It works for all locality levels.
      
      Because the list of pending tasks is updated lazily, the check can produce false positives: it may not move to the next locality level even when there are no valid pending tasks, and will instead wait for the timeout.
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #3779 from davies/local_streaming and squashes the following commits:
      
      2d25fb3 [Davies Liu] Update TaskSetManager.scala
      1550668 [Davies Liu] add comment
      1c37aac [Davies Liu] address comments
      6b13824 [Davies Liu] address comments
      906f456 [Davies Liu] Merge branch 'master' of github.com:apache/spark into local_streaming
      414e79e [Davies Liu] fix bug, add logging
      ff8eabb [Davies Liu] Merge branch 'master' into local_streaming
      28d1b3c [Davies Liu] check tasks
      9d0ceab [Davies Liu] Merge branch 'master' of github.com:apache/spark into local_streaming
      37a2804 [Davies Liu] fix tests
      49bda82 [Davies Liu] address comment
      d8fb95a [Davies Liu] move to next locality level if no more tasks
      2d6ae73 [Davies Liu] add comments
      32d363f [Davies Liu] add regression test
      7d8c5a5 [Davies Liu] jump to next locality if no pending tasks for executors
    • [SPARK-4707][STREAMING] Reliable Kafka Receiver can lose data if the block generator fails to store data. · f0500f9f
      Hari Shreedharan authored
      
      The Reliable Kafka Receiver commits offsets only when events are actually stored, which ensures that on restart we will actually start where we left off. But if the failure happens in the store() call, and the block generator reports an error, the receiver does not do anything and will continue reading from the current offset rather than the last commit. This means that messages between the last commit and the current offset will be lost.
      
      This PR retries the store call four times and then stops the receiver with an error message and the last exception that was received from the store.
      
      Author: Hari Shreedharan <hshreedharan@apache.org>
      
      Closes #3655 from harishreedharan/kafka-failure-fix and squashes the following commits:
      
      5e2e7ad [Hari Shreedharan] [SPARK-4704][STREAMING] Reliable Kafka Receiver can lose data if the block generator fails to store data.
    • [SPARK-4964] [Streaming] Exactly-once semantics for Kafka · b0c00219
      cody koeninger authored
      Author: cody koeninger <cody@koeninger.org>
      
      Closes #3798 from koeninger/kafkaRdd and squashes the following commits:
      
      1dc2941 [cody koeninger] [SPARK-4964] silence ConsumerConfig warnings about broker connection props
      59e29f6 [cody koeninger] [SPARK-4964] settle on "Direct" as a naming convention for the new stream
      8c31855 [cody koeninger] [SPARK-4964] remove HasOffsetRanges interface from return types
      0df3ebe [cody koeninger] [SPARK-4964] add comments per pwendell / dibbhatt
      8991017 [cody koeninger] [SPARK-4964] formatting
      825110f [cody koeninger] [SPARK-4964] rename stuff per TD
      4354bce [cody koeninger] [SPARK-4964] per td, remove java interfaces, replace with final classes, corresponding changes to KafkaRDD constructor and checkpointing
      9adaa0a [cody koeninger] [SPARK-4964] formatting
      0090553 [cody koeninger] [SPARK-4964] javafication of interfaces
      9a838c2 [cody koeninger] [SPARK-4964] code cleanup, add more tests
      2b340d8 [cody koeninger] [SPARK-4964] refactor per TD feedback
      80fd6ae [cody koeninger] [SPARK-4964] Rename createExactlyOnceStream so it isnt over-promising, change doc
      99d2eba [cody koeninger] [SPARK-4964] Reduce level of nesting.  If beginning is past end, its actually an error (may happen if Kafka topic was deleted and recreated)
      19406cc [cody koeninger] Merge branch 'master' of https://github.com/apache/spark into kafkaRdd
      2e67117 [cody koeninger] [SPARK-4964] one potential way of hiding most of the implementation, while still allowing access to offsets (but not subclassing)
      bb80bbe [cody koeninger] [SPARK-4964] scalastyle line length
      d4a7cf7 [cody koeninger] [SPARK-4964] allow for use cases that need to override compute for custom kafka dstreams
      c1bd6d9 [cody koeninger] [SPARK-4964] use newly available attemptNumber for correct retry behavior
      548d529 [cody koeninger] Merge branch 'master' of https://github.com/apache/spark into kafkaRdd
      0458e4e [cody koeninger] [SPARK-4964] recovery of generated rdds from checkpoint
      e86317b [cody koeninger] [SPARK-4964] try seed brokers in random order to spread metadata requests
      e93eb72 [cody koeninger] [SPARK-4964] refactor to add preferredLocations.  depends on SPARK-4014
      356c7cc [cody koeninger] [SPARK-4964] code cleanup per helena
      adf99a6 [cody koeninger] [SPARK-4964] fix serialization issues for checkpointing
      1d50749 [cody koeninger] [SPARK-4964] code cleanup per tdas
      8bfd6c0 [cody koeninger] [SPARK-4964] configure rate limiting via spark.streaming.receiver.maxRate
      e09045b [cody koeninger] [SPARK-4964] add foreachPartitionWithIndex, to avoid doing equivalent map + empty foreach boilerplate
      cac63ee [cody koeninger] additional testing, fix fencepost error
      37d3053 [cody koeninger] make KafkaRDDPartition available to users so offsets can be committed per partition
      bcca8a4 [cody koeninger] Merge branch 'master' of https://github.com/apache/spark into kafkaRdd
      6bf14f2 [cody koeninger] first attempt at a Kafka dstream that allows for exactly-once semantics
      326ff3c [cody koeninger] add some tests
      38bb727 [cody koeninger] give easy access to the parameters of a KafkaRDD
      979da25 [cody koeninger] dont allow empty leader offsets to be returned
      8d7de4a [cody koeninger] make sure leader offsets can be found even for leaders that arent in the seed brokers
      4b078bf [cody koeninger] differentiate between leader and consumer offsets in error message
      3c2a96a [cody koeninger] fix scalastyle errors
      29c6b43 [cody koeninger] cleanup logging
      783b477 [cody koeninger] update tests for kafka 8.1.1
      7d050bc [cody koeninger] methods to set consumer offsets and get topic metadata, switch back to inclusive start / exclusive end to match typical kafka consumer behavior
      ce91c59 [cody koeninger] method to get consumer offsets, explicit error handling
      4dafd1b [cody koeninger] method to get leader offsets, switch rdd bound to being exclusive start, inclusive end to match offsets typically returned from cluster
      0b94b33 [cody koeninger] use dropWhile rather than filter to trim beginning of fetch response
      1d70625 [cody koeninger] WIP on kafka cluster
      76913e2 [cody koeninger] Batch oriented kafka rdd, WIP. todo: cluster metadata / finding leader
    • [SPARK-5588] [SQL] support select/filter by SQL expression · ac0b2b78
      Davies Liu authored
      ```
      df.selectExpr('a + 1', 'abs(age)')
      df.filter('age > 3')
      df[ df.age > 3 ]
      df[ ['age', 'name'] ]
      ```
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #4359 from davies/select_expr and squashes the following commits:
      
      d99856b [Davies Liu] support select/filter by SQL expression
    • [SPARK-5585] Flaky test in MLlib python · 38a416f0
      Davies Liu authored
      Add a seed for tests.
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #4358 from davies/flaky_test and squashes the following commits:
      
      02371c3 [Davies Liu] Merge branch 'master' of github.com:apache/spark into flaky_test
      ced499b [Davies Liu] add seed for test
    • [SPARK-5574] use given name prefix in dir · 5aa0f219
      Imran Rashid authored
      https://issues.apache.org/jira/browse/SPARK-5574
      
      Very minor; doesn't affect external behavior at all.
      Note that after this change, some of these dirs will no longer have "spark" in the name at all. I could change those locations that do pass in a name prefix to also include "spark", e.g. "blockmgr" -> "spark-blockmgr".
      
      Author: Imran Rashid <irashid@cloudera.com>
      
      Closes #4344 from squito/SPARK-5574 and squashes the following commits:
      
      33a84fe [Imran Rashid] use given name prefix in dir
    • [Minor] Fix incorrect warning log · a74cbbf1
      Liang-Chi Hsieh authored
      The warning log looks incorrect. Just fix it.
      
      Author: Liang-Chi Hsieh <viirya@gmail.com>
      
      Closes #4360 from viirya/fixing_typo and squashes the following commits:
      
      48fbe4f [Liang-Chi Hsieh] Fix incorrect warning log.
    • [SPARK-5379][Streaming] Add awaitTerminationOrTimeout · 4cf4cba0
      zsxwing authored
      Added `awaitTerminationOrTimeout`, which returns once the context stops or the waiting time elapses:
      * `true` if it's stopped.
      * `false` if the waiting time elapsed before returning from the method.
      * throw the reported error if it's thrown during the execution.
      
      Also deprecated `awaitTermination(timeout: Long)`.
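      A minimal sketch of using the new call in a polling shutdown loop, assuming an existing StreamingContext `ssc` (the shutdown flag is hypothetical):
      
      ```scala
      // awaitTerminationOrTimeout returns true if the context stopped within the timeout,
      // false otherwise, and rethrows any error reported by the streaming computation.
      var stopped = false
      while (!stopped) {
        stopped = ssc.awaitTerminationOrTimeout(10000L)   // wait up to 10 seconds
        if (!stopped && shutdownRequested) {              // hypothetical external flag
          ssc.stop(stopSparkContext = true, stopGracefully = true)
        }
      }
      ```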
      
      Author: zsxwing <zsxwing@gmail.com>
      
      Closes #4171 from zsxwing/SPARK-5379 and squashes the following commits:
      
      c9e660b [zsxwing] Add a unit test for awaitTerminationOrTimeout
      8a89f92 [zsxwing] Add awaitTerminationOrTimeout to python
      cdc820b [zsxwing] Add awaitTerminationOrTimeout
    • [SPARK-5341] Use maven coordinates as dependencies in spark-shell and spark-submit · 6aed719e
      Burak Yavuz authored
      This PR adds support for using maven coordinates as dependencies to spark-shell.
      Coordinates can be provided as a comma-delimited string after the flag `--packages`.
      Additional remote repositories (like Sonatype) can be supplied as a comma-delimited string after the flag
      `--repositories`.
      
      Uses the Ivy library to resolve dependencies. Unfortunately the library has no decent documentation, therefore solving more complex dependency issues can be a problem.
      
      pwendell, mateiz, mengxr
      
      **Note: This is still a WIP. The following need to be handled:**
      - [x] add docs for the methods
      - [x] take local ivy cache path as an argument
      - [x] add tests
      - [x] add Windows compatibility
      - [x] exclude unused Ivy dependencies
      
      Author: Burak Yavuz <brkyvz@gmail.com>
      
      Closes #4215 from brkyvz/SPARK-5341ivy and squashes the following commits:
      
      9215851 [Burak Yavuz] ready to merge
      db2a5cc [Burak Yavuz] changed logging to printStream
      9dae87f [Burak Yavuz] file separators changed
      71c374d [Burak Yavuz] merge conflicts fixed
      c08dc9f [Burak Yavuz] fixed merge conflicts
      3ada19a [Burak Yavuz] fixed Jenkins error (hopefully) and added comment on oro
      43c2290 [Burak Yavuz] fixed that ONE line
      231f72f [Burak Yavuz] addressed code review
      2cd6562 [Burak Yavuz] Merge branch 'master' of github.com:apache/spark into SPARK-5341ivy
      85ec5a3 [Burak Yavuz] added oro as a dependency explicitly
      ea44ca4 [Burak Yavuz] add oro back to dependencies
      cef0e24 [Burak Yavuz] IntelliJ is just messing things up
      97c4a92 [Burak Yavuz] fix more weird IntelliJ formatting
      9cf077d [Burak Yavuz] fix weird IntelliJ formatting
      dcf5e13 [Burak Yavuz] fix windows command line flags
      3a23f21 [Burak Yavuz] excluded ivy dependencies
      53423e0 [Burak Yavuz] tests added
      3705907 [Burak Yavuz] remove ivy-repo as a command line argument. Use global ivy cache as default
      c04d885 [Burak Yavuz] take path to ivy cache as a conf
      2edc9b5 [Burak Yavuz] managed to exclude Spark and it's dependencies
      a0870af [Burak Yavuz] add docs. remove unnecesary new lines
      6645af4 [Burak Yavuz] [SPARK-5341] added base implementation
      882c4c8 [Burak Yavuz] added maven dependency download
    • [SPARK-4939] revive offers periodically in LocalBackend · 83de71c4
      Davies Liu authored
      The locality timeout assumes that the SchedulerBackend revives offers periodically, but currently LocalBackend does not do that, so jobs with mixed locality levels in local mode will hang forever.
      
      This PR lets LocalBackend revive offers periodically, just like in cluster mode.
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #4147 from davies/revive and squashes the following commits:
      
      2acdf9d [Davies Liu] Update LocalBackend.scala
      3c8ca7c [Davies Liu] Update LocalBackend.scala
      d1b60d2 [Davies Liu] address comments from Kay
      33ac9bb [Davies Liu] fix build
      d0da0d5 [Davies Liu] Merge branch 'master' of github.com:apache/spark into revive
      6cf5972 [Davies Liu] fix thread-safety
      ed62a31 [Davies Liu] fix scala style
      df9008b [Davies Liu] fix typo
      bfc1396 [Davies Liu] revive offers periodically in LocalBackend
    • [SPARK-4969][STREAMING][PYTHON] Add binaryRecords to streaming · 242b4f02
      freeman authored
      In Spark 1.2 we added a `binaryRecords` input method for loading flat binary data. This format is useful for numerical array data, e.g. in scientific computing applications. This PR adds support for the same format in Streaming applications, where it is similarly useful, especially for streaming time series or sensor data.
      
      Summary of additions
      - adding `binaryRecordsStream` to Spark Streaming
      - exposing `binaryRecordsStream` in the new PySpark Streaming
      - new unit tests in Scala and Python
      
      This required adding an optional Hadoop configuration param to `fileStream` and `FileInputStream`, but was otherwise straightforward.
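      A minimal sketch of the Scala side, assuming a spark-shell style `sc` (directory and record length are made up):
      
      ```scala
      import org.apache.spark.streaming.{Seconds, StreamingContext}
      
      val ssc = new StreamingContext(sc, Seconds(5))
      // Each record arrives as a fixed-length Array[Byte]; here we assume 8-byte records.
      val records = ssc.binaryRecordsStream("hdfs:///data/sensor-stream", recordLength = 8)
      records.map(_.length).print()
      ssc.start()
      ssc.awaitTermination()
      ```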
      
      tdas davies
      
      Author: freeman <the.freeman.lab@gmail.com>
      
      Closes #3803 from freeman-lab/streaming-binary-records and squashes the following commits:
      
      b676534 [freeman] Clarify note
      5ff1b75 [freeman] Add note to java streaming context
      eba925c [freeman] Simplify notes
      c4237b8 [freeman] Add experimental tag
      30eba67 [freeman] Add filter and newFilesOnly alongside conf
      c2cfa6d [freeman] Expose new version of fileStream with conf in java
      34d20ef [freeman] Add experimental tag
      14bca9a [freeman] Add experimental tag
      b85bffc [freeman] Formatting
      47560f4 [freeman] Space formatting
      9a3715a [freeman] Refactor to reflect changes to FileInputSuite
      7373f73 [freeman] Add note and defensive assertion for byte length
      3ceb684 [freeman] Merge remote-tracking branch 'upstream/master' into streaming-binary-records
      317b6d1 [freeman] Make test inline
      fcb915c [freeman] Formatting
      becb344 [freeman] Formatting
      d3e75b2 [freeman] Add tests in python
      a4324a3 [freeman] Line length
      029d49c [freeman] Formatting
      1c739aa [freeman] Simpler default arg handling
      94d90d0 [freeman] Spelling
      2843e9d [freeman] Add params to docstring
      8b70fbc [freeman] Reorganization
      28bff9b [freeman] Fix missing arg
      9398bcb [freeman] Expose optional hadoop configuration
      23dd69f [freeman] Tests for binaryRecordsStream
      36cb0fd [freeman] Add binaryRecordsStream to scala
      fe4e803 [freeman] Add binaryRecordStream to Java API
      ecef0eb [freeman] Add binaryRecordsStream to python
      8550c26 [freeman] Expose additional argument combination
    • [SPARK-5579][SQL][DataFrame] Support for project/filter using SQL expressions · 40c4cb2f
      Reynold Xin authored
      ```scala
      df.selectExpr("abs(colA)", "colB")
      df.filter("age > 21")
      ```
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #4348 from rxin/SPARK-5579 and squashes the following commits:
      
      2baeef2 [Reynold Xin] Fix Python.
      b416372 [Reynold Xin] [SPARK-5579][SQL][DataFrame] Support for project/filter using SQL expressions.
  3. Feb 03, 2015
    • [FIX][MLLIB] fix seed handling in Python GMM · eb156318
      Xiangrui Meng authored
      If `seed` is `None` on the Python side, it will be passed in as `null`, so we should use `java.lang.Long` instead of `Long` to accept it.
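      A minimal sketch of the pattern (an illustrative helper, not the actual PythonMLLibAPI signature):
      
      ```scala
      import scala.util.Random
      
      // A Python-facing entry point takes java.lang.Long so Py4J can pass null when the
      // user supplies no seed; scala.Long cannot hold null.
      def resolveSeed(seed: java.lang.Long): Long =
        if (seed != null) seed.longValue() else new Random().nextLong()
      ```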
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #4349 from mengxr/gmm-fix and squashes the following commits:
      
      3be5926 [Xiangrui Meng] fix seed handling in Python GMM
    • [SPARK-4795][Core] Redesign the "primitive type => Writable" implicit APIs to make them be activated automatically · d37978d8
      zsxwing authored
      
      Redesign the "primitive type => Writable" implicit APIs so that they are activated automatically, without breaking binary compatibility.
      
      However, this PR will break source compatibility if people use `xxxToXxxWritable` occasionally. See the unit test in `graphx`.
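      For context, a sketch of the kind of code this affects, assuming a spark-shell style `sc` (the output path is made up):
      
      ```scala
      // Saving an RDD of primitives as a SequenceFile relies on the "primitive type => Writable"
      // implicits; after this change they are in implicit scope without importing SparkContext._.
      val pairs = sc.parallelize(Seq((1, "a"), (2, "b")))
      pairs.saveAsSequenceFile("/tmp/writable-implicits-example")
      ```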
      
      Author: zsxwing <zsxwing@gmail.com>
      
      Closes #3642 from zsxwing/SPARK-4795 and squashes the following commits:
      
      914b2d6 [zsxwing] Add implicit back to the Writables methods
      0b9017f [zsxwing] Add some docs
      a0e8509 [zsxwing] Merge branch 'master' into SPARK-4795
      39343de [zsxwing] Fix the unit test
      64853af [zsxwing] Reorganize the rest 'implicit' methods in SparkContext
    • [SPARK-5578][SQL][DataFrame] Provide a convenient way for Scala users to use UDFs · 1077f2e1
      Reynold Xin authored
      A more convenient way to define user-defined functions.
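      A minimal sketch of the convenience described, written against the Spark 1.3-era DataFrame API and assuming a spark-shell style `sc` (data and function are made up):
      
      ```scala
      import org.apache.spark.sql.SQLContext
      import org.apache.spark.sql.functions.udf
      
      val sqlContext = new SQLContext(sc)
      import sqlContext.implicits._
      
      val df = sc.parallelize(Seq(("alice", 29), ("bob", 31))).toDF("name", "age")
      
      // Define a UDF from an ordinary Scala function and apply it to a column.
      val upper = udf((s: String) => s.toUpperCase)
      df.select(upper(df("name"))).show()
      ```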
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #4345 from rxin/defineUDF and squashes the following commits:
      
      639c0f8 [Reynold Xin] udf tests.
      0a0b339 [Reynold Xin] defineUDF -> udf.
      b452b8d [Reynold Xin] Fix UDF registration.
      d2e42c3 [Reynold Xin] SQLContext.udf.register() returns a UserDefinedFunction also.
      4333605 [Reynold Xin] [SQL][DataFrame] defineUDF.