  1. Jan 15, 2016
• [SPARK-12842][TEST-HADOOP2.7] Add Hadoop 2.7 build profile · 8dbbf3e7
      Josh Rosen authored
      This patch adds a Hadoop 2.7 build profile in order to let us automate tests against that version.
      
      /cc rxin srowen
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #10775 from JoshRosen/add-hadoop-2.7-profile.
• [SPARK-12833][HOT-FIX] Reset the locale after we set it. · f6ddbb36
      Yin Huai authored
      Author: Yin Huai <yhuai@databricks.com>
      
      Closes #10778 from yhuai/resetLocale.
• [SPARK-11925][ML][PYSPARK] Add PySpark missing methods for ml.feature during Spark 1.6 QA · 5f843781
      Yanbo Liang authored
Add PySpark missing methods and params for ml.feature (see the sketch after this list):
      * ```RegexTokenizer``` should support setting ```toLowercase```.
      * ```MinMaxScalerModel``` should support output ```originalMin``` and ```originalMax```.
      * ```PCAModel``` should support output ```pc```.
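For illustration, a minimal PySpark sketch of the newly exposed pieces — a hedged sketch assuming a Spark 1.6-era `sc`/`sqlContext` and toy data, not code taken from the PR itself:

```python
from pyspark.ml.feature import PCA, RegexTokenizer
from pyspark.mllib.linalg import Vectors  # Vectors lived in mllib at this point

# RegexTokenizer: toLowercase is now settable from Python
df = sqlContext.createDataFrame([("Hello World",)], ["text"])
tokenizer = RegexTokenizer(inputCol="text", outputCol="words", toLowercase=False)
tokenizer.transform(df).show()

# PCAModel: the principal components matrix `pc` is now exposed
vec_df = sqlContext.createDataFrame(
    [(Vectors.dense([1.0, 2.0, 3.0]),), (Vectors.dense([4.0, 5.0, 7.0]),)],
    ["features"])
model = PCA(k=1, inputCol="features", outputCol="pca").fit(vec_df)
print(model.pc)  # DenseMatrix of principal components
```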
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #9908 from yanboliang/spark-11925.
• [SPARK-12575][SQL] Grammar parity with existing SQL parser · 7cd7f220
      Herman van Hovell authored
      In this PR the new CatalystQl parser stack reaches grammar parity with the old Parser-Combinator based SQL Parser. This PR also replaces all uses of the old Parser, and removes it from the code base.
      
      Although the existing Hive and SQL parser dialects were mostly the same, some kinks had to be worked out:
- The SQL Parser allowed syntax like ```APPROXIMATE(0.01) COUNT(DISTINCT a)```. Making this work would have required either hardcoding approximate operators in the parser or creating a dedicated approximate expression. ```APPROXIMATE_COUNT_DISTINCT(a, 0.01)``` would do the same job and is much easier to maintain, so this PR **removes** the keyword.
      - The old SQL Parser supports ```LIMIT``` clauses in nested queries. This is **not supported** anymore. See https://github.com/apache/spark/pull/10689 for the rationale for this.
- Hive supports a charset-name/string-literal combination: for instance, the expression ```_ISO-8859-1 0x4341464562616265``` yields the string ```CAFEbabe```. Hive only allows charset names that start with an underscore, which is quite annoying in Spark because tuple field names also start with an underscore. This PR **removes** the feature from the parser; it would be quite easy to implement it as an Expression later on.
- Hive and the SQL Parser treat decimal literals differently. Hive turns any decimal into a ```Double```, whereas the SQL Parser converted a non-scientific decimal into a ```BigDecimal``` and a scientific decimal into a ```Double```. We follow Hive's behavior here. The new parser also supports a big decimal literal, for instance ```81923801.42BD```, which can be used when a big decimal is needed (see the sketch after this list).
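For illustration, a hedged sketch of the new literal behavior, run through a hypothetical `sqlContext`; the types shown in comments follow the description above, not actual output:

```python
# Plain decimals now follow Hive: they are parsed as Double
sqlContext.sql("SELECT 81923801.42").printSchema()    # column type: double

# The new BD suffix yields a BigDecimal when exact precision is needed
sqlContext.sql("SELECT 81923801.42BD").printSchema()  # column type: decimal
```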
      
      cc rxin viirya marmbrus yhuai cloud-fan
      
      Author: Herman van Hovell <hvanhovell@questtec.nl>
      
      Closes #10745 from hvanhovell/SPARK-12575-2.
• [SQL][MINOR] BoundReference do not need to be NamedExpression · 3f1c58d6
      Wenchen Fan authored
We made it a `NamedExpression` to work around some hacky cases a long time ago, and now it seems safe to remove that.
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #10765 from cloud-fan/minor.
• [SPARK-12716][WEB UI] Add a TOTALS row to the Executors Web UI · 61c45876
      Alex Bozarth authored
      Added a Totals table to the top of the page to display the totals of each applicable column in the executors table.
      
      Old Description:
      ~~Created a TOTALS row containing the totals of each column in the executors UI. By default the TOTALS row appears at the top of the table. When a column is sorted the TOTALS row will always sort to either the top or bottom of the table.~~
      
      Author: Alex Bozarth <ajbozart@us.ibm.com>
      
      Closes #10668 from ajbozarth/spark12716.
• Fix typo · 0bb73554
      Julien Baley authored
      disvoered => discovered
      
      Author: Julien Baley <julien.baley@gmail.com>
      
      Closes #10773 from julienbaley/patch-1.
• [SPARK-12833][HOT-FIX] Fix scala 2.11 compilation. · 513266c0
      Yin Huai authored
It seems https://github.com/apache/spark/commit/5f83c6991c95616ecbc2878f8860c69b2826f56c broke the Scala 2.11 compilation.
      
      Author: Yin Huai <yhuai@databricks.com>
      
      Closes #10774 from yhuai/fixScala211Compile.
• [SPARK-12667] Remove block manager's internal "external block store" API · ad1503f9
      Reynold Xin authored
      This pull request removes the external block store API. This is rarely used, and the file system interface is actually a better, more standard way to interact with external storage systems.
      
      There are some other things to remove also, as pointed out by JoshRosen. We will do those as follow-up pull requests.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #10752 from rxin/remove-offheap.
• [SPARK-12833][SQL] Initial import of spark-csv · 5f83c699
      Hossein authored
      CSV is the most common data format in the "small data" world. It is often the first format people want to try when they see Spark on a single node. Having to rely on a 3rd party component for this leads to poor user experience for new users. This PR merges the popular spark-csv data source package (https://github.com/databricks/spark-csv) with SparkSQL.
      
This is a first PR to bring the functionality to the Spark 2.0 master branch. We will complete the items outlined in the design document (see the JIRA attachment) in follow-up pull requests.
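For illustration, a hedged sketch of what the built-in data source enables; the option names follow the spark-csv package conventions, and the path and `sqlContext` are assumptions:

```python
# Read a CSV file without any third-party package on the classpath
df = (sqlContext.read
      .format("csv")
      .option("header", "true")       # first line holds column names
      .option("inferSchema", "true")  # sample the data to pick column types
      .load("/tmp/people.csv"))
df.printSchema()
```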
      
      Author: Hossein <hossein@databricks.com>
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #10766 from rxin/csv.
• [MINOR] [SQL] GeneratedExpressionCode -> ExprCode · c5e7076d
      Davies Liu authored
      GeneratedExpressionCode is too long
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #10767 from davies/renaming.
• [SPARK-11031][SPARKR] Method str() on a DataFrame · ba4a6419
      Oscar D. Lara Yejas authored
      Author: Oscar D. Lara Yejas <odlaraye@oscars-mbp.usca.ibm.com>
      Author: Oscar D. Lara Yejas <olarayej@mail.usf.edu>
      Author: Oscar D. Lara Yejas <oscar.lara.yejas@us.ibm.com>
      Author: Oscar D. Lara Yejas <odlaraye@oscars-mbp.attlocal.net>
      
      Closes #9613 from olarayej/SPARK-11031.
• [SPARK-2930] clarify docs on using webhdfs with spark.yarn.access.namenodes · 96fb894d
Tom Graves authored
      
      Author: Tom Graves <tgraves@yahoo-inc.com>
      
      Closes #10699 from tgravescs/SPARK-2930.
• [SPARK-12655][GRAPHX] GraphX does not unpersist RDDs · d0a5c32b
      Jason Lee authored
Some VertexRDDs and EdgeRDDs are created during the intermediate steps of g.connectedComponents() but are unnecessarily left cached after the method is done. The fix is to unpersist these RDDs once they are no longer in use.
      
      A test case is added to confirm the fix for the reported bug.
      
      Author: Jason Lee <cjlee@us.ibm.com>
      
      Closes #10713 from jasoncl/SPARK-12655.
• [SPARK-12830] Java style: disallow trailing whitespaces. · fe7246fe
      Reynold Xin authored
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #10764 from rxin/SPARK-12830.
2. Jan 14, 2016
• [SPARK-12829] Turn Java style checker on · 591c88c9
      Reynold Xin authored
      It was previously turned off because there was a problem with a pull request. We should turn it on now.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #10763 from rxin/SPARK-12829.
• [SPARK-12708][UI] Sorting task error in Stages Page when yarn mode. · 32cca933
      Koyo Yoshida authored
If the sort column contains a slash (e.g. "Executor ID / Host") in yarn mode, sorting fails with the following message.
      
      ![spark-12708](https://cloud.githubusercontent.com/assets/6679275/12193320/80814f8c-b62a-11e5-9914-7bf3907029df.png)
      
It's similar to SPARK-4313.
      
      Author: root <root@R520T1.(none)>
      Author: Koyo Yoshida <koyo0615@gmail.com>
      
      Closes #10663 from yoshidakuy/SPARK-12708.
• [SPARK-12813][SQL] Eliminate serialization for back to back operations · cc7af86a
      Michael Armbrust authored
      The goal of this PR is to eliminate unnecessary translations when there are back-to-back `MapPartitions` operations.  In order to achieve this I also made the following simplifications:
      
 - Operators no longer hold encoders; instead they hold only the expressions that they need.  The benefits here are twofold: the expressions are visible to transformations, so they go through the normal resolution/binding process, and now that they are visible we can change them on a case-by-case basis.
 - Operators no longer have type parameters.  Since the engine is responsible for its own type checking, having the types visible to the compiler was an unnecessary complication.  We still leverage the Scala compiler in the companion factory when constructing a new operator, but after that the types are discarded.
      
      Deferred to a follow up PR:
 - Remove as much of the resolution/binding from Dataset/GroupedDataset as possible. We should still eagerly check resolution, though, and throw an error in the case of a mismatch for an `as` operation.
       - Eliminate serializations in more cases by adding more cases to `EliminateSerialization`
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #10747 from marmbrus/encoderExpressions.
• [SPARK-12174] Speed up BlockManagerSuite getRemoteBytes() test · 25782981
      Josh Rosen authored
      This patch significantly speeds up the BlockManagerSuite's "SPARK-9591: getRemoteBytes from another location when Exception throw" test, reducing the test time from 45s to ~250ms. The key change was to set `spark.shuffle.io.maxRetries` to 0 (the code previously set `spark.network.timeout` to `2s`, but this didn't make a difference because the slowdown was not due to this timeout).
      
      Along the way, I also cleaned up the way that we handle SparkConf in BlockManagerSuite: previously, each test would mutate a shared SparkConf instance, while now each test gets a fresh SparkConf.
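For illustration, a hedged sketch of the two conf patterns the paragraphs above describe; the keys are standard Spark properties, while the helper name is hypothetical:

```python
from pyspark import SparkConf

def fresh_conf():
    # Each test builds its own SparkConf instead of mutating a shared one
    conf = SparkConf()
    # Fail fetches immediately rather than retrying; this, not a shorter
    # network timeout, is what removes the ~45s wait in the test
    conf.set("spark.shuffle.io.maxRetries", "0")
    return conf
```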
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #10759 from JoshRosen/SPARK-12174.
• [SPARK-12821][BUILD] Style checker should run when some configuration files for style are modified but no source files are · bcc7373f
Kousuke Saruta authored
      
When running the `run-tests` script, style checkers currently run only when source files are modified; they should also run when configuration files related to style are modified.
      
      Author: Kousuke Saruta <sarutak@oss.nttdata.co.jp>
      
      Closes #10754 from sarutak/SPARK-12821.
• [SPARK-12771][SQL] Simplify CaseWhen code generation · 902667fd
      Reynold Xin authored
      The generated code for CaseWhen uses a control variable "got" to make sure we do not evaluate more branches once a branch is true. Changing that to generate just simple "if / else" would be slightly more efficient.
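A schematic sketch of the change, as pseudocode in comments rather than the actual generated Java:

```python
# Old shape: a control variable guards every branch
#   boolean got = false;
#   if (!got && cond1) { result = value1; got = true; }
#   if (!got && cond2) { result = value2; got = true; }
#   if (!got) { result = elseValue; }
#
# New shape: a plain if / else chain, so later branches are never even tested
#   if (cond1)      { result = value1; }
#   else if (cond2) { result = value2; }
#   else            { result = elseValue; }
```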
      
      This closes #10737.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #10755 from rxin/SPARK-12771.
• [SPARK-12784][UI] Fix Spark UI IndexOutOfBoundsException with dynamic allocation · 501e99ef
      Shixiong Zhu authored
      Add `listener.synchronized` to get `storageStatusList` and `execInfo` atomically.
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #10728 from zsxwing/SPARK-12784.
• [SPARK-9844][CORE] File appender race condition during shutdown · 56cdbd65
      Bryan Cutler authored
When an Executor process is destroyed, the FileAppender that is asynchronously reading the stderr stream of the process can throw an IOException during read because the stream is closed.  Before the ExecutorRunner destroys the process, the FileAppender thread is flagged to stop.  This PR wraps the inputStream.read call of the FileAppender in a try/catch block so that if an IOException is thrown and the thread has been flagged to stop, it will safely ignore the exception.  Additionally, the FileAppender thread was changed to use Utils.tryWithSafeFinally to better log any exceptions that do occur.  Added unit tests to verify that an IOException is thrown and logged if the FileAppender is not flagged to stop, and that no IOException occurs when the flag is set.
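A minimal sketch of the guarded-read pattern described above — a Python stand-in for the actual Scala, with hypothetical names:

```python
def append_stream_to_file(stream, appender):
    """Copy a process output stream until EOF or until the appender is stopped."""
    try:
        while not appender.stopped:
            chunk = stream.read(8192)
            if not chunk:  # EOF: the process exited normally
                break
            appender.write(chunk)
    except IOError:
        # The stream was closed underneath us; that is expected only if we
        # were already asked to stop, otherwise surface the failure.
        if not appender.stopped:
            raise
```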
      
      Author: Bryan Cutler <cutlerb@gmail.com>
      
      Closes #10714 from BryanCutler/file-appender-read-ioexception-SPARK-9844.
• [SPARK-12707][SPARK SUBMIT] Remove submit python/R scripts through pyspark/sparkR · 8f13cd4c
Jeff Zhang authored
      
      Author: Jeff Zhang <zjffdu@apache.org>
      
      Closes #10658 from zjffdu/SPARK-12707.
• [SPARK-12756][SQL] use hash expression in Exchange · 962e9bcf
      Wenchen Fan authored
This PR makes bucketing and exchange share one common hash algorithm, so that we can guarantee the data distribution is the same between shuffle and bucketed data sources, which enables us to shuffle only one side when joining a bucketed table with a normal one.
      
      This PR also fixes the tests that are broken by the new hash behaviour in shuffle.
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #10703 from cloud-fan/use-hash-expr-in-shuffle.
3. Jan 13, 2016
• [SPARK-12819] Deprecate TaskContext.isRunningLocally() · e2ae7bd0
      Josh Rosen authored
      We've already removed local execution but didn't deprecate `TaskContext.isRunningLocally()`; we should deprecate it for 2.0.
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #10751 from JoshRosen/remove-local-exec-from-taskcontext.
• [SPARK-12703][MLLIB][DOC][PYTHON] Fixed pyspark.mllib.clustering.KMeans user guide example · 20d8ef85
      Joseph K. Bradley authored
Fixed the WSSSE computation in the Python MLlib KMeans user guide example by using the new computeCost method API in Python.
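A hedged sketch of the corrected usage the guide now shows, with toy data and assuming a running `sc`:

```python
from pyspark.mllib.clustering import KMeans

data = sc.parallelize([[0.0, 0.0], [1.0, 1.0], [9.0, 8.0], [8.0, 9.0]])
model = KMeans.train(data, k=2, maxIterations=10)

# The new API: computeCost returns WSSSE directly, instead of the old
# pattern of mapping a hand-written squared-distance function over the data
wssse = model.computeCost(data)
print("Within Set Sum of Squared Error = " + str(wssse))
```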
      
      Author: Joseph K. Bradley <joseph@databricks.com>
      
      Closes #10707 from jkbradley/kmeans-doc-fix.
• [SPARK-12026][MLLIB] ChiSqTest gets slower and slower over time when number of features is large · 021dafc6
      Yuhao Yang authored
      jira: https://issues.apache.org/jira/browse/SPARK-12026
      
      The issue is valid as features.toArray.view.zipWithIndex.slice(startCol, endCol) becomes slower as startCol gets larger.
      
I tested locally; the change improves performance, and the running time was stable.
      
      Author: Yuhao Yang <hhbyyh@gmail.com>
      
      Closes #10146 from hhbyyh/chiSq.
• [SPARK-12400][SHUFFLE] Avoid generating temp shuffle files for empty partitions · cd81fc9e
      jerryshao authored
This problem lies in `BypassMergeSortShuffleWriter`: an empty partition also generates a temp shuffle file of several bytes. This change creates the file only when the partition is not empty.

This problem exists only here; there is no such issue in `HashShuffleWriter`.
      
      Please help to review, thanks a lot.
      
      Author: jerryshao <sshao@hortonworks.com>
      
      Closes #10376 from jerryshao/SPARK-12400.
• [SPARK-12690][CORE] Fix NPE in UnsafeInMemorySorter.free() · eabc7b8e
      Carson Wang authored
      I hit the exception below. The `UnsafeKVExternalSorter` does pass `null` as the consumer when creating an `UnsafeInMemorySorter`. Normally the NPE doesn't occur because the `inMemSorter` is set to null later and the `free()` method is not called. It happens when there is another exception like OOM thrown before setting `inMemSorter` to null. Anyway, we can add the null check to avoid it.
      
      ```
      ERROR spark.TaskContextImpl: Error in TaskCompletionListener
      java.lang.NullPointerException
              at org.apache.spark.util.collection.unsafe.sort.UnsafeInMemorySorter.free(UnsafeInMemorySorter.java:110)
              at org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.cleanupResources(UnsafeExternalSorter.java:288)
              at org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter$1.onTaskCompletion(UnsafeExternalSorter.java:141)
              at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:79)
              at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:77)
              at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
              at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
              at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:77)
              at org.apache.spark.scheduler.Task.run(Task.scala:91)
              at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
              at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
              at java.lang.Thread.run(Thread.java:722)
      ```
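A minimal sketch of the null-guard fix — a Python stand-in for the Java, with illustrative field names:

```python
def free(self):
    # UnsafeKVExternalSorter legitimately passes a null consumer, so guard
    # the callback instead of assuming it is always set
    if self.consumer is not None:
        self.consumer.free_array(self.array)
    self.array = None
```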
      
      Author: Carson Wang <carson.wang@intel.com>
      
      Closes #10637 from carsonwang/FixNPE.
• [SPARK-12791][SQL] Simplify CaseWhen by breaking "branches" into "conditions" and "values" · cbbcd8e4
      Reynold Xin authored
This pull request rewrites the CaseWhen expression to break the single, monolithic "branches" field into a sequence of tuples (Seq[(condition, value)]) and an explicit optional elseValue field.

Prior to this pull request, each even position in "branches" represented the condition for a branch, and each odd position represented its value. Their use has been pretty confusing, requiring a lot of sliding-window or grouped(2) calls.
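A schematic sketch of the representational change, with placeholder expressions as strings:

```python
# Before: one flat list, alternating condition / value, with an optional
# trailing else value -- walking it needed grouped(2)-style tricks
old_branches = ["a = 1", "'one'", "a = 2", "'two'", "'other'"]

# After: explicit (condition, value) pairs plus a separate optional else
branches = [("a = 1", "'one'"), ("a = 2", "'two'")]
else_value = "'other'"  # or None when absent

for cond, value in branches:  # consumers can now iterate naturally
    print("WHEN %s THEN %s" % (cond, value))
```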
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #10734 from rxin/simplify-case.
• [SPARK-12642][SQL] improve the hash expression to be decoupled from unsafe row · c2ea79f9
      Wenchen Fan authored
      https://issues.apache.org/jira/browse/SPARK-12642
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #10694 from cloud-fan/hash-expr.
• [SPARK-12268][PYSPARK] Make pyspark shell pythonstartup work under python3 · e4e0b3f7
      Erik Selin authored
      This replaces the `execfile` used for running custom python shell scripts
      with explicit open, compile and exec (as recommended by 2to3). The reason
      for this change is to make the pythonstartup option compatible with python3.
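A minimal sketch of the replacement pattern; the PYTHONSTARTUP lookup is how the shell locates the script, though exact variable names in the PR may differ:

```python
import os

startup_path = os.environ.get("PYTHONSTARTUP")
if startup_path:
    # execfile(startup_path) no longer exists on Python 3; 2to3 recommends:
    with open(startup_path) as f:
        code = compile(f.read(), startup_path, "exec")
        exec(code)
```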
      
      Author: Erik Selin <erik.selin@gmail.com>
      
      Closes #10255 from tyro89/pythonstartup-python3.
• [SPARK-9383][PROJECT-INFRA] PR merge script should reset back to previous branch when possible · 97e0c7c5
      Josh Rosen authored
This patch modifies our PR merge script to reset back to a named branch when restoring the original checkout upon exit. If the committer was originally checked out to a detached head, they will be restored to that same ref (the same as today's behavior).
      
      This is a slightly updated version of #7569, with an extra fix to handle the detached head corner-case.
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #10709 from JoshRosen/SPARK-9383.
• [SPARK-12761][CORE] Remove duplicated code · 38148f73
      Jakob Odersky authored
      Removes some duplicated code that was reintroduced during a merge.
      
      Author: Jakob Odersky <jodersky@gmail.com>
      
      Closes #10711 from jodersky/repl-2.11-duplicate.
• [SPARK-12805][MESOS] Fixes documentation on Mesos run modes · cc91e218
      Luc Bourlier authored
The default run mode has changed, but the documentation didn't fully reflect the change.
      
      Author: Luc Bourlier <luc.bourlier@typesafe.com>
      
      Closes #10740 from skyluc/issue/mesos-modes-doc.
• [SPARK-9297] [SQL] Add covar_pop and covar_samp · 63eee86c
      Liang-Chi Hsieh authored
      JIRA: https://issues.apache.org/jira/browse/SPARK-9297
      
      Add two aggregation functions: covar_pop and covar_samp.
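A hedged usage sketch, assuming a hypothetical table `t` with numeric columns `x` and `y` and a `sqlContext` with the functions registered:

```python
# covar_pop divides by n, covar_samp by n - 1 (Bessel's correction)
sqlContext.sql("SELECT covar_pop(x, y), covar_samp(x, y) FROM t").show()
```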
      
      Author: Liang-Chi Hsieh <viirya@gmail.com>
      Author: Liang-Chi Hsieh <viirya@appier.com>
      
      Closes #10029 from viirya/covar-funcs.
• [SPARK-12692][BUILD][HOT-FIX] Fix the scala style of KinesisBackedBlockRDDSuite.scala. · d6fd9b37
      Yin Huai authored
https://github.com/apache/spark/pull/10736 was merged yesterday and caused the master build to start failing because of a style issue.
      
      Author: Yin Huai <yhuai@databricks.com>
      
      Closes #10742 from yhuai/fixStyle.
• [SPARK-12692][BUILD] Enforce style checking about white space before comma · 3d81d63f
      Kousuke Saruta authored
This is the final PR for SPARK-12692.
We have removed all whitespace before commas from the code, so let's enforce style checking.
      
      Author: Kousuke Saruta <sarutak@oss.nttdata.co.jp>
      
      Closes #10736 from sarutak/SPARK-12692-followup-enforce-checking.
• [SPARK-12692][BUILD][SQL] Scala style: Fix the style violation (Space before ",") · cb7b864a
      Kousuke Saruta authored
Fix the style violation (space before `,` and `:`).
This PR is a follow-up to #10643 and a rework of #10685.
      
      Author: Kousuke Saruta <sarutak@oss.nttdata.co.jp>
      
      Closes #10732 from sarutak/SPARK-12692-followup-sql.