  Oct 27, 2015
    • [SPARK-10484] [SQL] Optimize the cartesian join with broadcast join for some cases · d9c60398
      Cheng Hao authored
      In some cases, we can broadcast the smaller relation in a cartesian join, which improves performance significantly.
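
      As a rough, hedged illustration (not taken from this patch; the table names and the spark-shell style `sqlContext` are assumed), this is the kind of query that benefits, using the existing `broadcast` hint from `org.apache.spark.sql.functions`:

      ```scala
      // Hypothetical example: a cartesian (cross) join where one side is small.
      import org.apache.spark.sql.functions.broadcast

      val facts = sqlContext.table("facts")       // large relation (assumed name)
      val dims  = sqlContext.table("small_dims")  // small relation (assumed name)

      // join without a condition produces a cartesian product; the broadcast hint
      // lets the planner ship the small side to every executor instead of
      // shuffling both relations.
      val crossed = facts.join(broadcast(dims))
      crossed.explain()
      ```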
      
      Author: Cheng Hao <hao.cheng@intel.com>
      
      Closes #8652 from chenghao-intel/cartesian.
    • [SPARK-11178] Improving naming around task failures. · b960a890
      Kay Ousterhout authored
      Commit af3bc59d introduced new
      functionality so that if an executor dies for a reason that's not
      caused by one of the tasks running on the executor (e.g., due to
      pre-emption), Spark doesn't count the failure towards the maximum
      number of failures for the task.  That commit introduced some vague
      naming that this commit attempts to fix; in particular:
      
      (1) The variable "isNormalExit", which was used to refer to cases where
      the executor died for a reason unrelated to the tasks running on the
      machine, has been renamed (and reversed) to "exitCausedByApp". The problem
      with the existing name is that it's not clear (at least to me!) what it
      means for an exit to be "normal"; the new name is intended to make the
      purpose of this variable more clear.
      
      (2) The variable "shouldEventuallyFailJob" has been renamed to
      "countTowardsTaskFailures". This variable is used to determine whether
      a task's failure should be counted towards the maximum number of failures
      allowed for a task before the associated Stage is aborted. The problem
      with the existing name is that it can be misread as implying that
      the task's failure should immediately cause the stage to fail because it
      is somehow fatal (this is the case for a fetch failure, for example: if
      a task fails because of a fetch failure, there's no point in retrying,
      and the whole stage should be failed).
      
      Author: Kay Ousterhout <kayousterhout@gmail.com>
      
      Closes #9164 from kayousterhout/SPARK-11178.
    • [SPARK-11212][CORE][STREAMING] Make preferred locations support ExecutorCacheTaskLocation and update ReceiverTracker and ReceiverSchedulingPolicy to use it · 9fbd75ab
      zsxwing authored
      
      This PR includes the following changes:
      
      1. Add a new preferred location format, `executor_<host>_<executorID>` (e.g., "executor_localhost_2"), to support specifying the executor locations for RDD.
      2. Use the new preferred location format in `ReceiverTracker` to optimize the starting time of Receivers when there are multiple executors in a host.
      
      The goal of this PR is to enable the streaming scheduler to place receivers (which run as tasks) on specific executors. Basically, I want more control over the placement of the receivers so that they are evenly distributed among the executors. We tried to do this without changing the core scheduling logic, but the core scheduler does not allow specifying a particular executor as a preferred location, only a host. So if there are two executors on the same host, and I want two receivers to run on them (one on each executor), I cannot specify that. The current code only specifies the host as a preference, which may end up launching both receivers on the same executor. We tried to work around this by restarting a receiver when it did not launch on the desired executor, hoping that next time it would be started on the right one. But that causes lots of restarts and delays in correctly launching the receiver.
      
      This change allows the streaming scheduler to specify the exact executor as the preferred location. It is not exposed to users; only the streaming scheduler uses it.
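
      As a hedged sketch of the location-string format above (the helper names below are illustrative only, not Spark's internal API):

      ```scala
      // Build and parse the "executor_<host>_<executorID>" format; illustrative helpers.
      def executorLocation(host: String, executorId: String): String =
        s"executor_${host}_${executorId}"

      // executorLocation("localhost", "2") == "executor_localhost_2"

      def parseExecutorLocation(loc: String): Option[(String, String)] =
        loc.split("_", 3) match {
          case Array("executor", host, executorId) => Some((host, executorId))
          case _                                   => None
        }
      ```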
      
      Author: zsxwing <zsxwing@gmail.com>
      
      Closes #9181 from zsxwing/executor-location.
    • [SPARK-11324][STREAMING] Flag for closing Write Ahead Logs after a write · 4f030b9e
      Burak Yavuz authored
      Currently the Write Ahead Log in Spark Streaming flushes data as writes are made. S3 does not support flushing; data is only written once the stream is actually closed.
      In case of failure, the data for the last minute (the default rolling interval) will not be properly written. Therefore we need a flag to close the stream after each write, so that we achieve read-after-write consistency.
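
      A hedged configuration sketch of how such a flag might be enabled for S3-backed WALs; the exact configuration key names below are an assumption for illustration, not confirmed by this commit:

      ```scala
      import org.apache.spark.SparkConf

      val conf = new SparkConf()
        .setAppName("s3-wal-example")
        // Log received data to a write ahead log so it can be replayed after a failure.
        .set("spark.streaming.receiver.writeAheadLog.enable", "true")
        // Assumed flag names: close the WAL file after each write so that S3, which
        // has no flush, actually persists the data (read-after-write consistency).
        .set("spark.streaming.receiver.writeAheadLog.closeFileAfterWrite", "true")
        .set("spark.streaming.driver.writeAheadLog.closeFileAfterWrite", "true")
      ```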
      
      cc tdas zsxwing
      
      Author: Burak Yavuz <brkyvz@gmail.com>
      
      Closes #9285 from brkyvz/caw-wal.
    • [SPARK-10024][PYSPARK] Python API RF and GBT related params clear up · 9dba5fb2
      vectorijk authored
      Implement {RandomForest, GBT, TreeEnsemble, TreeClassifier, TreeRegressor}Params for the Python API
      in pyspark/ml/{classification, regression}.py
      
      Author: vectorijk <jiangkai@gmail.com>
      
      Closes #9233 from vectorijk/spark-10024.
    • [SPARK-11347] [SQL] Support for joinWith in Datasets · 5a5f6590
      Michael Armbrust authored
      This PR adds a new operation `joinWith` to a `Dataset`, which returns a `Tuple` for each pair where a given `condition` evaluates to true.
      
      ```scala
      case class ClassData(a: String, b: Int)
      
      val ds1 = Seq(ClassData("a", 1), ClassData("b", 2)).toDS()
      val ds2 = Seq(("a", 1), ("b", 2)).toDS()
      
      > ds1.joinWith(ds2, $"_1" === $"a").collect()
      res0: Array((ClassData("a", 1), ("a", 1)), (ClassData("b", 2), ("b", 2)))
      ```
      
      This operation is similar to the relational `join` function with one important difference in the result schema. Since `joinWith` preserves objects present on either side of the join, the result schema is similarly nested into a tuple under the column names `_1` and `_2`.
      
      This type of join can be useful both for preserving type-safety with the original object types as well as working with relational data where either side of the join has column names in common.
      
      ## Required Changes to Encoders
      In the process of working on this patch, several deficiencies to the way that we were handling encoders were discovered.  Specifically, it turned out to be very difficult to `rebind` the non-expression based encoders to extract the nested objects from the results of joins (and also typed selects that return tuples).
      
      As a result the following changes were made.
       - `ClassEncoder` has been renamed to `ExpressionEncoder` and has been improved to also handle primitive types.  Additionally, it is now possible to take arbitrary expression encoders and rewrite them into a single encoder that returns a tuple.
 - All internal operations on `Dataset`s now require an `ExpressionEncoder`.  If the user tries to pass a non-`ExpressionEncoder` in, an error will be thrown.  We can relax this requirement in the future by constructing a wrapper class that uses expressions to project the row to the expected schema, shielding the user's code from the required remapping.  This will give us a nice balance where we don't force user encoders to understand attribute references and binding, but still allow our native encoder to leverage runtime code generation to construct specific encoders for a given schema that avoid an extra remapping step.
       - Additionally, the semantics for different types of objects are now better defined.  As stated in the `ExpressionEncoder` scaladoc:
        - Classes will have their sub fields extracted by name using `UnresolvedAttribute` expressions
        and `UnresolvedExtractValue` expressions.
        - Tuples will have their subfields extracted by position using `BoundReference` expressions.
        - Primitives will have their values extracted from the first ordinal with a schema that defaults
        to the name `value`.
 - Finally, the binding lifecycle for `Encoders` has now been unified across the codebase.  Encoders are now `resolved` to the appropriate schema in the constructor of `Dataset`.  This process replaces unresolved expressions with concrete `AttributeReference` expressions.  Binding then happens on demand, when an encoder is going to be used to construct an object.  This closely mirrors the lifecycle for standard expressions when executing normal SQL or `DataFrame` queries.
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #9300 from marmbrus/datasets-tuples.
    • [SPARK-6488][MLLIB][PYTHON] Support addition/multiplication in PySpark's BlockMatrix · 3bdbbc6c
      Mike Dusenberry authored
      This PR adds addition and multiplication to PySpark's `BlockMatrix` class via `add` and `multiply` functions.
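
      For reference, a minimal sketch of the same two operations using the existing Scala `BlockMatrix` API that the new PySpark methods mirror (the data and a spark-shell style `sc` are assumed):

      ```scala
      import org.apache.spark.mllib.linalg.Matrices
      import org.apache.spark.mllib.linalg.distributed.BlockMatrix

      // One 2x2 block per matrix, values in column-major order.
      val blocksA = sc.parallelize(Seq(((0, 0), Matrices.dense(2, 2, Array(1.0, 2.0, 3.0, 4.0)))))
      val blocksB = sc.parallelize(Seq(((0, 0), Matrices.dense(2, 2, Array(5.0, 6.0, 7.0, 8.0)))))

      val a = new BlockMatrix(blocksA, 2, 2)  // rowsPerBlock = 2, colsPerBlock = 2
      val b = new BlockMatrix(blocksB, 2, 2)

      val sum     = a.add(b)       // element-wise addition
      val product = a.multiply(b)  // distributed matrix multiplication
      ```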
      
      Author: Mike Dusenberry <mwdusenb@us.ibm.com>
      
      Closes #9139 from dusenberrymw/SPARK-6488_Add_Addition_and_Multiplication_to_PySpark_BlockMatrix.
    • [SPARK-11306] Fix hang when JVM exits. · 9fc16a82
      Kay Ousterhout authored
      This commit fixes a bug where, in Standalone mode, if a task fails and crashes the JVM, the
      failure is considered a "normal failure" (meaning it's considered unrelated to the task), so
      the failure isn't counted against the task's maximum number of failures:
      https://github.com/apache/spark/commit/af3bc59d1f5d9d952c2d7ad1af599c49f1dbdaf0#diff-a755f3d892ff2506a7aa7db52022d77cL138.
      As a result, if a task fails in a way that results in it crashing the JVM, it will continuously be
      re-launched, resulting in a hang. This commit fixes that problem.
      
      This bug was introduced by #8007; andrewor14 mccheah vanzin can you take a look at this?
      
      This error is hard to trigger because we handle executor losses through 2 code paths (the second is via Akka, where Akka notices that the executor endpoint is disconnected).  In my setup, the Akka code path completes first, and doesn't have this bug, so things work fine (see my recent email to the dev list about this).  If I manually disable the Akka code path, I can see the hang (and this commit fixes the issue).
      
      Author: Kay Ousterhout <kayousterhout@gmail.com>
      
      Closes #9273 from kayousterhout/SPARK-11306.
    • [SPARK-11303][SQL] filter should not be pushed down into sample · 360ed832
      Yanbo Liang authored
      When sampling and then filtering a DataFrame, the SQL optimizer will push the filter down into the sample and produce a wrong result. This is because the sampler is computed over the original data rather than over the data remaining after the filter, so pushing the filter below the sample changes the result.
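
      A hedged sketch of the ordering issue (the query is made up for illustration, assuming a spark-shell style `sqlContext`):

      ```scala
      // Sample 10% of the rows first, then filter the sampled rows.
      val df = sqlContext.range(0, 1000)
      val sampledThenFiltered = df
        .sample(withReplacement = false, fraction = 0.1, seed = 42L)
        .filter("id % 2 = 0")

      // If the optimizer rewrote this as filter-then-sample, the sampler would run
      // over a different set of rows (only the even ids) and could return a
      // different result, so the filter must not be pushed below the sample.
      sampledThenFiltered.count()
      ```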
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #9294 from yanboliang/spark-11303.
    • [SPARK-11277][SQL] sort_array throws exception scala.MatchError · 958a0ec8
      Jia Li authored
      I'm new to Spark. I was trying out the sort_array function and hit this exception. I looked into the Spark source code and found that the root cause is that sort_array does not check for an array of NULLs. It's not meaningful to sort an array consisting entirely of NULLs anyway.
      
      I'm adding a check on the input array type to SortArray. If the array consists entirely of NULLs, there is no need to sort such an array. I have also added a test case for this.
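
      A hedged sketch of the failing case; the exact reproduction below is an assumption for illustration:

      ```scala
      // Sorting an array whose elements are all NULLs; before the fix, an expression
      // like this could hit the scala.MatchError described above because SortArray
      // did not handle an all-NULL array.
      val result = sqlContext.sql("SELECT sort_array(array(NULL, NULL))").collect()
      ```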
      
      Please help to review my fix. Thanks!
      
      Author: Jia Li <jiali@us.ibm.com>
      
      Closes #9247 from jliwork/SPARK-11277.
    • [SPARK-5569][STREAMING] fix ObjectInputStreamWithLoader for supporting load array classes. · 17f49992
      maxwell authored
      When using the Kafka DirectStream API to create a checkpoint and then restoring the saved checkpoint on restart, a ClassNotFoundException occurs.
      
      The reason for this error is that ObjectInputStreamWithLoader extends the ObjectInputStream class and overrides its resolveClass method. But instead of using Class.forName(desc, false, loader), Spark uses loader.loadClass(desc) to resolve the class, which does not work with array classes.
      
      For example:
      Class.forName("[Lorg.apache.spark.streaming.kafka.OffsetRange.", false, loader) works well, while loader.loadClass("[Lorg.apache.spark.streaming.kafka.OffsetRange") throws a ClassNotFoundException.
      
      Details of the difference between Class.forName and loader.loadClass can be found here:
      http://bugs.java.com/view_bug.do?bug_id=6446627
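
      A small, self-contained sketch of that difference using a standard JDK array class (so it runs without the Spark class mentioned above):

      ```scala
      val loader = Thread.currentThread().getContextClassLoader

      // Class.forName understands JVM array descriptors such as "[Ljava.lang.String;".
      val arrayClass = Class.forName("[Ljava.lang.String;", false, loader)

      // ClassLoader.loadClass does not resolve array descriptors and throws
      // ClassNotFoundException for the same name (see the JDK bug linked above).
      val fails = loader.loadClass("[Ljava.lang.String;")
      ```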
      
      Author: maxwell <maxwellzdm@gmail.com>
      Author: DEMING ZHU <deming.zhu@linecorp.com>
      
      Closes #8955 from maxwellzdm/master.
    • [SPARK-11270][STREAMING] Add improved equality testing for TopicAndPartition from the Kafka Streaming API · 8f888eea
      Nick Evans authored
      
      jerryshao tdas
      
      I know this is kind of minor, and I know you all are busy, but this brings this class in line with the `OffsetRange` class, and makes tests a little more concise.
      
      Instead of doing something like:
      ```
      assert topic_and_partition_instance._topic == "foo"
      assert topic_and_partition_instance._partition == 0
      ```
      
      You can do something like:
      ```
      assert topic_and_partition_instance == TopicAndPartition("foo", 0)
      ```
      
      Before:
      ```
      >>> from pyspark.streaming.kafka import TopicAndPartition
      >>> TopicAndPartition("foo", 0) == TopicAndPartition("foo", 0)
      False
      ```
      
      After:
      ```
      >>> from pyspark.streaming.kafka import TopicAndPartition
      >>> TopicAndPartition("foo", 0) == TopicAndPartition("foo", 0)
      True
      ```
      
      I couldn't find any tests - am I missing something?
      
      Author: Nick Evans <me@nicolasevans.org>
      
      Closes #9236 from manygrams/topic_and_partition_equality.
    • [SPARK-11276][CORE] SizeEstimator prevents class unloading · feb8d6a4
      Sem Mulder authored
      The SizeEstimator keeps a cache of ClassInfos, but this cache uses Class objects as keys,
      which results in strong references to the Class objects. If these classes are dynamically created,
      this prevents the corresponding ClassLoader from being GCed, leading to PermGen exhaustion.
      
      We use a Map with WeakKeys to prevent this issue.
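
      A hedged sketch of the weak-keyed caching pattern; Guava's MapMaker is one way to get such a map, and whether the patch uses exactly this class and these field names is an assumption:

      ```scala
      import java.lang.reflect.Field
      import java.util.concurrent.ConcurrentMap
      import com.google.common.collect.MapMaker

      object ClassInfoCache {
        // Illustrative stand-in for the cached per-class metadata.
        case class ClassInfo(shellSize: Long, pointerFields: List[Field])

        // Weak keys: once a Class object is only weakly reachable its entry can be
        // collected, which in turn lets the defining ClassLoader be GCed.
        val classInfos: ConcurrentMap[Class[_], ClassInfo] =
          new MapMaker().weakKeys().makeMap[Class[_], ClassInfo]()
      }
      ```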
      
      Author: Sem Mulder <sem.mulder@site2mobile.com>
      
      Closes #9244 from SemMulder/fix-sizeestimator-classunloading.
    • [SPARK-11297] Add new code tags · d77d198f
      Xusen Yin authored
      mengxr https://issues.apache.org/jira/browse/SPARK-11297
      
      Add new code tags to keep the same look and feel as previous documents.
      
      Author: Xusen Yin <yinxusen@gmail.com>
      
      Closes #9265 from yinxusen/SPARK-11297.
    • [SPARK-10654][MLLIB] Add columnSimilarities to IndexedRowMatrix · 8b292b19
      Reza Zadeh authored
      Add columnSimilarities to IndexedRowMatrix by delegating to functionality already in RowMatrix.
      
      With a test.
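
      A minimal usage sketch, assuming the new method mirrors RowMatrix's no-argument columnSimilarities (the data and a spark-shell style `sc` are made up):

      ```scala
      import org.apache.spark.mllib.linalg.Vectors
      import org.apache.spark.mllib.linalg.distributed.{IndexedRow, IndexedRowMatrix}

      val rows = sc.parallelize(Seq(
        IndexedRow(0L, Vectors.dense(1.0, 2.0, 3.0)),
        IndexedRow(1L, Vectors.dense(4.0, 5.0, 6.0))
      ))
      val mat = new IndexedRowMatrix(rows)

      // Pairwise cosine similarities between the columns of the matrix.
      val sims = mat.columnSimilarities()
      ```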
      
      Author: Reza Zadeh <reza@databricks.com>
      
      Closes #8792 from rezazadeh/colsims.