  1. Jun 30, 2014
    • SPARK-2293. Replace RDD.zip usage by map with predict inside. · 04fa1223
      Sean Owen authored
      This is the only occurrence of this pattern in the examples that needs to be replaced. This change addresses only the example.
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #1250 from srowen/SPARK-2293 and squashes the following commits:
      
      6b1b28c [Sean Owen] Compute prediction-and-label RDD directly rather than by zipping, for efficiency
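      As a point of reference, here is the shape of that change on plain Scala collections (a self-contained sketch; `FakeModel` and `Point` are invented for illustration, and in Spark the same shapes apply to RDDs):
      
      ~~~scala
      case class Point(features: Double, label: Double)
      class FakeModel { def predict(features: Double): Double = if (features > 0) 1.0 else 0.0 }
      
      val model = new FakeModel
      val points = Seq(Point(1.5, 1.0), Point(-0.5, 0.0))
      
      // Before: two derived collections zipped back together.
      val zipped = points.map(p => model.predict(p.features)).zip(points.map(_.label))
      // After: one pass that pairs each prediction with its label directly.
      val predictionAndLabel = points.map(p => (model.predict(p.features), p.label))
      ~~~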
    • [SPARK-2318] When exiting on a signal, print the signal name first. · 5fccb567
      Reynold Xin authored
      Author: Reynold Xin <rxin@apache.org>
      
      Closes #1260 from rxin/signalhandler1 and squashes the following commits:
      
      8e73552 [Reynold Xin] Uh add Logging back in ApplicationMaster.
      0402ba8 [Reynold Xin] Synchronize SignalLogger.register.
      dc70705 [Reynold Xin] Added SignalLogger to YARN ApplicationMaster.
      79a21b4 [Reynold Xin] Added license header.
      0da052c [Reynold Xin] Added the SignalLogger itself.
      e587d2e [Reynold Xin] [SPARK-2318] When exiting on a signal, print the signal name first.
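      For readers unfamiliar with the mechanism, a rough sketch of signal-name logging on the JVM, assuming the `sun.misc.Signal` API (the actual SignalLogger added here may differ in detail):
      
      ~~~scala
      import sun.misc.{Signal, SignalHandler}
      
      def registerSignalLogger(): Unit =
        Seq("TERM", "HUP", "INT").foreach { name =>
          Signal.handle(new Signal(name), new SignalHandler {
            override def handle(sig: Signal): Unit = {
              System.err.println(s"RECEIVED SIGNAL ${sig.getNumber}: SIG${sig.getName}")
              System.exit(128 + sig.getNumber) // conventional "killed by signal" exit code
            }
          })
        }
      ~~~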
    • [SPARK-2322] Exception in resultHandler should NOT crash DAGScheduler and shutdown SparkContext. · 358ae153
      Reynold Xin authored
      This should go into 1.0.1.
      
      Author: Reynold Xin <rxin@apache.org>
      
      Closes #1264 from rxin/SPARK-2322 and squashes the following commits:
      
      c77c07f [Reynold Xin] Added comment to SparkDriverExecutionException and a test case for accumulator.
      5d8d920 [Reynold Xin] [SPARK-2322] Exception in resultHandler could crash DAGScheduler and shutdown SparkContext.
    • SPARK-2077 Log serializer that actually ends up being used · 68036422
      Andrew Ash authored
      I could settle for this being a debug-level message if we also provided an example of how to turn it on in `log4j.properties`.
      
      https://issues.apache.org/jira/browse/SPARK-2077
      
      Author: Andrew Ash <andrew@andrewash.com>
      
      Closes #1017 from ash211/SPARK-2077 and squashes the following commits:
      
      580f680 [Andrew Ash] Drop to debug
      0266415 [Andrew Ash] SPARK-2077 Log serializer that actually ends up being used
    • SPARK-897: preemptively serialize closures · a484030d
      William Benton authored
      These commits cause `ClosureCleaner.clean` to attempt to serialize the cleaned closure with the default closure serializer and throw a `SparkException` if doing so fails.  This behavior is enabled by default but can be disabled at individual callsites of `SparkContext.clean`.
      
      Commit 98e01ae8 fixes some no-op assertions in `GraphSuite` that this work exposed; I'm happy to put that in a separate PR if that would be more appropriate.
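      A minimal sketch of what proactive serializability checking looks like (an illustrative helper, not the actual `ClosureCleaner` code): serialize the cleaned closure eagerly so a non-serializable capture fails at the call site rather than on executors.
      
      ~~~scala
      import java.io.{ByteArrayOutputStream, NotSerializableException, ObjectOutputStream}
      
      def ensureSerializable(closure: AnyRef): Unit =
        try {
          val oos = new ObjectOutputStream(new ByteArrayOutputStream())
          oos.writeObject(closure)
          oos.close()
        } catch {
          case e: NotSerializableException =>
            // Spark itself would throw a SparkException here.
            throw new IllegalArgumentException(s"Closure is not serializable: $e", e)
        }
      ~~~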
      
      Author: William Benton <willb@redhat.com>
      
      Closes #143 from willb/spark-897 and squashes the following commits:
      
      bceab8a [William Benton] Commented DStream corner cases for serializability checking.
      64d04d2 [William Benton] FailureSuite now checks both messages and causes.
      3b3f74a [William Benton] Stylistic and doc cleanups.
      b215dea [William Benton] Fixed spurious failures in ImplicitOrderingSuite
      be1ecd6 [William Benton] Don't check serializability of DStream transforms.
      abe816b [William Benton] Make proactive serializability checking optional.
      5bfff24 [William Benton] Adds proactive closure-serializability checking
      ed2ccf0 [William Benton] Test cases for SPARK-897.
    • [SPARK-2104] Fix task serializing issues when sort with Java non serializable class · 66135a34
      jerryshao authored
      Details can be seen in [SPARK-2104](https://issues.apache.org/jira/browse/SPARK-2104). This work is based on Reynold's work and adds some unit tests to validate the issue.
      
      @rxin, would you please take a look at this PR? Thanks a lot.
      
      Author: jerryshao <saisai.shao@intel.com>
      
      Closes #1245 from jerryshao/SPARK-2104 and squashes the following commits:
      
      c8ee362 [jerryshao] Make field partitions transient
      2b41917 [jerryshao] Minor changes
      47d763c [jerryshao] Fix task serializing issue when sort with Java non serializable class
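      As a rough illustration of the "Make field partitions transient" idea in the commits above (an invented class, not Spark's actual code): state derived from possibly non-serializable user data is marked `@transient` so that serializing the enclosing object for a task does not drag that data along.
      
      ~~~scala
      class SortBounds[K](@transient private val sampledKeys: Seq[K]) extends Serializable {
        val numPartitions: Int = 4 // only simple, serializable state crosses the wire
      }
      ~~~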
    • [SPARK-1683] Track task read metrics. · 7b71a0e0
      Kay Ousterhout authored
      This commit adds a new metric in TaskMetrics to record
      the input data size and displays this information in the UI.
      
      An earlier version of this commit also added the read time,
      which can be useful for diagnosing straggler problems,
      but unfortunately that change introduced a significant performance
      regression for jobs that don't do much computation. In order to
      track read time, we'll need to do sampling.
      
      The screenshots below show the UI with the new "Input" field,
      which I added to the stage summary page, the executor summary page,
      and the per-stage page.
      
      ![image](https://cloud.githubusercontent.com/assets/1108612/3167930/2627f92a-eb77-11e3-861c-98ea5bb7a1a2.png)
      
      ![image](https://cloud.githubusercontent.com/assets/1108612/3167936/475a889c-eb77-11e3-9706-f11c48751f17.png)
      
      ![image](https://cloud.githubusercontent.com/assets/1108612/3167948/80ebcf12-eb77-11e3-87ed-349fce6a770c.png)
      
      Author: Kay Ousterhout <kayousterhout@gmail.com>
      
      Closes #962 from kayousterhout/read_metrics and squashes the following commits:
      
      f13b67d [Kay Ousterhout] Correctly format input bytes on executor page
      8b70cde [Kay Ousterhout] Added comment about potential inaccuracy of bytesRead
      d1016e8 [Kay Ousterhout] Updated SparkListenerSuite test
      8461492 [Kay Ousterhout] Miniscule style fix
      ae04d99 [Kay Ousterhout] Remove input metrics for parallel collections
      719f19d [Kay Ousterhout] Style fixes
      bb6ec62 [Kay Ousterhout] Small fixes
      869ac7b [Kay Ousterhout] Updated Json tests
      44a0301 [Kay Ousterhout] Fixed accidentally added line
      4bd0568 [Kay Ousterhout] Added input source, renamed Hdfs to Hadoop.
      f27e535 [Kay Ousterhout] Updates based on review comments and to fix rebase
      bf41029 [Kay Ousterhout] Updated Json tests to pass
      0fc33e0 [Kay Ousterhout] Added explicit backward compatibility test
      4e52925 [Kay Ousterhout] Added Json output and associated tests.
      365400b [Kay Ousterhout] [SPARK-1683] Track task read metrics.
  2. Jun 28, 2014
    • Improve MapOutputTracker error logging. · 2053d793
      Reynold Xin authored
      Author: Reynold Xin <rxin@apache.org>
      
      Closes #1258 from rxin/mapOutputTracker and squashes the following commits:
      
      a7c95b6 [Reynold Xin] Improve MapOutputTracker error logging.
    • [SPARK-1394] Remove SIGCHLD handler in worker subprocess · 3c104c79
      Matthew Farrellee authored
      It should not be the responsibility of the worker subprocess, which
      does not intentionally fork, to try to clean up child processes. Doing
      so is complex and interferes with operations such as
      platform.system().
      
      If it is desirable to have tighter control over subprocesses, then
      namespaces should be used, and it should be the manager's responsibility
      to handle cleanup.
      
      Author: Matthew Farrellee <matt@redhat.com>
      
      Closes #1247 from mattf/SPARK-1394 and squashes the following commits:
      
      c36f308 [Matthew Farrellee] [SPARK-1394] Remove SIGCHLD handler in worker subprocess
    • [SPARK-2233] make-distribution script should list the git hash in the RELEASE file · b8f2e13a
      Guillaume Ballet authored
      This patch adds the git revision hash (short version) to the RELEASE file. It uses git instead of simply checking for the existence of .git, so as to make sure that this is a functional repository.
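      The script itself is bash; here is the same idea sketched in Scala via `scala.sys.process`, assuming a working `git` on the `PATH`:
      
      ~~~scala
      import scala.sys.process._
      
      val gitRevision: String =
        try "git rev-parse --short HEAD".!!.trim  // fails unless run in a functional repo
        catch { case _: Exception => "unknown" }
      ~~~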
      
      Author: Guillaume Ballet <gballet@gmail.com>
      
      Closes #1216 from gballet/master and squashes the following commits:
      
      eabc50f [Guillaume Ballet] Refactored the script to take comments into account.
      d93e5e8 [Guillaume Ballet] [SPARK 2233] make-distribution script now lists the git hash tag in the RELEASE file.
  3. Jun 27, 2014
    • [SPARK-2003] Fix python SparkContext example · 0e0686d3
      Matthew Farrellee authored
      Author: Matthew Farrellee <matt@redhat.com>
      
      Closes #1246 from mattf/SPARK-2003 and squashes the following commits:
      
      b12e7ca [Matthew Farrellee] [SPARK-2003] Fix python SparkContext example
    • [SPARK-2259] Fix highly misleading docs on cluster / client deploy modes · f17510e3
      Andrew Or authored
      The existing docs are highly misleading. For standalone mode, for example, it encourages the user to use standalone-cluster mode, which is not officially supported. The safeguards have been added in Spark submit itself to prevent bad documentation from leading users down the wrong path in the future.
      
      This PR is prompted by countless headaches users of Spark have run into on the mailing list.
      
      Author: Andrew Or <andrewor14@gmail.com>
      
      Closes #1200 from andrewor14/submit-docs and squashes the following commits:
      
      5ea2460 [Andrew Or] Rephrase cluster vs client explanation
      c827f32 [Andrew Or] Clarify spark submit messages
      9f7ed8f [Andrew Or] Clarify client vs cluster deploy mode + add safeguards
    • [SPARK-2307] SparkUI - storage tab displays incorrect RDDs · 21e0f77b
      Andrew Or authored
      The issue here is that the `StorageTab` listens for updates from the `StorageStatusListener`, but when a block is kicked out of the cache, `StorageStatusListener` removes it from its list. Thus, there is no way for the `StorageTab` to know whether a block has been dropped.
      
      This issue was introduced in #1080, which was itself a bug fix. Here we revert that PR and offer a different fix for the original bug (SPARK-2144).
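      A simplified sketch of the coupling problem and the shape of a fix (invented types, not the actual `StorageStatusListener` API): if the shared listener deletes evicted blocks from its map, a UI tab reading that map can never tell that a block was dropped; keeping the entry but clearing its storage level preserves the fact.
      
      ~~~scala
      import scala.collection.mutable
      
      case class BlockStatus(storageLevel: Option[String]) // None means "dropped"
      val blocks = mutable.Map[String, BlockStatus]()
      
      def onBlockEvicted(blockId: String): Unit =
        blocks(blockId) = BlockStatus(None) // mark as dropped instead of removing
      ~~~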
      
      Author: Andrew Or <andrewor14@gmail.com>
      
      Closes #1249 from andrewor14/storage-ui-fix and squashes the following commits:
      
      af019ce [Andrew Or] Fix SPARK-2307
  4. Jun 26, 2014
    • SPARK-2181: The keys for sorting the columns of Executor page in SparkUI are incorrect · 18f29b96
      witgo authored
      Author: witgo <witgo@qq.com>
      
      Closes #1135 from witgo/SPARK-2181 and squashes the following commits:
      
      39dad90 [witgo] The keys for sorting the columns of Executor page in SparkUI are incorrect
    • [SPARK-2251] fix concurrency issues in random sampler · c23f5db3
      Xiangrui Meng authored
      The following code is very likely to throw an exception:
      
      ~~~
      val rdd = sc.parallelize(0 until 111, 10).sample(false, 0.1)
      rdd.zip(rdd).count()
      ~~~
      
      because the same random number generator is used each time the sampled partitions are computed.
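      A self-contained illustration of the failure mode, using plain Scala collections and `scala.util.Random` rather than Spark: each evaluation of the "same" sampled partition advances the one shared generator and therefore selects different rows.
      
      ~~~scala
      import scala.util.Random
      
      val rng = new Random(42)
      val data = 0 until 111
      val firstEvaluation  = data.filter(_ => rng.nextDouble() < 0.1)
      val secondEvaluation = data.filter(_ => rng.nextDouble() < 0.1)
      // firstEvaluation and secondEvaluation generally differ in length and
      // content, which is exactly what breaks zip's equal-partition assumption.
      ~~~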
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #1229 from mengxr/fix-sample and squashes the following commits:
      
      f1ee3d7 [Xiangrui Meng] fix concurrency issues in random sampler
    • [SPARK-2297][UI] Make task attempt and speculation more explicit in UI. · d1636dd7
      Reynold Xin authored
      New UI:
      
      ![screen shot 2014-06-26 at 1 43 52 pm](https://cloud.githubusercontent.com/assets/323388/3404643/82b9ddc6-fd73-11e3-96f9-f7592a7aee79.png)
      
      Author: Reynold Xin <rxin@apache.org>
      
      Closes #1236 from rxin/ui-task-attempt and squashes the following commits:
      
      3b645dd [Reynold Xin] Expose attemptId in Stage.
      c0474b1 [Reynold Xin] Beefed up unit test.
      c404bdd [Reynold Xin] Fix ReplayListenerSuite.
      f56be4b [Reynold Xin] Fixed JsonProtocolSuite.
      e29e0f7 [Reynold Xin] Minor update.
      5e4354a [Reynold Xin] [SPARK-2297][UI] Make task attempt and speculation more explicit in UI.
    • Removed throwable field from FetchFailedException and added MetadataFetchFailedException · bf578dea
      Reynold Xin authored
      FetchFailedException used to have a Throwable field, but in reality we never propagate any of the throwable/exceptions back to the driver because Executor explicitly looks for FetchFailedException and then sends FetchFailed as the TaskEndReason.
      
      This pull request removes the throwable and adds a MetadataFetchFailedException that extends FetchFailedException (so now MapOutputTracker throws MetadataFetchFailedException instead).
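      A condensed sketch of the resulting hierarchy (constructor parameters simplified; the real classes carry shuffle, map, and reduce ids):
      
      ~~~scala
      class FetchFailedException(message: String) extends Exception(message)
      class MetadataFetchFailedException(message: String) extends FetchFailedException(message)
      ~~~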
      
      Author: Reynold Xin <rxin@apache.org>
      
      Closes #1227 from rxin/metadataFetchException and squashes the following commits:
      
      5cb1e0a [Reynold Xin] MetadataFetchFailedException extends FetchFailedException.
      8861ee2 [Reynold Xin] Throw MetadataFetchFailedException in MapOutputTracker.
    • [SQL] Extract the joinkeys from join condition · 981bde9b
      Cheng Hao authored
      Extract the join keys from equality conditions so that the join can be evaluated as an equi-join.
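      An illustrative toy version of the extraction over an invented mini-AST (not Catalyst's actual classes): walk a conjunctive condition and keep the equalities that pair the two join sides.
      
      ~~~scala
      sealed trait Expr
      case class Attr(name: String, side: Int) extends Expr // side 0 = left, 1 = right
      case class Equals(l: Expr, r: Expr) extends Expr
      case class And(l: Expr, r: Expr) extends Expr
      
      def joinKeys(cond: Expr): Seq[(Expr, Expr)] = cond match {
        case And(l, r)                              => joinKeys(l) ++ joinKeys(r)
        case Equals(a @ Attr(_, 0), b @ Attr(_, 1)) => Seq((a, b))
        case Equals(a @ Attr(_, 1), b @ Attr(_, 0)) => Seq((b, a))
        case _                                      => Seq.empty
      }
      ~~~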
      
      Author: Cheng Hao <hao.cheng@intel.com>
      
      Closes #1190 from chenghao-intel/extract_join_keys and squashes the following commits:
      
      4a1060a [Cheng Hao] Fix some of the small issues
      ceb4924 [Cheng Hao] Remove the redundant pattern of join keys extraction
      cec34e8 [Cheng Hao] Update the code style issues
      dcc4584 [Cheng Hao] Extract the joinkeys from join condition
    • Strip '@' symbols when merging pull requests. · f1f7385a
      Patrick Wendell authored
      Currently all of the commits with '@X' in them cause person X to
      receive e-mails every time someone makes a public fork of Spark.
      
      cc marmbrus, who requested this.
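      The merge script is Python; for reference, the same transformation sketched in Scala:
      
      ~~~scala
      def stripMentions(message: String): String =
        message.replaceAll("@(\\w+)", "$1") // "@marmbrus" becomes "marmbrus"
      ~~~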
      
      Author: Patrick Wendell <pwendell@gmail.com>
      
      Closes #1239 from pwendell/strip and squashes the following commits:
      
      22e5a97 [Patrick Wendell] Strip '@' symbols when merging pull requests.
    • Fixing AWS instance type information based upon current EC2 data · 62d4a0fa
      Zichuan Ye authored
      Fixed a problem in the previous file in which some information regarding AWS instance types was wrong. That information has been updated based upon current AWS EC2 data.
      
      Author: Zichuan Ye <jerry@tangentds.com>
      
      Closes #1156 from jerry86/master and squashes the following commits:
      
      ff36e95 [Zichuan Ye] Fixing AWS instance type information based upon current EC2 data
    • [SPARK-2286][UI] Report exception/errors for failed tasks that are not ExceptionFailure · 6587ef7c
      Reynold Xin authored
      Also added inline doc for each TaskEndReason.
      
      Author: Reynold Xin <rxin@apache.org>
      
      Closes #1225 from rxin/SPARK-2286 and squashes the following commits:
      
      6a7959d [Reynold Xin] Fix unit test failure.
      cf9d5eb [Reynold Xin] Merge branch 'master' into SPARK-2286
      a61fae1 [Reynold Xin] Move to line above ...
      38c7391 [Reynold Xin] [SPARK-2286][UI] Report exception/errors for failed tasks that are not ExceptionFailure.
    • [SPARK-2295] [SQL] Make JavaBeans nullability stricter. · 32a1ad75
      Takuya UESHIN authored
      Author: Takuya UESHIN <ueshin@happy-camper.st>
      
      Closes #1235 from ueshin/issues/SPARK-2295 and squashes the following commits:
      
      201c508 [Takuya UESHIN] Make JavaBeans nullability stricter.
    • Remove use of spark.worker.instances · 48a82a82
      Kay Ousterhout authored
      spark.worker.instances was added as part of this commit: https://github.com/apache/spark/commit/1617816090e7b20124a512a43860a21232ebf511
      
      My understanding is that SPARK_WORKER_INSTANCES is supported for backwards compatibility,
      but spark.worker.instances is never used (SparkSubmit.scala sets spark.executor.instances) so should
      not have been added.
      
      @sryza @pwendell @tgravescs LMK if I'm understanding this correctly
      
      Author: Kay Ousterhout <kayousterhout@gmail.com>
      
      Closes #1214 from kayousterhout/yarn_config and squashes the following commits:
      
      3d7c491 [Kay Ousterhout] Remove use of spark.worker.instances
    • [SPARK-2254] [SQL] ScalaReflection should mark primitive types as non-nullable. · e4899a25
      Takuya UESHIN authored
      Author: Takuya UESHIN <ueshin@happy-camper.st>
      
      Closes #1193 from ueshin/issues/SPARK-2254 and squashes the following commits:
      
      cfd6088 [Takuya UESHIN] Modify ScalaReflection.schemaFor method to return nullability of Scala Type.
    • [SPARK-2172] PySpark cannot import mllib modules in YARN-client mode · 441cdcca
      Szul, Piotr authored
      
      Include pyspark/mllib python sources as resources in the mllib.jar.
      This way they will be included in the final assembly.
      
      Author: Szul, Piotr <Piotr.Szul@csiro.au>
      
      Closes #1223 from piotrszul/branch-1.0 and squashes the following commits:
      
      69d5174 [Szul, Piotr] Removed unused resource directory src/main/resource from mllib pom
      f8c52a0 [Szul, Piotr] [SPARK-2172] PySpark cannot import mllib modules in YARN-client mode Include pyspark/mllib python sources as resources in the jar
      
      (cherry picked from commit fa167194)
      Signed-off-by: Reynold Xin <rxin@apache.org>
    • [SPARK-2284][UI] Mark all failed tasks as failures. · 4a346e24
      Reynold Xin authored
      Previously, only tasks that failed with the ExceptionFailure reason were marked as failures.
      
      Author: Reynold Xin <rxin@apache.org>
      
      Closes #1224 from rxin/SPARK-2284 and squashes the following commits:
      
      be79dbd [Reynold Xin] [SPARK-2284][UI] Mark all failed tasks as failures.
  5. Jun 25, 2014
    • [SPARK-1749] Job cancellation when SchedulerBackend does not implement killTask · b88a59a6
      Mark Hamstra authored
      This is a fixed-up version of #686 (cc @markhamstra @pwendell). The last commit (the only one I authored) reflects the changes I made relative to Mark's original patch.
      
      Author: Mark Hamstra <markhamstra@gmail.com>
      Author: Kay Ousterhout <kayousterhout@gmail.com>
      
      Closes #1219 from kayousterhout/mark-SPARK-1749 and squashes the following commits:
      
      42dfa7e [Kay Ousterhout] Got rid of terrible double-negative name
      80b3205 [Kay Ousterhout] Don't notify listeners of job failure if it wasn't successfully cancelled.
      d156d33 [Mark Hamstra] Do nothing in no-kill submitTasks
      9312baa [Mark Hamstra] code review update
      cc353c8 [Mark Hamstra] scalastyle
      e61f7f8 [Mark Hamstra] Catch UnsupportedOperationException when DAGScheduler tries to cancel a job on a SchedulerBackend that does not implement killTask
    • [SPARK-2283][SQL] Reset test environment before running PruningSuite · 7f196b00
      Cheng Lian authored
      JIRA issue: [SPARK-2283](https://issues.apache.org/jira/browse/SPARK-2283)
      
      If `PruningSuite` is run right after `HiveCompatibilitySuite`, the first test case fails because `srcpart` table is cached in-memory by `HiveCompatibilitySuite`, but column pruning is not implemented for `InMemoryColumnarTableScan` operator yet.
      
      Author: Cheng Lian <lian.cs.zju@gmail.com>
      
      Closes #1221 from liancheng/spark-2283 and squashes the following commits:
      
      dc0b663 [Cheng Lian] SPARK-2283: reset test environment before running PruningSuite
    • [SQL] SPARK-1800 Add broadcast hash join operator & associated hints. · 9d824fed
      Zongheng Yang authored
      This PR is based off Michael's [PR 734](https://github.com/apache/spark/pull/734) and includes a bunch of cleanups.
      
      Moreover, this PR also
      - makes `SparkLogicalPlan` take a `tableName: String`, which facilitates testing.
      - moves join-related tests to a single file.
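      A collections-level sketch of the broadcast hash join idea (the real operator works on RDDs of Catalyst rows): build a hash table from the small side once, then stream the large side against it, avoiding a shuffle of the large side.
      
      ~~~scala
      def broadcastHashJoin[K, A, B](small: Seq[(K, A)],
                                     large: Iterator[(K, B)]): Iterator[(K, (A, B))] = {
        val hashed: Map[K, Seq[(K, A)]] = small.groupBy(_._1) // the "broadcast" side
        large.flatMap { case (k, b) =>
          hashed.getOrElse(k, Nil).map { case (_, a) => (k, (a, b)) }
        }
      }
      ~~~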
      
      Author: Zongheng Yang <zongheng.y@gmail.com>
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #1163 from concretevitamin/auto-broadcast-hash-join and squashes the following commits:
      
      d0f4991 [Zongheng Yang] Fix bug in broadcast hash join & add test to cover it.
      af080d7 [Zongheng Yang] Fix in joinIterators()'s next().
      440d277 [Zongheng Yang] Fixes to imports; add back requiredChildDistribution (lost when merging)
      208d5f6 [Zongheng Yang] Make LeftSemiJoinHash mix in HashJoin.
      ad6c7cc [Zongheng Yang] Minor cleanups.
      814b3bf [Zongheng Yang] Merge branch 'master' into auto-broadcast-hash-join
      a8a093e [Zongheng Yang] Minor cleanups.
      6fd8443 [Zongheng Yang] Cut down size estimation related stuff.
      a4267be [Zongheng Yang] Add test for broadcast hash join and related necessary refactorings:
      0e64b08 [Zongheng Yang] Scalastyle fix.
      91461c2 [Zongheng Yang] Merge branch 'master' into auto-broadcast-hash-join
      7c7158b [Zongheng Yang] Prototype of auto conversion to broadcast hash join.
      0ad122f [Zongheng Yang] Merge branch 'master' into auto-broadcast-hash-join
      3e5d77c [Zongheng Yang] WIP: giant and messy WIP.
      a92ed0c [Michael Armbrust] Formatting.
      76ca434 [Michael Armbrust] A simple strategy that broadcasts tables only when they are found in a configuration hint.
      cf6b381 [Michael Armbrust] Split out generic logic for hash joins and create two concrete physical operators: BroadcastHashJoin and ShuffledHashJoin.
      a8420ca [Michael Armbrust] Copy records in executeCollect to avoid issues with mutable rows.
    • [SPARK-2204] Launch tasks on the proper executors in mesos fine-grained mode · 1132e472
      Sebastien Rainville authored
      The scheduler for Mesos in fine-grained mode launches tasks on the wrong executors. `MesosSchedulerBackend.resourceOffers(SchedulerDriver, List[Offer])` assumes that `TaskSchedulerImpl.resourceOffers(Seq[WorkerOffer])` returns task lists in the same order as the offers it was passed, but the current implementation of `TaskSchedulerImpl.resourceOffers` shuffles the offers to avoid always assigning tasks to the same executors. The result is that the tasks are launched on the wrong executors. The jobs are sometimes able to complete, but most of the time they fail. It seems that as soon as something goes wrong with a task, Spark cannot recover, since it is mistaken about where the tasks are actually running. Also, the more the cluster is under load, the more likely the job is to fail, because there is a higher probability that Spark is trying to launch a task on a slave that doesn't actually have enough resources, again because it is using the wrong offers.
      
      The solution is to not assume that the order in which the tasks are returned matches the offers, and simply launch the tasks on the executor decided by `TaskSchedulerImpl.resourceOffers`. One thing I am not sure about: I considered slaveId and executorId to be the same, which is true at least in my setup, but I don't know if that is always true.
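      A sketch of the fix's shape with invented types (and note it assumes slaveId == executorId, as the description above does): index offers by slave id and look each task's executor up explicitly, instead of assuming the returned task lists line up with the shuffled offer order.
      
      ~~~scala
      case class Offer(slaveId: String)
      case class TaskDesc(executorId: String)
      
      def assignTasks(offers: Seq[Offer], taskLists: Seq[Seq[TaskDesc]]): Seq[(TaskDesc, Int)] = {
        val offerIndex = offers.zipWithIndex.map { case (o, i) => o.slaveId -> i }.toMap
        for (tasks <- taskLists; task <- tasks) yield task -> offerIndex(task.executorId)
      }
      ~~~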
      
      I tested this on top of the 1.0.0 release and it seems to work fine on our cluster.
      
      Author: Sebastien Rainville <sebastien@hopper.com>
      
      Closes #1140 from sebastienrainville/fine-grained-mode-fix-master and squashes the following commits:
      
      a98b0e0 [Sebastien Rainville] Use a HashMap to retrieve the offer indices
      d6ffe54 [Sebastien Rainville] Launch tasks on the proper executors in mesos fine-grained mode
    • [SPARK-2270] Kryo cannot serialize results returned by asJavaIterable · 7ff2c754
      Reynold Xin authored
      (and thus groupBy/cogroup are broken in Java APIs when Kryo is used).
      
      @pwendell this should be merged into 1.0.1.
      
      Thanks @sorenmacbeth for reporting this & helping out with the fix.
      
      Author: Reynold Xin <rxin@apache.org>
      
      Closes #1206 from rxin/kryo-iterable-2270 and squashes the following commits:
      
      09da0aa [Reynold Xin] Updated the comment.
      009bf64 [Reynold Xin] [SPARK-2270] Kryo cannot serialize results returned by asJavaIterable (and thus groupBy/cogroup are broken in Java APIs when Kryo is used).
    • [SPARK-2258 / 2266] Fix a few worker UI bugs · 9aa60329
      Andrew Or authored
      **SPARK-2258.** Worker UI displays zombie processes if the executor throws an exception before a process is launched. This is because we only inform the Worker of the change if the process is already launched, which in this case it isn't.
      
      **SPARK-2266.** We expose "Some(app-id)" on the log page. This is fairly minor.
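      The "Some(app-id)" bug class in miniature (the app id string here is invented): interpolating an Option renders its wrapper, so it must be unwrapped before display.
      
      ~~~scala
      val appId: Option[String] = Some("app-20140625")
      println(s"Log page for ${appId}")                      // "Log page for Some(app-20140625)"
      println(s"Log page for ${appId.getOrElse("unknown")}") // "Log page for app-20140625"
      ~~~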
      
      Author: Andrew Or <andrewor14@gmail.com>
      
      Closes #1213 from andrewor14/fix-worker-ui and squashes the following commits:
      
      c1223fe [Andrew Or] Fix worker UI bugs
    • [SPARK-2242] HOTFIX: pyspark shell hangs on simple job · 5603e4c4
      Andrew Or authored
      This reverts a change introduced in 38702487, which redirected all stderr to the OS pipe instead of directly to the `bin/pyspark` shell output. This causes a simple job to hang in two ways:
      
      1. If the cluster is not configured correctly or does not have enough resources, the job hangs without producing any output, because the relevant warning messages are masked.
      2. If the stderr volume is large, this could lead to a deadlock if we redirect everything to the OS pipe. From the [python docs](https://docs.python.org/2/library/subprocess.html):
      
      ```
      Note Do not use stdout=PIPE or stderr=PIPE with this function as that can deadlock
      based on the child process output volume. Use Popen with the communicate() method
      when you need pipes.
      ```
      
      Note that we cannot remove `stdout=PIPE` in a similar way, because we currently use it to communicate the py4j port. However, it should be fine (as it has been for a long time) because we do not produce a ton of traffic through `stdout`.
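      A JVM analogue of the same deadlock hazard, with a hypothetical command name: if the child's stderr goes to a pipe nobody drains, the child blocks once the OS buffer fills; inheriting stderr, as this fix effectively does, sidesteps that.
      
      ~~~scala
      val pb = new ProcessBuilder("spark-worker-command") // hypothetical command
      pb.redirectError(ProcessBuilder.Redirect.INHERIT)   // stderr straight to ours
      val process = pb.start()                            // stdout can still be read as a pipe
      ~~~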
      
      That commit was not merged in branch-1.0, so this fix is for master only.
      
      Author: Andrew Or <andrewor14@gmail.com>
      
      Closes #1178 from andrewor14/fix-python and squashes the following commits:
      
      e68e870 [Andrew Or] Merge branch 'master' of github.com:apache/spark into fix-python
      20849a8 [Andrew Or] Tone down stdout interference message
      a09805b [Andrew Or] Return more than 1 line of error message to user
      6dfbd1e [Andrew Or] Don't swallow original exception
      0d1861f [Andrew Or] Provide more helpful output if stdout is garbled
      21c9d7c [Andrew Or] Do not mask stderr from output
    • Reynold Xin · ac06a85d
    • SPARK-2038: rename "conf" parameters in the saveAsHadoop functions with source-compatibility · acc01ab3
      CodingCat authored
      https://issues.apache.org/jira/browse/SPARK-2038
      
      to differentiate them from the SparkConf object and at the same time keep source-level compatibility
      
      Author: CodingCat <zhunansjtu@gmail.com>
      
      Closes #1137 from CodingCat/SPARK-2038 and squashes the following commits:
      
      11abeba [CodingCat] revise the comments
      7ee5712 [CodingCat] to keep the source-compatibility
      763975f [CodingCat] style fix
      d91288d [CodingCat] rename "conf" parameters in the saveAsHadoop functions
    • [BUGFIX][SQL] Should match java.math.BigDecimal when unwrapping Hive output · 22036aeb
      Cheng Lian authored
      The `BigDecimal` branch in `unwrap` matches `scala.math.BigDecimal` rather than `java.math.BigDecimal`.
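      The bug class in miniature (an illustrative `unwrap`, not the actual Hive-binding code): in Scala source, an unqualified `BigDecimal` pattern means scala.math.BigDecimal, so the java.math values Hive actually returns fall through to the wrong branch; the fix matches the Java type explicitly.
      
      ~~~scala
      def unwrap(value: Any): Any = value match {
        case d: java.math.BigDecimal => BigDecimal(d) // convert what Hive returns
        case other                   => other
      }
      ~~~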
      
      Author: Cheng Lian <lian.cs.zju@gmail.com>
      
      Closes #1199 from liancheng/javaBigDecimal and squashes the following commits:
      
      e9bb481 [Cheng Lian] Should match java.math.BigDecimal when unwrapping Hive output
    • [SPARK-2263][SQL] Support inserting MAP<K, V> to Hive tables · 8fade897
      Cheng Lian authored
      JIRA issue: [SPARK-2263](https://issues.apache.org/jira/browse/SPARK-2263)
      
      Map objects were not converted to Hive types before inserting into Hive tables.
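      A hedged sketch of the missing conversion step (an illustrative `toHiveValue` helper, not the actual HiveContext code): map keys and values must be converted recursively, and the result handed to Hive as a Java Map.
      
      ~~~scala
      import scala.collection.JavaConverters._
      
      def toHiveValue(value: Any): Any = value match {
        case m: Map[_, _] =>
          m.map { case (k, v) => toHiveValue(k) -> toHiveValue(v) }.asJava
        case other => other
      }
      ~~~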
      
      Author: Cheng Lian <lian.cs.zju@gmail.com>
      
      Closes #1205 from liancheng/spark-2263 and squashes the following commits:
      
      c7a4373 [Cheng Lian] Addressed @concretevitamin's comment
      784940b [Cheng Lian] SPARK-2263: support inserting MAP<K, V> to Hive tables