  2. Jun 22, 2016
    • [SPARK-16120][STREAMING] getCurrentLogFiles in ReceiverSuite WAL generating... · c2cebdb7
      Ahmed Mahran authored
      [SPARK-16120][STREAMING] getCurrentLogFiles in ReceiverSuite WAL generating and cleaning case uses external variable instead of the passed parameter
      
      ## What changes were proposed in this pull request?
      
      In `ReceiverSuite.scala`, in the test case "write ahead log - generating and cleaning", the inner method `getCurrentLogFiles` uses the external variable `logDirectory1` instead of the passed parameter `logDirectory`. This PR fixes it by using the passed method argument instead of the variable from the outer scope.
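      A minimal sketch of the bug pattern being fixed (names are illustrative, not Spark's actual test code): the inner helper silently captures the outer variable instead of using its parameter.

      ```scala
      // Hypothetical simplification of the ReceiverSuite bug: the helper ignores
      // its `logDirectory` parameter and always reads the outer `logDirectory1`.
      val logDirectory1 = "/tmp/dir1"

      // Buggy version: returns a path under logDirectory1 no matter what is passed.
      def getCurrentLogFilesBuggy(logDirectory: String): String =
        s"$logDirectory1/log"

      // Fixed version: uses the passed argument.
      def getCurrentLogFiles(logDirectory: String): String =
        s"$logDirectory/log"
      ```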
      
      ## How was this patch tested?
      
      The unit test was re-run and the output logs were checked for the correct paths used.
      
      tdas
      
      Author: Ahmed Mahran <ahmed.mahran@mashin.io>
      
      Closes #13825 from ahmed-mahran/b-receiver-suite-wal-gen-cln.
  3. Jun 12, 2016
    • [SPARK-15086][CORE][STREAMING] Deprecate old Java accumulator API · f51dfe61
      Sean Owen authored
      ## What changes were proposed in this pull request?
      
      - Deprecate old Java accumulator API; should use Scala now
      - Update Java tests and examples
      - Don't bother testing old accumulator API in Java 8 (too)
      - (fix a misspelling too)
      
      ## How was this patch tested?
      
      Jenkins tests
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #13606 from srowen/SPARK-15086.
  5. Jun 06, 2016
    • [MINOR] Fix Typos 'an -> a' · fd8af397
      Zheng RuiFeng authored
      ## What changes were proposed in this pull request?
      
      `an -> a`
      
      Use cmds like `find . -name '*.R' | xargs -i sh -c "grep -in ' an [^aeiou]' {} && echo {}"` to generate candidates, and review them one by one.
      
      ## How was this patch tested?
      manual tests
      
      Author: Zheng RuiFeng <ruifengz@foxmail.com>
      
      Closes #13515 from zhengruifeng/an_a.
  6. Jun 05, 2016
    • [SPARK-15748][SQL] Replace inefficient foldLeft() call with flatMap() in PartitionStatistics · 26c1089c
      Josh Rosen authored
      `PartitionStatistics` uses `foldLeft` and list concatenation (`++`) to flatten an iterator of lists, but this is extremely inefficient compared to simply doing `flatMap`/`flatten` because it performs many unnecessary object allocations. Simply replacing this `foldLeft` by a `flatMap` results in decent performance gains when constructing PartitionStatistics instances for tables with many columns.
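      The inefficiency can be seen with plain collections (an illustrative sketch, not the actual `PartitionStatistics` code): `foldLeft` with `++` copies the whole accumulated prefix at every step, while `flatten` walks each element once.

      ```scala
      // An iterator-of-lists, standing in for the per-partition statistics.
      val listsOfStats: Seq[Seq[Int]] = Seq(Seq(1, 2), Seq(3), Seq(4, 5, 6))

      // Inefficient: every ++ allocates and copies the growing prefix (quadratic).
      val slow = listsOfStats.foldLeft(Seq.empty[Int])(_ ++ _)

      // Efficient: a single pass with no repeated copying.
      val fast = listsOfStats.flatten
      ```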
      
      This patch fixes this and also makes two similar changes in MLlib and streaming to try to fix all known occurrences of this pattern.
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #13491 from JoshRosen/foldleft-to-flatmap.
  8. May 27, 2016
    • [MINOR] Fix Typos 'a -> an' · 6b1a6180
      Zheng RuiFeng authored
      ## What changes were proposed in this pull request?
      
      `a` -> `an`
      
      I use regex to generate potential error lines:
      `grep -in ' a [aeiou]' mllib/src/main/scala/org/apache/spark/ml/*/*scala`
      and review them line by line.
      
      ## How was this patch tested?
      
      local build
      `lint-java` checking
      
      Author: Zheng RuiFeng <ruifengz@foxmail.com>
      
      Closes #13317 from zhengruifeng/a_an.
  9. May 25, 2016
    • [MINOR][MLLIB][STREAMING][SQL] Fix typos · 02c8072e
      lfzCarlosC authored
      Fixed typos in source code for the [mllib], [streaming], and [SQL] components.
      
      None and obvious.
      
      Author: lfzCarlosC <lfz.carlos@gmail.com>
      
      Closes #13298 from lfzCarlosC/master.
  11. May 15, 2016
    • [SPARK-12972][CORE] Update org.apache.httpcomponents.httpclient · f5576a05
      Sean Owen authored
      ## What changes were proposed in this pull request?
      
      (Retry of https://github.com/apache/spark/pull/13049)
      
      - update to httpclient 4.5 / httpcore 4.4
      - remove some defunct exclusions
      - manage httpmime version to match
      - update selenium / httpunit to support 4.5 (possible now that Jetty 9 is used)
      
      ## How was this patch tested?
      
      Jenkins tests. Also, locally running the same test command of one Jenkins profile that failed: `mvn -Phadoop-2.6 -Pyarn -Phive -Phive-thriftserver -Pkinesis-asl ...`
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #13117 from srowen/SPARK-12972.2.
  12. May 12, 2016
    • [SPARK-14897][SQL] upgrade to jetty 9.2.16 · 81bf8708
      bomeng authored
      ## What changes were proposed in this pull request?
      
      Since Jetty 8 is EOL (end of life) and has a critical security issue [http://www.securityweek.com/critical-vulnerability-found-jetty-web-server], I think upgrading to 9 is necessary. I am using the latest 9.2, since 9.3 requires Java 8+.
      
      `javax.servlet` and `derby` were also upgraded, since Jetty 9.2 needs the corresponding versions.
      
      ## How was this patch tested?
      
      Manual test and current test cases should cover it.
      
      Author: bomeng <bmeng@us.ibm.com>
      
      Closes #12916 from bomeng/SPARK-14897.
  13. May 11, 2016
    • [SPARK-14976][STREAMING] make StreamingContext.textFileStream support wildcard · 33597810
      mwws authored
      ## What changes were proposed in this pull request?
      Make `StreamingContext.textFileStream` support wildcard paths like `/home/user/*/file`.
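      As a sketch of what wildcard support means, a pattern like `/home/user/*/file` can be matched against concrete paths with a glob; this uses plain Java NIO as a stand-in, not Spark's actual implementation.

      ```scala
      import java.nio.file.{FileSystems, Paths}

      // Glob matching: `*` matches a single path segment, so /home/user/a/file
      // matches the pattern but /home/other/a/file does not.
      val matcher = FileSystems.getDefault.getPathMatcher("glob:/home/user/*/file")
      val hit  = matcher.matches(Paths.get("/home/user/a/file"))
      val miss = matcher.matches(Paths.get("/home/other/a/file"))
      ```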
      
      ## How was this patch tested?
      I did manual testing and added a new unit test case.
      
      Author: mwws <wei.mao@intel.com>
      Author: unknown <maowei@maowei-MOBL.ccr.corp.intel.com>
      
      Closes #12752 from mwws/SPARK_FileStream.
  14. May 09, 2016
    • [MINOR][TEST][STREAMING] make "testDir" able to be cleaned after the test · 16a503cf
      mwws authored
      It's a minor bug in a test case. `val testDir = null` will stay `null` since it's immutable, so in the finally block nothing is cleaned. The other `testDir` variable, created in the try block, is only visible inside the try block.
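      A hypothetical simplification of the bug: the `val` declared inside the try block shadows the outer one, so the finally block only ever sees `null`.

      ```scala
      var cleaned: Option[String] = None

      val testDir: String = null        // outer val: immutable, stays null forever
      try {
        val testDir = "/tmp/testDir"    // shadows the outer testDir; scoped to this block
        // ... the test body uses this inner testDir ...
      } finally {
        // only the outer testDir is visible here, and it is still null
        if (testDir != null) cleaned = Some(testDir)
      }
      ```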
      
      ## How was this patch tested?
      Ran the existing test case; it passed.
      
      Author: mwws <wei.mao@intel.com>
      
      Closes #12999 from mwws/SPARK_MINOR.
  15. May 06, 2016
    • [SPARK-1239] Improve fetching of map output statuses · cc95f1ed
      Thomas Graves authored
      The main issue we are trying to solve is the memory bloat of the driver when tasks request the map output statuses. With a large number of tasks you either need a huge amount of memory on the driver or you have to repartition to a smaller number of tasks, which makes it really difficult to run jobs with, say, over 50000 tasks.
      
      The main issues that cause the memory bloat are:
      1) No flow control on sending the map output status responses. We serialize the map output statuses and then hand them off to Netty to send. Netty sends asynchronously and can't send them fast enough to keep up with incoming requests, so we end up with lots of copies of the serialized map output statuses sitting in memory. This causes huge bloat when you have tens of thousands of tasks and the map output statuses are in the tens of MB.
      2) When the initial reduce tasks start up, they all request the map output statuses from the driver. These requests are handled by multiple threads in parallel, so even though we check for a cached version, before the cache is populated many of the initial requests can end up serializing the exact same map output statuses.
      
      This patch does a couple of things:
      - When the map output status size is over a threshold (default 512K), the statuses are sent via broadcast. This means we no longer serialize a large map output status into each response, so we avoid the memory bloat: the messages are now in the 300-400 byte range, while the map output statuses themselves are broadcast. If the size is under the threshold, it is sent as before, except the message now carries a DIRECT indicator.
      - Synchronize the incoming requests so that one thread caches the serialized output and broadcasts the map output statuses, which everyone else can then reuse. This ensures we don't create multiple broadcast variables when we don't need to. To make this work, I added a second thread pool that the Dispatcher hands the requests to, so those threads can block without blocking the main dispatcher threads (which would hold up things like heartbeats).
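      The size-threshold decision described above can be sketched as follows (type and field names are hypothetical, not the actual MapOutputTracker code):

      ```scala
      // Default threshold from the description above: 512K.
      val broadcastThreshold: Int = 512 * 1024

      sealed trait MapStatusMessage
      // Small payloads travel inline, tagged with the DIRECT indicator.
      case class Direct(bytes: Array[Byte]) extends MapStatusMessage
      // Large payloads are replaced by a tiny handle; executors fetch via broadcast.
      case class ViaBroadcast(broadcastId: Long) extends MapStatusMessage

      def chooseMessage(serialized: Array[Byte], nextBroadcastId: Long): MapStatusMessage =
        if (serialized.length < broadcastThreshold) Direct(serialized)
        else ViaBroadcast(nextBroadcastId)
      ```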
      
      Note that some of the design and code was contributed by mridulm.
      
      ## How was this patch tested?
      
      Unit tests and a lot of manual testing.
      Ran with both the akka and netty RPC backends, and with dynamic allocation both on and off.
      
      One of the large jobs I used to test this was a join over 15TB of data. It had 200,000 map tasks and 20,000 reduce tasks, with executors ranging from 200 to 2000. With these changes, this job ran successfully with 5GB of memory on the driver; without them I was using 20GB and was limited to 500 reduce tasks. The job has 50MB of serialized map output statuses, and the executors took roughly the same amount of time to fetch them as before.
      
      Ran a variety of other jobs, from large wordcounts to small ones not using broadcasts.
      
      Author: Thomas Graves <tgraves@staydecay.corp.gq1.yahoo.com>
      
      Closes #12113 from tgravescs/SPARK-1239.
  17. May 03, 2016
    • [SPARK-9819][STREAMING][DOCUMENTATION] Clarify doc for invReduceFunc in... · 439e3610
      François Garillot authored
      [SPARK-9819][STREAMING][DOCUMENTATION] Clarify doc for invReduceFunc in incremental versions of reduceByWindow
      
      - that reduceFunc and invReduceFunc should be associative
      - that the intermediate result in iterated applications of inverseReduceFunc
        is its first argument
      
      Author: François Garillot <francois@garillot.net>
      
      Closes #8103 from huitseeker/issue/invReduceFuncDoc.
  19. Apr 27, 2016
    • [SPARK-14930][SPARK-13693] Fix race condition in CheckpointWriter.stop() · 450136ec
      Josh Rosen authored
      CheckpointWriter.stop() is prone to a race condition: if one thread calls `stop()` right as a checkpoint write task begins to execute, that write task may become blocked when trying to access `fs`, the shared Hadoop FileSystem, since both the `fs` getter and `stop` method synchronize on the same lock. Here's a thread-dump excerpt which illustrates the problem:
      
      ```java
      "pool-31-thread-1" #156 prio=5 os_prio=31 tid=0x00007fea02cd2000 nid=0x5c0b waiting for monitor entry [0x000000013bc4c000]
         java.lang.Thread.State: BLOCKED (on object monitor)
          at org.apache.spark.streaming.CheckpointWriter.org$apache$spark$streaming$CheckpointWriter$$fs(Checkpoint.scala:302)
          - waiting to lock <0x00000007bf53ee78> (a org.apache.spark.streaming.CheckpointWriter)
          at org.apache.spark.streaming.CheckpointWriter$CheckpointWriteHandler.run(Checkpoint.scala:224)
          at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
          at java.lang.Thread.run(Thread.java:745)
      
      "pool-1-thread-1-ScalaTest-running-MapWithStateSuite" #11 prio=5 os_prio=31 tid=0x00007fe9ff879800 nid=0x5703 waiting on condition [0x000000012e54c000]
         java.lang.Thread.State: TIMED_WAITING (parking)
          at sun.misc.Unsafe.park(Native Method)
          - parking to wait for  <0x00000007bf564568> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
          at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
          at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
          at java.util.concurrent.ThreadPoolExecutor.awaitTermination(ThreadPoolExecutor.java:1465)
          at org.apache.spark.streaming.CheckpointWriter.stop(Checkpoint.scala:291)
          - locked <0x00000007bf53ee78> (a org.apache.spark.streaming.CheckpointWriter)
          at org.apache.spark.streaming.scheduler.JobGenerator.stop(JobGenerator.scala:159)
          - locked <0x00000007bf53ea90> (a org.apache.spark.streaming.scheduler.JobGenerator)
          at org.apache.spark.streaming.scheduler.JobScheduler.stop(JobScheduler.scala:115)
          - locked <0x00000007bf53d3f0> (a org.apache.spark.streaming.scheduler.JobScheduler)
          at org.apache.spark.streaming.StreamingContext$$anonfun$stop$1.apply$mcV$sp(StreamingContext.scala:680)
          at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1219)
          at org.apache.spark.streaming.StreamingContext.stop(StreamingContext.scala:679)
          - locked <0x00000007bf516a70> (a org.apache.spark.streaming.StreamingContext)
          at org.apache.spark.streaming.StreamingContext.stop(StreamingContext.scala:644)
          - locked <0x00000007bf516a70> (a org.apache.spark.streaming.StreamingContext)
      [...]
      ```
      
      We can fix this problem by having `stop` and `fs` be synchronized on different locks: the synchronization on `stop` only needs to guard against multiple threads calling `stop` at the same time, whereas the synchronization on `fs` is only necessary for cross-thread visibility. There's only ever a single active checkpoint writer thread at a time, so we don't need to guard against concurrent access to `fs`. Thus, `fs` can simply become a `volatile` var, similar to `lastCheckpointTime`.
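      The locking change can be sketched like this (a simplified stand-in, not the real `CheckpointWriter`): `stop()` keeps its own lock, while `fs` becomes a volatile field that needs no lock at all.

      ```scala
      class CheckpointWriterSketch {
        // Volatile gives cross-thread visibility; no lock is needed because only
        // one checkpoint writer thread is ever active at a time.
        @volatile private var fs: String = null

        // Lazily (re)initialized by the writer thread, never under the stop() lock.
        def fileSystem: String = {
          if (fs == null) fs = "hdfs://namenode"   // hypothetical placeholder value
          fs
        }

        // Synchronizes only against concurrent stop() calls, so a writer blocked in
        // fileSystem can no longer contend with stop() on the same monitor.
        def stop(): Unit = synchronized { }
      }
      ```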
      
      This change should fix [SPARK-13693](https://issues.apache.org/jira/browse/SPARK-13693), a flaky `MapWithStateSuite` test suite which has recently been failing several times per day. It also results in a huge test speedup: prior to this patch, `MapWithStateSuite` took about 80 seconds to run, whereas it now runs in less than 10 seconds. For the `streaming` project's tests as a whole, they now run in ~220 seconds vs. ~354 before.
      
      /cc zsxwing and tdas for review.
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #12712 from JoshRosen/fix-checkpoint-writer-race.
  20. Apr 26, 2016
    • [MINOR][DOCS] Minor typo fixes · b208229b
      Jacek Laskowski authored
      ## What changes were proposed in this pull request?
      
      Minor typo fixes (too minor to deserve a separate JIRA)
      
      ## How was this patch tested?
      
      local build
      
      Author: Jacek Laskowski <jacek@japila.pl>
      
      Closes #12469 from jaceklaskowski/minor-typo-fixes.
  21. Apr 24, 2016
    • [SPARK-14868][BUILD] Enable NewLineAtEofChecker in checkstyle and fix lint-java errors · d34d6503
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      Spark enforces the `NewLineAtEofChecker` rule in Scala via Scalastyle, and most Java code already complies with it. This PR enforces the equivalent rule, `NewlineAtEndOfFile`, explicitly via Checkstyle. It also fixes the lint-java errors accumulated since SPARK-14465. The items are:
      
      - Adds a new line at the end of the files (19 files)
      - Fixes 25 lint-java errors (12 RedundantModifier, 6 **ArrayTypeStyle**, 2 LineLength, 2 UnusedImports, 2 RegexpSingleline, 1 ModifierOrder)
      
      ## How was this patch tested?
      
      After the Jenkins test succeeds, `dev/lint-java` should pass. (Currently, Jenkins does not run lint-java.)
      ```bash
      $ dev/lint-java
      Using `mvn` from path: /usr/local/bin/mvn
      Checkstyle checks passed.
      ```
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #12632 from dongjoon-hyun/SPARK-14868.
  22. Apr 22, 2016
    • [SPARK-14701][STREAMING] First stop the event loop, then stop the checkpoint writer in JobGenerator · fde1340c
      Liwei Lin authored
      Currently if we call `streamingContext.stop` (e.g. in a `StreamingListener.onBatchCompleted` callback) when a batch is about to complete, a `RejectedExecutionException` may get thrown from `checkPointWriter.executor`, since the `eventLoop` will try to process `DoCheckpoint` events even after the `checkPointWriter.executor` has been stopped.
      
      Please see [SPARK-14701](https://issues.apache.org/jira/browse/SPARK-14701) for details and stack traces.
      
      ## What changes were proposed in this pull request?
      
      Reversed the stopping order of `event loop` and `checkpoint writer`.
      
      ## How was this patch tested?
      
      Existing test suites.
      (No dedicated test suite was added because the change is simple to reason about.)
      
      Author: Liwei Lin <lwlin7@gmail.com>
      
      Closes #12489 from lw-lin/spark-14701.
    • [SPARK-6429] Implement hashCode and equals together · bf95b8da
      Joan authored
      ## What changes were proposed in this pull request?
      
      Implement `hashCode` and `equals` together in order to enable the corresponding scalastyle rule.
      This is a first batch, I will continue to implement them but I wanted to know your thoughts.
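      A minimal illustration of the pattern (an invented class, not from the PR): defining `equals` without a matching `hashCode` breaks the contract that equal objects must hash equally, which is what the scalastyle rule guards against.

      ```scala
      class Point(val x: Int, val y: Int) {
        override def equals(other: Any): Boolean = other match {
          case p: Point => x == p.x && y == p.y
          case _        => false
        }
        // Must be consistent with equals: equal points produce equal hashes.
        override def hashCode(): Int = 31 * x + y
      }
      ```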
      
      Author: Joan <joan@goyeau.com>
      
      Closes #12157 from joan38/SPARK-6429-HashCode-Equals.
  23. Apr 21, 2016
    • [SPARK-8393][STREAMING] JavaStreamingContext#awaitTermination() throws... · 8bd05c9d
      Sean Owen authored
      [SPARK-8393][STREAMING] JavaStreamingContext#awaitTermination() throws non-declared InterruptedException
      
      ## What changes were proposed in this pull request?
      
      `JavaStreamingContext.awaitTermination` methods should be declared as `throws[InterruptedException]` so that this exception can be handled in Java code. Note this is not just a doc change, but an API change, since now (in Java) the method has a checked exception to handle. All await-like methods in Java APIs behave this way, so seems worthwhile for 2.0.
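      In Scala, the change amounts to annotating the methods with `@throws`, which puts the exception into the JVM method signature so Java callers can (and must) handle it. A minimal sketch with a hypothetical class:

      ```scala
      class JavaStreamingContextSketch {
        // The annotation adds InterruptedException to the bytecode throws clause,
        // making it a checked exception from the Java side.
        @throws[InterruptedException]
        def awaitTermination(): Unit = ()
      }
      ```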
      
      ## How was this patch tested?
      
      Jenkins tests
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #12418 from srowen/SPARK-8393.
  25. Apr 18, 2016
    • [SPARK-14719] WriteAheadLogBasedBlockHandler should ignore BlockManager put errors · ed2de029
      Josh Rosen authored
      WriteAheadLogBasedBlockHandler will currently throw exceptions if its BlockManager `put()` calls fail, even though those calls are only performed as a performance optimization. Instead, it should log and ignore exceptions during that `put()`.
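      The "log and ignore" fix can be sketched with a small wrapper (a hypothetical helper, not the actual handler code): the best-effort `put()` is attempted, and any failure is logged instead of propagated.

      ```scala
      // Runs a best-effort action; returns the failure (if any) instead of throwing.
      def bestEffort(label: String)(body: => Unit): Option[Throwable] =
        try { body; None }
        catch { case e: Exception =>
          println(s"Could not $label; dropping the optimization: $e")  // log and move on
          Some(e)
        }
      ```

      The write-ahead-log write itself would still fail loudly; only the BlockManager store would be wrapped this way.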
      
      This is a longstanding issue that was masked by an incorrect test case. I think that we haven't noticed this in production because
      
      1. most people probably use a `MEMORY_AND_DISK` storage level, and
      2. typically, individual blocks may be small enough relative to the total storage memory such that they're able to evict blocks from previous batches, so `put()` failures here may be rare in practice.
      
      This patch fixes the faulty test and fixes the bug.
      
      /cc tdas
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #12484 from JoshRosen/received-block-hadndler-fix.
    • [SPARK-14667] Remove HashShuffleManager · 5e92583d
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      The sort shuffle manager has been the default since Spark 1.2. It is time to remove the old hash shuffle manager.
      
      ## How was this patch tested?
      Removed some tests related to the old manager.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #12423 from rxin/SPARK-14667.
  26. Apr 16, 2016
    • [MINOR] Remove inappropriate type notation and extra anonymous closure within... · 9f678e97
      hyukjinkwon authored
      [MINOR] Remove inappropriate type notation and extra anonymous closure within functional transformations
      
      ## What changes were proposed in this pull request?
      
      This PR removes
      
      - Inappropriate type notations
          For example, from
          ```scala
          words.foreachRDD { (rdd: RDD[String], time: Time) =>
          ...
          ```
          to
          ```scala
          words.foreachRDD { (rdd, time) =>
          ...
          ```
      
      - Extra anonymous closure within functional transformations.
          For example,
          ```scala
          .map(item => {
            ...
          })
          ```
      
          which can be just simply as below:
      
          ```scala
          .map { item =>
            ...
          }
          ```
      
      and corrects some obvious style nits.
      
      ## How was this patch tested?
      
      This was tested by adding rules in `scalastyle-config.xml`, though they did not end up catching every case perfectly.
      
      The rules applied were below:
      
      - For the first correction,
      
      ```xml
      <check customId="NoExtraClosure" level="error" class="org.scalastyle.file.RegexChecker" enabled="true">
          <parameters><parameter name="regex">(?m)\.[a-zA-Z_][a-zA-Z0-9]*\(\s*[^,]+s*=>\s*\{[^\}]+\}\s*\)</parameter></parameters>
      </check>
      ```
      
      ```xml
      <check customId="NoExtraClosure" level="error" class="org.scalastyle.file.RegexChecker" enabled="true">
          <parameters><parameter name="regex">\.[a-zA-Z_][a-zA-Z0-9]*\s*[\{|\(]([^\n>,]+=>)?\s*\{([^()]|(?R))*\}^[,]</parameter></parameters>
      </check>
      ```
      
      - For the second correction
      ```xml
      <check customId="TypeNotation" level="error" class="org.scalastyle.file.RegexChecker" enabled="true">
          <parameters><parameter name="regex">\.[a-zA-Z_][a-zA-Z0-9]*\s*[\{|\(]\s*\([^):]*:R))*\}^[,]</parameter></parameters>
      </check>
      ```
      
      **Those rules were not added**
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #12413 from HyukjinKwon/SPARK-style.
  29. Apr 11, 2016
    • [SPARK-14475] Propagate user-defined context from driver to executors · 6f27027d
      Eric Liang authored
      ## What changes were proposed in this pull request?
      
      This adds a new API call `TaskContext.getLocalProperty` for getting properties set in the driver from executors. These local properties are automatically propagated from the driver to executors. For streaming, the context for streaming tasks will be the initial driver context when ssc.start() is called.
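      A plain-Scala stand-in for the propagation semantics (not Spark's implementation): properties set on the "driver" before submission are snapshotted into the task's context and readable on the "executor" side.

      ```scala
      import scala.collection.mutable

      object DriverSketch {
        // Local properties set by user code on the driver.
        val localProperties = mutable.Map[String, String]()
      }

      class TaskContextSketch(captured: Map[String, String]) {
        // Mirrors the shape of the new TaskContext.getLocalProperty: null when absent.
        def getLocalProperty(key: String): String = captured.getOrElse(key, null)
      }

      // Submission snapshots the driver-side properties into the task context.
      def submitTask(): TaskContextSketch =
        new TaskContextSketch(DriverSketch.localProperties.toMap)
      ```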
      
      ## How was this patch tested?
      
      Unit tests.
      
      cc JoshRosen
      
      Author: Eric Liang <ekl@databricks.com>
      
      Closes #12248 from ericl/sc-2813.
  30. Apr 10, 2016
    • [SPARK-14455][STREAMING] Fix NPE in allocatedExecutors when calling in receiver-less scenario · 2c95e4e9
      jerryshao authored
      ## What changes were proposed in this pull request?
      
      When calling `ReceiverTracker#allocatedExecutors` in a receiver-less scenario, an NPE will be thrown, since the `ReceiverTracker` is never actually started and its `endpoint` is never created.
      
      This happens when using streaming dynamic allocation with direct Kafka.
      
      ## How was this patch tested?
      
      A local integration test was done.
      
      Author: jerryshao <sshao@hortonworks.com>
      
      Closes #12236 from jerryshao/SPARK-14455.
  31. Apr 08, 2016
    • [SPARK-14437][CORE] Use the address that NettyBlockTransferService listens to create BlockManagerId · 4d7c3592
      Shixiong Zhu authored
      ## What changes were proposed in this pull request?
      
      Here is why SPARK-14437 happens:
      `BlockManagerId` is created using `NettyBlockTransferService.hostName`, which comes from `customHostname`, and `Executor` sets `customHostname` to the hostname detected by the driver. However, the driver may not be able to detect the correct address on some complicated networks (Netty's `Channel.remoteAddress` doesn't always return a connectable address). In such cases, `BlockManagerId` is created with a wrong hostname.
      
      To fix this issue, this PR uses the `hostname` provided by `SparkEnv.create` to create `NettyBlockTransferService`, and sets `NettyBlockTransferService.hostname` to it directly. A bonus of this approach is that `NettyBlockTransferService` won't bind to `0.0.0.0`, which is much safer.
      
      ## How was this patch tested?
      
      Manually checked the bound address using local-cluster.
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #12240 from zsxwing/SPARK-14437.
  32. Apr 06, 2016
    • [SPARK-14134][CORE] Change the package name used for shading classes. · 21d5ca12
      Marcelo Vanzin authored
      The current package name uses a dash, which is a little weird but seemed
      to work. That is, until a new test tried to mock a class that references
      one of those shaded types, and then things started failing.
      
      Most changes are just noise to fix the logging configs.
      
      For reference, SPARK-8815 also raised this issue, although at the time it
      did not cause any issues in Spark, so it was not addressed.
      
      Author: Marcelo Vanzin <vanzin@cloudera.com>
      
      Closes #11941 from vanzin/SPARK-14134.