  1. Jun 10, 2015
• [SPARK-8273] Driver hangs when YARN is shut down in client mode · 5014d0ed
      WangTaoTheTonic authored
In client mode, if YARN is shut down while a Spark application is running, the application hangs after several retries (default: 30) because the exception thrown by YarnClientImpl cannot be caught at the upper level. We should exit in that case so the user is aware of the failure.
      
The exception we want to catch is [here](https://github.com/apache/hadoop/blob/branch-2.7.0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/RetryInvocationHandler.java#L122), and the fix follows the approach used in [MR](https://github.com/apache/hadoop/blob/branch-2.7.0/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/ClientServiceDelegate.java#L320).
      
      Author: WangTaoTheTonic <wangtao111@huawei.com>
      
      Closes #6717 from WangTaoTheTonic/SPARK-8273 and squashes the following commits:
      
      28752d6 [WangTaoTheTonic] catch the throwed exception
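A minimal self-contained sketch of the pattern this fix describes, where `pollYarn` stands in for YarnClientImpl.getApplicationReport and all names are illustrative rather than Spark's actual code: once Hadoop's retry handler gives up, the resulting RuntimeException has to be caught so the driver exits instead of hanging.

```
object YarnDownSketch {
  // Stand-in for YarnClientImpl.getApplicationReport, which surfaces a
  // RuntimeException after RetryInvocationHandler exhausts its retries.
  def pollYarn(): String =
    throw new RuntimeException("retries exhausted")

  def main(args: Array[String]): Unit = {
    try {
      println(s"application state: ${pollYarn()}")
    } catch {
      case e: RuntimeException =>
        // Without this catch the exception escapes to a level that never
        // handles it and the driver hangs; exiting surfaces the failure.
        System.err.println(s"YARN unreachable: ${e.getMessage}")
        sys.exit(1)
    }
  }
}
```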
• [SPARK-8290] spark-class command builder needs to read SPARK_JAVA_OPTS and SPARK_DRIVER_MEMORY properly · cb871c44
      WangTaoTheTonic authored
SPARK_JAVA_OPTS was missed when the launcher was reworked; we should add it back so that processes launched by spark-class can read it properly. The same goes for `SPARK_DRIVER_MEMORY`.
      
      The missing part is [here](https://github.com/apache/spark/blob/1c30afdf94b27e1ad65df0735575306e65d148a1/bin/spark-class#L97).
      
      Author: WangTaoTheTonic <wangtao111@huawei.com>
      Author: Tao Wang <wangtao111@huawei.com>
      
      Closes #6741 from WangTaoTheTonic/SPARK-8290 and squashes the following commits:
      
      bd89f0f [Tao Wang] make sure the memory setting is right too
      e313520 [WangTaoTheTonic] spark class command builder need read SPARK_JAVA_OPTS
• [SPARK-7261] [CORE] Change default log level to WARN in the REPL · 80043e9e
      zsxwing authored
1. Add `log4j-defaults-repl.properties`, which sets the log level to WARN.
2. When logging is initialized, check whether we are running inside the REPL. If so, use `log4j-defaults-repl.properties`.
      3. Print the following information if using `log4j-defaults-repl.properties`:
      ```
      Using Spark's repl log4j profile: org/apache/spark/log4j-defaults-repl.properties
      To adjust logging level use sc.setLogLevel("INFO")
      ```
      
      Author: zsxwing <zsxwing@gmail.com>
      
      Closes #6734 from zsxwing/log4j-repl and squashes the following commits:
      
      3835eff [zsxwing] Change default log level to WARN in the REPL
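The printed hint refers to `sc.setLogLevel`, which can be used interactively; a sketch of a spark-shell session, where `sc` is the SparkContext the REPL provides:

```
// Inside spark-shell:
sc.setLogLevel("INFO")  // bring back verbose logging when debugging
sc.setLogLevel("WARN")  // return to the quieter REPL default
```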
• [SPARK-7527] [CORE] Fix createNullValue to return the correct null values and REPL mode detection · e90c9d92
      zsxwing authored
The root cause of SPARK-7527 is that `createNullValue` returns an incompatible value, `Byte(0)`, for `char` and `boolean`.
      
This PR fixes it, corrects the main class name used for REPL mode detection, and adds a unit test to demonstrate the fix.
      
      Author: zsxwing <zsxwing@gmail.com>
      
      Closes #6735 from zsxwing/SPARK-7527 and squashes the following commits:
      
      bbdb271 [zsxwing] Use pattern match in createNullValue
      b0a0e7e [zsxwing] Remove the noisy in the test output
      903e269 [zsxwing] Remove the code for Utils.isInInterpreter == false
      5f92dc1 [zsxwing] Fix createNullValue to return the correct null values and REPL mode detection
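A sketch of the corrected behavior, using the pattern match mentioned in the squashed commits; the exact cases shown here are illustrative, not Spark's full list: each primitive type gets a type-compatible boxed default instead of a blanket `Byte(0)`.

```
// Illustrative: map each primitive class to a compatible boxed default.
def createNullValue(cls: Class[_]): AnyRef = cls match {
  case java.lang.Boolean.TYPE   => java.lang.Boolean.FALSE          // not Byte(0)
  case java.lang.Character.TYPE => java.lang.Character.valueOf('\u0000')
  case java.lang.Byte.TYPE      => java.lang.Byte.valueOf(0.toByte)
  case java.lang.Integer.TYPE   => java.lang.Integer.valueOf(0)
  case _                        => null                              // reference types
}
```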
• [SPARK-7756] [CORE] RDDOperationScope fix for IBM Java · 19e30b48
      Adam Roberts authored
IBM Java has an extra method in the output of getStackTrace(): "getStackTraceImpl", a native method. This causes two tests in "DStreamScopeSuite" to fail when running with IBM Java, because "getStackTrace" is returned as the method name instead of "map" or "filter". This commit addresses the issue by using dropWhile: given that our current method is withScope, we look for the next method that isn't ours, ignoring any methods that come before us in the stack trace (e.g. getStackTrace), regardless of how many levels deep they go.
      
      IBM:
      java.lang.Thread.getStackTraceImpl(Native Method)
      java.lang.Thread.getStackTrace(Thread.java:1117)
      org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:104)
      
      Oracle:
      PRINTING STACKTRACE!!!
      java.lang.Thread.getStackTrace(Thread.java:1552)
      org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:106)
      
I've tested this with Oracle and IBM Java; no side effects on other tests were introduced.
      
      Author: Adam Roberts <aroberts@uk.ibm.com>
      Author: a-roberts <aroberts@uk.ibm.com>
      
      Closes #6740 from a-roberts/RDDScopeStackCrawlFix and squashes the following commits:
      
      13ce390 [Adam Roberts] Ensure consistency with String equality checking
      a4fc0e0 [a-roberts] Update RDDOperationScope.scala
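A self-contained sketch of the dropWhile approach (object and method names are illustrative): skip frames until our own `withScope` frame appears, then skip our frames too, so a JVM-specific prefix such as IBM's getStackTraceImpl can never become the reported method name.

```
object ScopeSketch {
  val ourMethodName = "withScope"

  def withScope(): String = {
    val stackTrace = Thread.currentThread.getStackTrace
    // Drop the JVM-internal prefix (getStackTrace, plus getStackTraceImpl
    // on IBM Java) until our own frame appears, then drop our frames and
    // take the caller's frame.
    stackTrace
      .dropWhile(_.getMethodName != ourMethodName)
      .dropWhile(_.getMethodName == ourMethodName)
      .head
      .getMethodName
  }

  def map(): String = withScope()

  def main(args: Array[String]): Unit =
    println(map())  // prints "map" on both Oracle and IBM JDKs
}
```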
• [SPARK-8282] [SPARKR] Make number of threads used in RBackend configurable · 30ebf1a2
      Hossein authored
Read the number of threads for RBackend from the configuration.
      
      [SPARK-8282] #comment Linking with JIRA
      
      Author: Hossein <hossein@databricks.com>
      
      Closes #6730 from falaki/SPARK-8282 and squashes the following commits:
      
      33b3d98 [Hossein] Documented new config parameter
      70f2a9c [Hossein] Fixing import
      ec44225 [Hossein] Read number of threads for RBackend from configuration
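Reading the setting comes down to one lookup with a default; a sketch assuming the property name `spark.r.numRBackendThreads` documented for this change:

```
import org.apache.spark.SparkConf

val conf = new SparkConf()
// Fall back to a small default when the user has not configured anything.
val numRBackendThreads = conf.getInt("spark.r.numRBackendThreads", 2)
```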
• [SPARK-5479] [YARN] Handle --py-files correctly in YARN. · 38112905
      Marcelo Vanzin authored
      The bug description is a little misleading: the actual issue is that
      .py files are not handled correctly when distributed by YARN. They're
      added to "spark.submit.pyFiles", which, when processed by context.py,
      explicitly whitelists certain extensions (see PACKAGE_EXTENSIONS),
      and that does not include .py files.
      
      On top of that, archives were not handled at all! They made it to the
      driver's python path, but never made it to executors, since the mechanism
      used to propagate their location (spark.submit.pyFiles) only works on
      the driver side.
      
      So, instead, ignore "spark.submit.pyFiles" and just build PYTHONPATH
      correctly for both driver and executors. Individual .py files are
      placed in a subdirectory of the container's local dir in the cluster,
      which is then added to the python path. Archives are added directly.
      
      The change, as a side effect, ends up solving the symptom described
      in the bug. The issue was not that the files were not being distributed,
      but that they were never made visible to the python application
      running under Spark.
      
      Also included is a proper unit test for running python on YARN, which
      broke in several different ways with the previous code.
      
A short walkthrough of the changes:
      - SparkSubmit does not try to be smart about how YARN handles python
        files anymore. It just passes down the configs to the YARN client
        code.
      - The YARN client distributes python files and archives differently,
        placing the files in a subdirectory.
      - The YARN client now sets PYTHONPATH for the processes it launches;
        to properly handle different locations, it uses YARN's support for
        embedding env variables, so to avoid YARN expanding those at the
        wrong time, SparkConf is now propagated to the AM using a conf file
        instead of command line options.
      - Because the Client initialization code is a maze of implicit
        dependencies, some code needed to be moved around to make sure
        all needed state was available when the code ran.
      - The pyspark tests in YarnClusterSuite now actually distribute and try
        to use both a python file and an archive containing a different python
        module. Also added a yarn-client tests for completeness.
      - I cleaned up some of the code around distributing files to YARN, to
        avoid adding more copied & pasted code to handle the new files being
        distributed.
      
      Author: Marcelo Vanzin <vanzin@cloudera.com>
      
      Closes #6360 from vanzin/SPARK-5479 and squashes the following commits:
      
      bcaf7e6 [Marcelo Vanzin] Feedback.
      c47501f [Marcelo Vanzin] Fix yarn-client mode.
      46b1d0c [Marcelo Vanzin] Merge branch 'master' into SPARK-5479
      c743778 [Marcelo Vanzin] Only pyspark cares about python archives.
      c8e5a82 [Marcelo Vanzin] Actually run pyspark in client mode.
      705571d [Marcelo Vanzin] Move some code to the YARN module.
      1dd4d0c [Marcelo Vanzin] Review feedback.
      71ee736 [Marcelo Vanzin] Merge branch 'master' into SPARK-5479
      220358b [Marcelo Vanzin] Scalastyle.
      cdbb990 [Marcelo Vanzin] Merge branch 'master' into SPARK-5479
      7fe3cd4 [Marcelo Vanzin] No need to distribute primary file to executors.
      09045f1 [Marcelo Vanzin] Style.
      943cbf4 [Marcelo Vanzin] [SPARK-5479] [yarn] Handle --py-files correctly in YARN.
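A small sketch of the PYTHONPATH composition described above; the directory and archive names are assumptions for illustration, not the patch's actual values: the distributed .py files share one subdirectory that goes on the path, while archives are appended as-is.

```
import java.io.File

// Hypothetical layout: .py files were placed under pyFilesDir inside the
// container's local dir; archives are added to the path directly.
val pyFilesDir = "__pyfiles__"
val archives = Seq("deps.zip", "mylib.egg")
val pythonPath = (pyFilesDir +: archives).mkString(File.pathSeparator)
println(pythonPath)  // e.g. __pyfiles__:deps.zip:mylib.egg on Linux
```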
• [SQL] [MINOR] Fixes a minor Java example error in SQL programming guide · 8f7308f9
      Cheng Lian authored
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #6749 from liancheng/java-sample-fix and squashes the following commits:
      
      5b44585 [Cheng Lian] Fixes a minor Java example error in SQL programming guide
• [SPARK-7996] Deprecate the developer API SparkEnv.actorSystem · 2b550a52
      Ilya Ganelin authored
Changed `SparkEnv.actorSystem` to be a function so that the `@deprecated` flag can be used with it, and added a deprecation message.
      
      Author: Ilya Ganelin <ilya.ganelin@capitalone.com>
      
      Closes #6731 from ilganeli/SPARK-7996 and squashes the following commits:
      
      be43817 [Ilya Ganelin] Restored to val
      9ed89e7 [Ilya Ganelin] Added a version info for deprecation
      9610b08 [Ilya Ganelin] Converted actorSystem to function and added deprecated flag
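The deprecation pattern in question, as a generic sketch rather than the actual SparkEnv source: the member is exposed through an accessor that carries the `@deprecated` annotation, so callers get a compiler warning.

```
class EnvSketch {
  private val underlying: AnyRef = new Object

  @deprecated("Actor system support is deprecated", "1.4.0")
  def actorSystem: AnyRef = underlying  // callers now compile with a warning
}
```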
• [SPARK-8215] [SPARK-8212] [SQL] add leaf math expression for e and pi · c6ba7cca
      Daoyuan Wang authored
      Author: Daoyuan Wang <daoyuan.wang@intel.com>
      
      Closes #6716 from adrian-wang/epi and squashes the following commits:
      
      e2e8dbd [Daoyuan Wang] move tests
      11b351c [Daoyuan Wang] add tests and remove pu
      db331c9 [Daoyuan Wang] py style
      599ddd8 [Daoyuan Wang] add py
      e6783ef [Daoyuan Wang] register function
      82d426e [Daoyuan Wang] add function entry
      dbf3ab5 [Daoyuan Wang] add PI and E
• [SPARK-7886] Added unit test for HAVING aggregate pushdown. · e90035e6
      Reynold Xin authored
      This is a followup to #6712.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #6739 from rxin/6712-followup and squashes the following commits:
      
      fd9acfb [Reynold Xin] [SPARK-7886] Added unit test for HAVING aggregate pushdown.
• [SPARK-7886] Use FunctionRegistry for built-in expressions in HiveContext. · 57c60c5b
      Reynold Xin authored
      This builds on #6710 and also uses FunctionRegistry for function lookup in HiveContext.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #6712 from rxin/udf-registry-hive and squashes the following commits:
      
      f4c2df0 [Reynold Xin] Fixed style violation.
      0bd4127 [Reynold Xin] Fixed Python UDFs.
      f9a0378 [Reynold Xin] Disable one more test.
      5609494 [Reynold Xin] Disable some failing tests.
      4efea20 [Reynold Xin] Don't check children resolved for UDF resolution.
      2ebe549 [Reynold Xin] Removed more hardcoded functions.
      aadce78 [Reynold Xin] [SPARK-7886] Use FunctionRegistry for built-in expressions in HiveContext.
  2. Jun 09, 2015
  3. Jun 08, 2015
• [SPARK-6820] [SPARKR] Convert NAs to null type in SparkR DataFrames · a5c52c1a
      hqzizania authored
      Author: hqzizania <qian.huang@intel.com>
      
      Closes #6190 from hqzizania/R and squashes the following commits:
      
      1641f9e [hqzizania] fixes and add test units
      bb3411a [hqzizania] Convert NAs to null type in SparkR DataFrames
• [SPARK-8168] [MLLIB] Add Python friendly constructor to PipelineModel · 82870d50
      Xiangrui Meng authored
      This makes the constructor callable in Python. dbtsai
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #6709 from mengxr/SPARK-8168 and squashes the following commits:
      
      f871de4 [Xiangrui Meng] Add Python friendly constructor to PipelineModel
• [SPARK-8162] [HOTFIX] Fix NPE in spark-shell · f3eec92c
      Andrew Or authored
      This was caused by this commit: f2713478
      
This patch does not attempt to fix the root cause of why the `VisibleForTesting` annotation causes an NPE in the shell. We should find a way to fix that separately.
      
      Author: Andrew Or <andrew@databricks.com>
      
      Closes #6711 from andrewor14/fix-spark-shell and squashes the following commits:
      
      bf62ecc [Andrew Or] Prevent NPE in spark-shell
• [SPARK-8148] Do not use FloatType in partition column inference. · 51853891
      Reynold Xin authored
      Use DoubleType instead to be more stable and robust.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #6692 from rxin/SPARK-8148 and squashes the following commits:
      
      6742ecc [Reynold Xin] [SPARK-8148] Do not use FloatType in partition column inference.
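One line illustrates why a double is the safer target (a generic precision example, not code from the patch): a float cannot represent many decimal partition values exactly.

```
println(1.1f.toDouble)  // 1.100000023841858 -- float precision artifact
println(1.1)            // 1.1 -- the double holds the intended value
```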
• [SQL][minor] remove duplicated cases in `DecimalPrecision` · fe7669d3
      Wenchen Fan authored
      We already have a rule to do type coercion for fixed decimal and unlimited decimal in `WidenTypes`, so we don't need to handle them in `DecimalPrecision`.
      
      Author: Wenchen Fan <cloud0fan@outlook.com>
      
      Closes #6698 from cloud-fan/fix and squashes the following commits:
      
      413ad4a [Wenchen Fan] remove duplicated cases
• [SPARK-8121] [SQL] Fixes InsertIntoHadoopFsRelation job initialization for Hadoop 1.x · bbdfc0a4
      Cheng Lian authored
      For Hadoop 1.x, `TaskAttemptContext` constructor clones the `Configuration` argument, thus configurations done in `HadoopFsRelation.prepareForWriteJob()` are not populated to *driver* side `TaskAttemptContext` (executor side configurations are properly populated). Currently this should only affect Parquet output committer class configuration.
      
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #6669 from liancheng/spark-8121 and squashes the following commits:
      
      73819e8 [Cheng Lian] Minor logging fix
      fce089c [Cheng Lian] Adds more logging
      b6f78a6 [Cheng Lian] Fixes compilation error introduced while rebasing
      963a1aa [Cheng Lian] Addresses @yhuai's comment
      c3a0b1a [Cheng Lian] Fixes InsertIntoHadoopFsRelation job initialization
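The Hadoop 1.x pitfall in isolation, as a runnable sketch built on Hadoop's `Configuration` copy constructor, which behaves like the old `TaskAttemptContext` constructor; the key name here is illustrative:

```
import org.apache.hadoop.conf.Configuration

val original = new Configuration(false)
val cloned = new Configuration(original)  // clones current state, as in Hadoop 1.x
original.set("spark.sql.parquet.output.committer.class", "SomeCommitter")
// Configuration done after construction never reaches the clone:
assert(cloned.get("spark.sql.parquet.output.committer.class") == null)
```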
• [SPARK-8158] [SQL] several fixes for HiveShim · ed5c2dcc
      Daoyuan Wang authored
      1. explicitly import implicit conversion support.
      2. use .nonEmpty instead of .size > 0
      3. use val instead of var
4. fix comment indentation
      
      Author: Daoyuan Wang <daoyuan.wang@intel.com>
      
      Closes #6700 from adrian-wang/shimsimprove and squashes the following commits:
      
      d22e108 [Daoyuan Wang] several fix for HiveShim
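Items 2 and 3 above, as a tiny generic before/after:

```
val partitions = Seq(1, 2, 3)

// Before: a size comparison plus a var that never needs to change.
var countBefore = 0
if (partitions.size > 0) countBefore = partitions.size

// After: nonEmpty states the intent, and val makes immutability explicit.
val countAfter = if (partitions.nonEmpty) partitions.size else 0
```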
• [MINOR] change new Exception to IllegalArgumentException · 49f19b95
      Daoyuan Wang authored
      Author: Daoyuan Wang <daoyuan.wang@intel.com>
      
      Closes #6434 from adrian-wang/joinerr and squashes the following commits:
      
      ee1b64f [Daoyuan Wang] break line
      f7c53e9 [Daoyuan Wang] to IllegalArgumentException
      f8dea2d [Daoyuan Wang] sys.err to IllegalStateException
      be82259 [Daoyuan Wang] change new exception to sys.err
• [SMALL FIX] Return null when an EOFException is caught · 149d1b28
      Mingfei authored
Return null when an EOFException is caught, just like the function "asKeyValueIterator" in this class.
      
      Author: Mingfei <mingfei.shi@intel.com>
      
      Closes #6703 from shimingfei/returnNull and squashes the following commits:
      
      205deec [Mingfei] return null if catch EOFException
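The pattern, reduced to a generic sketch (this is not the exact Spark method): treat end-of-stream as "no more elements" rather than an error.

```
import java.io.{EOFException, ObjectInputStream}

// Mirror asKeyValueIterator in the same class: EOF means "done", so
// return null instead of letting the exception propagate.
def readNextOrNull(in: ObjectInputStream): AnyRef =
  try in.readObject()
  catch { case _: EOFException => null }
```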
• [SPARK-8140] [MLLIB] Remove empty model check in StreamingLinearAlgorithm · e3e9c703
      MechCoder authored
1. Prevent creating a map over the data just to find numFeatures.
2. If the model is empty, initialize it with a zero vector of size numFeatures (sketched below).
      
      Author: MechCoder <manojkumarsivaraj334@gmail.com>
      
      Closes #6684 from MechCoder/spark-8140 and squashes the following commits:
      
      7fbf5f9 [MechCoder] [SPARK-8140] Remove empty model check in StreamingLinearAlgorithm And other minor cosmits
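The zero-vector initialization described in item 2, sketched with MLlib's vector factory; the feature count is illustrative:

```
import org.apache.spark.mllib.linalg.Vectors

// If the model has no weights yet, start from a zero vector sized by
// the number of features observed in the first batch.
val numFeatures = 3
val initialWeights = Vectors.zeros(numFeatures)
println(initialWeights)  // [0.0,0.0,0.0]
```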
• [SPARK-8126] [BUILD] Use custom temp directory during build. · a1d9e5cc
      Marcelo Vanzin authored
Even with all the efforts to clean up the temp directories created by
      unit tests, Spark leaves a lot of garbage in /tmp after a test run.
      This change overrides java.io.tmpdir to place those files under the
      build directory instead.
      
After a full sbt unit test run, I was left with > 400 MB of temp
      files. Since they're now under the build dir, it's much easier to
      clean them up.
      
      Also make a slight change to a unit test to make it not pollute the
      source directory with test data.
      
      Author: Marcelo Vanzin <vanzin@cloudera.com>
      
      Closes #6674 from vanzin/SPARK-8126 and squashes the following commits:
      
      0f8ad41 [Marcelo Vanzin] Make sure tmp dir exists when tests run.
      643e916 [Marcelo Vanzin] [MINOR] [BUILD] Use custom temp directory during build.
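The mechanism, reduced to a sketch; the path is an example, and the real change wires the property through the sbt and maven builds rather than application code:

```
import java.io.File

val buildTmp = new File("target/tmp")
buildTmp.mkdirs()
// Anything consulting java.io.tmpdir from here on lands under the build
// directory, so a normal clean sweeps the test garbage away too.
System.setProperty("java.io.tmpdir", buildTmp.getAbsolutePath)
println(System.getProperty("java.io.tmpdir"))
```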
• [SPARK-7939] [SQL] Add conf to enable/disable partition column type inference · 03ef6be9
      Liang-Chi Hsieh authored
      JIRA: https://issues.apache.org/jira/browse/SPARK-7939
      
      Author: Liang-Chi Hsieh <viirya@gmail.com>
      
      Closes #6503 from viirya/disable_partition_type_inference and squashes the following commits:
      
      3e90470 [Liang-Chi Hsieh] Default to enable type inference and update docs.
      455edb1 [Liang-Chi Hsieh] Merge remote-tracking branch 'upstream/master' into disable_partition_type_inference
      9a57933 [Liang-Chi Hsieh] Add conf to enable/disable partition column type inference.
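A usage sketch, assuming the flag name that shipped with this change, `spark.sql.sources.partitionColumnTypeInference.enabled` (default true); `sqlContext` is the shell-provided SQLContext:

```
// Turn inference off so partition columns stay strings, then back on.
sqlContext.setConf("spark.sql.sources.partitionColumnTypeInference.enabled", "false")
sqlContext.setConf("spark.sql.sources.partitionColumnTypeInference.enabled", "true")
```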
• [SPARK-7705] [YARN] Cleanup of .sparkStaging directory fails if application is killed · eacd4a92
      linweizhong authored
As I have tested, if we cancel or kill the app, the final status may be undefined, killed, or succeeded, so we should clean up the staging directory when the ApplicationMaster exits with any final application status.
      
      Author: linweizhong <linweizhong@huawei.com>
      
      Closes #6409 from Sephiroth-Lin/SPARK-7705 and squashes the following commits:
      
      3a5a0a5 [linweizhong] Update
      83dc274 [linweizhong] Update
      923d44d [linweizhong] Update
      0dd7c2d [linweizhong] Update
      b76a102 [linweizhong] Update code style
      7846b69 [linweizhong] Update
      bd6cf0d [linweizhong] Refactor
      aed9f18 [linweizhong] Clean up stagingDir when launch app on yarn
      95595c3 [linweizhong] Cleanup of .sparkStaging directory when AppMaster exit at any final application status
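The shape of the fix as a hedged sketch (helper and status names are illustrative): perform the cleanup on any final status, not only on success.

```
import java.io.File

// Recursively delete the staging dir regardless of the final status.
def cleanupStagingDir(dir: File): Unit = {
  if (dir.isDirectory) {
    Option(dir.listFiles()).getOrElse(Array.empty[File]).foreach(cleanupStagingDir)
  }
  dir.delete()
}

def onAppMasterExit(finalStatus: String, stagingDir: File): Unit = {
  try println(s"unregistering with final status $finalStatus")
  finally cleanupStagingDir(stagingDir)  // UNDEFINED, KILLED, or SUCCEEDED alike
}
```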
• [SPARK-4761] [DOC] [SQL] kryo default setting in SQL Thrift server · 10fc2f6f
      Daoyuan Wang authored
This is a follow-up of #3621.
      
      /cc liancheng pwendell
      
      Author: Daoyuan Wang <daoyuan.wang@intel.com>
      
      Closes #6639 from adrian-wang/kryodoc and squashes the following commits:
      
      3c4b1cf [Daoyuan Wang] [DOC] kryo default setting in SQL Thrift server
• [SPARK-8154][SQL] Remove Term/Code type aliases in code generation. · 72ba0fc4
      Reynold Xin authored
      From my perspective as a code reviewer, I find them more confusing than using String directly.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #6694 from rxin/SPARK-8154 and squashes the following commits:
      
      4e5056c [Reynold Xin] [SPARK-8154][SQL] Remove Term/Code type aliases in code generation.
  4. Jun 07, 2015
• [SPARK-8149][SQL] Break ExpressionEvaluationSuite down to multiple files · f74be744
      Reynold Xin authored
Also moved a few files in the expressions package around to match the test suites.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #6693 from rxin/expr-refactoring and squashes the following commits:
      
      857599f [Reynold Xin] Fixed style violation.
      c0eb74b [Reynold Xin] Fixed compilation.
      b3a40f8 [Reynold Xin] Refactored expression test suites.
• [SPARK-8117] [SQL] Push codegen implementation into each Expression · 5e7b6b67
      Davies Liu authored
This PR moves the codegen implementation of expressions into the Expression class itself, making it easier to manage.
      
      It introduces two APIs in Expression:
      ```
      def gen(ctx: CodeGenContext): GeneratedExpressionCode
      def genCode(ctx: CodeGenContext, ev: GeneratedExpressionCode): Code
      ```
      
`gen(ctx)` calls `genCode(ctx, ev)` to generate Java source code for the current expression. An expression needs to override `genCode()`.
      
      Here are the types:
      ```
type Term = String
type Code = String
      
      /**
       * Java source for evaluating an [[Expression]] given a [[Row]] of input.
       */
case class GeneratedExpressionCode(
    var code: Code,
    nullTerm: Term,
    primitiveTerm: Term,
    objectTerm: Term)
      /**
 * A context for codegen, used to track the expressions that are not supported
 * by codegen so they can be evaluated directly. Each unsupported expression is
 * appended to the end of `references`; its position is recorded in the generated
 * code and used to access and evaluate it.
       */
      class CodeGenContext {
        /**
 * Holds all the expressions that do not support codegen; they will be evaluated directly.
         */
        val references: Seq[Expression] = new mutable.ArrayBuffer[Expression]()
      }
      ```
      
This is basically #6660, but with the style violations and compilation failure fixed.
      
      Author: Davies Liu <davies@databricks.com>
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #6690 from rxin/codegen and squashes the following commits:
      
      e1368c2 [Reynold Xin] Fixed tests.
      73db80e [Reynold Xin] Fixed compilation failure.
      19d6435 [Reynold Xin] Fixed style violation.
      9adaeaf [Davies Liu] address comments
      f42c732 [Davies Liu] improve coverage and tests
      bad6828 [Davies Liu] address comments
      e03edaa [Davies Liu] consts fold
      86fac2c [Davies Liu] fix style
      02262c9 [Davies Liu] address comments
      b5d3617 [Davies Liu] Merge pull request #5 from rxin/codegen
      48c454f [Reynold Xin] Some code gen update.
      2344bc0 [Davies Liu] fix test
      12ff88a [Davies Liu] fix build
      c5fb514 [Davies Liu] rename
      8c6d82d [Davies Liu] update docs
      b145047 [Davies Liu] fix style
      e57959d [Davies Liu] add type alias
      3ff25f8 [Davies Liu] refactor
      593d617 [Davies Liu] pushing codegen into Expression
• [SPARK-2808] [STREAMING] [KAFKA] cleanup tests from · b127ff8a
      cody koeninger authored
See if requiring producer acks eliminates the need for waitUntilLeaderOffset calls in tests.
      
      Author: cody koeninger <cody@koeninger.org>
      
      Closes #5921 from koeninger/kafka-0.8.2-test-cleanup and squashes the following commits:
      
      1e89dc8 [cody koeninger] Merge branch 'master' into kafka-0.8.2-test-cleanup
      4662828 [cody koeninger] [Streaming][Kafka] filter mima issue for removal of method from private test class
      af1e083 [cody koeninger] Merge branch 'master' into kafka-0.8.2-test-cleanup
      4298ac2 [cody koeninger] [Streaming][Kafka] update comment to trigger jenkins attempt
      1274afb [cody koeninger] [Streaming][Kafka] see if requiring producer acks eliminates the need for waitUntilLeaderOffset calls in tests
• [SPARK-7733] [CORE] [BUILD] Update build, code to use Java 7 for 1.5.0+ · e84815dc
      Sean Owen authored
      Update build to use Java 7, and remove some comments and special-case support for Java 6.
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #6265 from srowen/SPARK-7733 and squashes the following commits:
      
      59bda4e [Sean Owen] Update build to use Java 7, and remove some comments and special-case support for Java 6