  1. Aug 04, 2017
  2. Aug 03, 2017
    • Christiam Camacho's avatar
      Fix Java SimpleApp spark application · dd72b10a
      Christiam Camacho authored
      ## What changes were proposed in this pull request?
      
      Add missing import and missing parentheses to invoke `SparkSession::text()`.
      
      ## How was this patch tested?
      
      Built the code for this application and ran jekyll locally per docs/README.md.
      
      Author: Christiam Camacho <camacho@ncbi.nlm.nih.gov>
      
      Closes #18795 from christiam/master.
      dd72b10a
    • louis lyu's avatar
      [SPARK-20713][SPARK CORE] Convert CommitDenied to TaskKilled. · bb7afb4e
      louis lyu authored
      ## What changes were proposed in this pull request?
      
      In the executor, toTaskFailedReason is converted to toTaskCommitDeniedReason to avoid inconsistent task state. In JobProgressListener, a case for TaskCommitDenied is added so that the stage's killed count is incremented instead of its failed count.
      This pull request is picked up from https://github.com/apache/spark/pull/18070, using commit ff93ade0248baf3793ab55659042f9d7b8efbdef.
      The case match for TaskCommitDenied, which increments the killed count correctly, was added on top of pull/18070.
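
      A rough sketch of the accounting change (illustrative only; `StageCounters` and `record` are made-up stand-ins for the real JobProgressListener state):

      ```scala
      import org.apache.spark.{TaskCommitDenied, TaskEndReason, TaskFailedReason}

      // Hypothetical counters standing in for the listener's per-stage data.
      final case class StageCounters(var killed: Int = 0, var failed: Int = 0)

      def record(reason: TaskEndReason, c: StageCounters): Unit = reason match {
        // TaskCommitDenied extends TaskFailedReason, so it must be matched first
        // to land in the killed counter instead of the failed counter.
        case _: TaskCommitDenied => c.killed += 1
        case _: TaskFailedReason => c.failed += 1
        case _                   => () // success path, not counted here
      }
      ```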
      
      ## How was this patch tested?
      
      Ran a normal speculative job and checked the Stage UI page; no failed tasks should be displayed.
      
      Author: louis lyu <llyu@c02tk24rg8wl-lm.champ.corp.yahoo.com>
      
      Closes #18819 from nlyu/SPARK-20713.
      bb7afb4e
    • Dilip Biswal's avatar
      [SPARK-21599][SQL] Collecting column statistics for datasource tables may fail... · 13785daa
      Dilip Biswal authored
      [SPARK-21599][SQL] Collecting column statistics for datasource tables may fail with java.util.NoSuchElementException
      
      ## What changes were proposed in this pull request?
      For datasource tables (when they are stored in a non-Hive-compatible way), the schema information is recorded as table properties in the Hive metastore. The alterTableStats method needs to get the schema information from the table properties for data source tables before recording the column-level statistics. Currently, we don't get the correct schema information and fail with a java.util.NoSuchElementException.
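
      A minimal repro of the scenario might look like this (a sketch, assuming an active SparkSession `spark`; table and column names are made up):

      ```scala
      // A datasource table whose schema is stored as table properties in the Hive
      // metastore, followed by column-level statistics collection.
      spark.sql("CREATE TABLE t_ds (c1 INT, c2 STRING) USING parquet")
      spark.sql("ANALYZE TABLE t_ds COMPUTE STATISTICS FOR COLUMNS c1, c2")
      // Before this fix, the ANALYZE statement could fail with java.util.NoSuchElementException.
      ```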
      
      ## How was this patch tested?
      A new test case is added in StatisticsSuite.
      
      Author: Dilip Biswal <dbiswal@us.ibm.com>
      
      Closes #18804 from dilipbiswal/datasource_stats.
      13785daa
    • hyukjinkwon's avatar
      [SPARK-21602][R] Add map_keys and map_values functions to R · 97ba4918
      hyukjinkwon authored
      ## What changes were proposed in this pull request?
      
      This PR adds `map_values` and `map_keys` to the R API.
      
      ```r
      > df <- createDataFrame(cbind(model = rownames(mtcars), mtcars))
      > tmp <- mutate(df, v = create_map(df$model, df$cyl))
      > head(select(tmp, map_keys(tmp$v)))
      ```
      ```
              map_keys(v)
      1         Mazda RX4
      2     Mazda RX4 Wag
      3        Datsun 710
      4    Hornet 4 Drive
      5 Hornet Sportabout
      6           Valiant
      ```
      ```r
      > head(select(tmp, map_values(tmp$v)))
      ```
      ```
        map_values(v)
      1             6
      2             6
      3             4
      4             6
      5             8
      6             6
      ```
      
      ## How was this patch tested?
      
      Manual tests and unit tests in `R/pkg/tests/fulltests/test_sparkSQL.R`
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #18809 from HyukjinKwon/map-keys-values-r.
      97ba4918
    • Chang chen's avatar
      [SPARK-21605][BUILD] Let IntelliJ IDEA correctly detect Language level and Target byte code version · e7c59b41
      Chang chen authored
      With SPARK-21592, removing the source and target properties from maven-compiler-plugin makes IntelliJ IDEA fall back to the default language level and target bytecode version, which are 1.4.
      
      This change adds the source, target and encoding properties back to fix this issue. In my testing, it doesn't increase compile time.
      
      Author: Chang chen <baibaichen@gmail.com>
      
      Closes #18808 from baibaichen/feature/idea-fix.
      e7c59b41
    • zuotingbing's avatar
      [SPARK-21611][SQL] Error class name for log in several classes. · 32214706
      zuotingbing authored
      ## What changes were proposed in this pull request?
      
      The wrong class name appears in log messages from several classes. For example:
      `2017-08-02 16:43:37,695 INFO CompositeService: Operation log root directory is created: /tmp/mr/operation_logs`
      The message `Operation log root directory is created ...` is actually logged from `SessionManager.java`.
      
      ## How was this patch tested?
      
      manual tests
      
      Author: zuotingbing <zuo.tingbing9@zte.com.cn>
      
      Closes #18816 from zuotingbing/SPARK-21611.
      32214706
    • zuotingbing's avatar
      [SPARK-21604][SQL] if the object extends Logging, i suggest to remove the var LOG which is useless. · f13dbb3a
      zuotingbing authored
      ## What changes were proposed in this pull request?
      
      If an object extends Logging, the separate LOG variable is redundant and should be removed.
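
      A minimal sketch of the pattern this cleanup targets (the object and package names are hypothetical; `Logging` is private[spark], so this only applies inside Spark's own code base):

      ```scala
      package org.apache.spark.example // hypothetical

      import org.apache.spark.internal.Logging

      object MyService extends Logging {
        // Redundant: the Logging trait already provides a logger and the
        // logInfo/logWarning/logError helpers, so a separate LOG field is useless.
        // private val LOG = org.slf4j.LoggerFactory.getLogger(getClass)

        def start(): Unit = logInfo("service started")
      }
      ```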
      
      ## How was this patch tested?
      
      Existing tests
      
      Author: zuotingbing <zuo.tingbing9@zte.com.cn>
      
      Closes #18811 from zuotingbing/SPARK-21604.
      f13dbb3a
    • Ayush Singh's avatar
      [SPARK-21615][ML][MLLIB][DOCS] Fix broken redirect in collaborative filtering... · 7c206dd3
      Ayush Singh authored
      [SPARK-21615][ML][MLLIB][DOCS] Fix broken redirect in collaborative filtering docs to databricks training repo
      
      ## What changes were proposed in this pull request?
      * The current [MLlib Collaborative Filtering tutorial](https://spark.apache.org/docs/latest/mllib-collaborative-filtering.html) points to broken links on the old Databricks website.
      * Databricks moved all of this content to a [git repo](https://github.com/databricks/spark-training).
      * Two links need to be fixed:
        * [training exercises](https://databricks-training.s3.amazonaws.com/index.html)
        * [personalized movie recommendation with spark.mllib](https://databricks-training.s3.amazonaws.com/movie-recommendation-with-mllib.html)
      
      ## How was this patch tested?
      Generated docs locally
      
      Author: Ayush Singh <singhay@ccs.neu.edu>
      
      Closes #18821 from singhay/SPARK-21615.
      7c206dd3
  3. Aug 02, 2017
    • Shixiong Zhu's avatar
      [SPARK-21546][SS] dropDuplicates should ignore watermark when it's not a key · 0d26b3aa
      Shixiong Zhu authored
      ## What changes were proposed in this pull request?
      
      Right now, when the watermark column is not one of the `dropDuplicates` keys, the query crashes. This PR fixes that issue.
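
      An illustrative structured streaming query that hits this case (column names are made up):

      ```scala
      import org.apache.spark.sql.SparkSession

      val spark = SparkSession.builder().appName("dedup-example").master("local[*]").getOrCreate()

      // "eventTime" carries the watermark but is NOT one of the dropDuplicates keys,
      // which is the case that used to crash.
      val events = spark.readStream.format("rate").load()
        .withColumnRenamed("timestamp", "eventTime")
        .withColumnRenamed("value", "id")

      val deduped = events
        .withWatermark("eventTime", "10 minutes")
        .dropDuplicates("id")
      ```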
      
      ## How was this patch tested?
      
      The new unit test.
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #18822 from zsxwing/SPARK-21546.
      0d26b3aa
    • Marcelo Vanzin's avatar
      [SPARK-21490][CORE] Make sure SparkLauncher redirects needed streams. · 9456176d
      Marcelo Vanzin authored
      The code was failing to account for some cases when setting up log
      redirection. For example, if a user redirected only stdout to a file,
      the launcher code would leave stderr without redirection, which could
      lead to child processes getting stuck because stderr wasn't being
      read.
      
      So detect cases where only one of the streams is redirected, and
      redirect the other stream to the log as appropriate.
      
      For the old "launch()" API, redirection of the unconfigured stream
      only happens if the user has explicitly requested for log redirection.
      Log redirection is on by default with "startApplication()".
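
      An illustrative use of the launcher API where only one stream is redirected (paths and class names are made up):

      ```scala
      import java.io.File
      import org.apache.spark.launcher.SparkLauncher

      // Only stdout is redirected here; with the old behavior the child's stderr
      // could go unread and block the process. Now the unconfigured stream is
      // redirected to the log as appropriate.
      val handle = new SparkLauncher()
        .setAppResource("/path/to/app.jar")
        .setMainClass("com.example.MyApp")
        .setMaster("local[*]")
        .redirectOutput(new File("/tmp/app-stdout.log"))
        .startApplication()
      ```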
      
      Most of the change is actually adding new unit tests to make sure the
      different cases work as expected. As part of that, I moved some tests
      that were in the core/ module to the launcher/ module instead, since
      they don't depend on spark-submit.
      
      Author: Marcelo Vanzin <vanzin@cloudera.com>
      
      Closes #18696 from vanzin/SPARK-21490.
      9456176d
    • Shixiong Zhu's avatar
      [SPARK-21597][SS] Fix a potential overflow issue in EventTimeStats · 7f63e85b
      Shixiong Zhu authored
      ## What changes were proposed in this pull request?
      
      This PR fixed a potential overflow issue in EventTimeStats.
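
      For intuition, a sketch of the kind of overflow involved when averaging large Long event times (not the actual EventTimeStats code):

      ```scala
      // Summing millisecond timestamps into a Long can overflow, while an
      // incremental mean never holds the full sum.
      val times: Seq[Long] = Seq.fill(4)(Long.MaxValue / 2)

      val naive = times.sum / times.length       // the sum wraps around: wrong result

      var avg = 0.0
      var n = 0L
      for (t <- times) { n += 1; avg += (t - avg) / n }  // stays correct
      ```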
      
      ## How was this patch tested?
      
      The new unit tests
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #18803 from zsxwing/avg.
      7f63e85b
    • zero323's avatar
      [SPARK-20601][ML] Python API for Constrained Logistic Regression · 845c039c
      zero323 authored
      ## What changes were proposed in this pull request?
      Python API for Constrained Logistic Regression, based on #17922; thanks to zero323 for the original contribution.
      
      ## How was this patch tested?
      Unit tests.
      
      Author: zero323 <zero323@users.noreply.github.com>
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #18759 from yanboliang/SPARK-20601.
      845c039c
  4. Aug 01, 2017
    • Dongjoon Hyun's avatar
      [SPARK-21578][CORE] Add JavaSparkContextSuite · 14e75758
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      Due to SI-8479, [SPARK-1093](https://issues.apache.org/jira/browse/SPARK-1093) introduced redundant [SparkContext constructors](https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/SparkContext.scala#L148-L181). However, [SI-8479](https://issues.scala-lang.org/browse/SI-8479) is already fixed in Scala 2.10.5 and Scala 2.11.1.
      
      The real reason to keep these constructors is so that Java code can access `SparkContext` directly; this is Scala behavior SI-4278. So, this PR adds an explicit test suite, `JavaSparkContextSuite`, to prevent future regressions, and fixes the outdated comment, too.
      
      ## How was this patch tested?
      
      Passes Jenkins with a new test suite.
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #18778 from dongjoon-hyun/SPARK-21578.
      14e75758
    • gatorsmile's avatar
      [CORE][MINOR] Improve the error message of checkpoint RDD verification · 4cc704b1
      gatorsmile authored
      ### What changes were proposed in this pull request?
      The original error message is pretty confusing. It is unable to tell which number is `number of partitions` and which one is the `RDD ID`. This PR is to improve the checkpoint checking.
      
      ### How was this patch tested?
      N/A
      
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #18796 from gatorsmile/improveErrMsgForCheckpoint.
      4cc704b1
    • Bryan Cutler's avatar
      [SPARK-12717][PYTHON] Adding thread-safe broadcast pickle registry · 77cc0d67
      Bryan Cutler authored
      ## What changes were proposed in this pull request?
      
      When using PySpark broadcast variables in a multi-threaded environment, `SparkContext._pickled_broadcast_vars` becomes a shared resource. A race condition can occur when broadcast variables that are pickled from one thread get added to the shared `_pickled_broadcast_vars` and become part of the python command from another thread. This PR introduces a thread-safe pickled registry using thread-local storage, so that when the python command is pickled (causing the broadcast variables to be pickled and added to the registry), each thread has its own view of the pickle registry to retrieve and clear the broadcast variables it used.
      
      ## How was this patch tested?
      
      Added a unit test that causes this race condition using another thread.
      
      Author: Bryan Cutler <cutlerb@gmail.com>
      
      Closes #18695 from BryanCutler/pyspark-bcast-threadsafe-SPARK-12717.
      77cc0d67
    • Devaraj K's avatar
      [SPARK-21339][CORE] spark-shell --packages option does not add jars to classpath on windows · 58da1a24
      Devaraj K authored
      The jars added via the --packages option are put on the classpath with a "file:///" scheme. On Unix this is not a problem, since the scheme happens to end with the Unix path separator, which separates the jar name from the rest of the classpath entry. On Windows, however, the jar file cannot be resolved from the classpath because of the scheme.
      
      Windows : file:///C:/Users/<user>/.ivy2/jars/<jar-name>.jar
      Unix : file:///home/<user>/.ivy2/jars/<jar-name>.jar
      
      With this PR, the 'file://' scheme is no longer added to the packages' jar file paths.
      
      I have verified manually in Windows and Unix environments; with the change, the jar is added to the classpath as below:
      
      Windows : C:\Users\<user>\.ivy2\jars\<jar-name>.jar
      Unix : /home/<user>/.ivy2/jars/<jar-name>.jar
      
      Author: Devaraj K <devaraj@apache.org>
      
      Closes #18708 from devaraj-kavali/SPARK-21339.
      58da1a24
    • Sean Owen's avatar
      [SPARK-21593][DOCS] Fix 2 rendering errors on configuration page · b1d59e60
      Sean Owen authored
      ## What changes were proposed in this pull request?
      
      Fix 2 rendering errors on configuration doc page, due to SPARK-21243 and SPARK-15355.
      
      ## How was this patch tested?
      
      Manually built and viewed docs with jekyll
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #18793 from srowen/SPARK-21593.
      b1d59e60
    • Grzegorz Slowikowski's avatar
      [SPARK-21592][BUILD] Skip maven-compiler-plugin main and test compilations in Maven build · 74cda94c
      Grzegorz Slowikowski authored
      `scala-maven-plugin` in `incremental` mode compiles `Scala` and `Java` classes. There is no need to execute `maven-compiler-plugin` goals to compile (in fact recompile) `Java`.
      
      This change reduces compilation time (over 10% on my machine).
      
      Author: Grzegorz Slowikowski <gslowikowski@gmail.com>
      
      Closes #18750 from gslowikowski/remove-redundant-compilation-from-maven.
      74cda94c
    • Marcelo Vanzin's avatar
      [SPARK-20079][YARN] Fix client AM not allocating executors after restart. · 6735433c
      Marcelo Vanzin authored
      The main goal of this change is to avoid the situation described
      in the bug, where an AM restart in the middle of a job may cause
      no new executors to be allocated because of faulty logic in the
      reset path.
      
      The change does two things:
      
      - fixes the executor alloc manager's reset() so that it does not
        stop allocation after a reset() in the middle of a job
      - re-orders the initialization of the YarnAllocator class so that
        it fetches the current executor ID before triggering the reset()
        above.
      
      This ensures both that the new allocator gets new requests for executors,
      and that it starts from the correct executor id.
      
      Tested with unit tests and by manually causing AM restarts while
      running jobs using spark-shell in YARN mode.
      
      Closes #17882
      
      Author: Marcelo Vanzin <vanzin@cloudera.com>
      Author: Guoqiang Li <witgo@qq.com>
      
      Closes #18663 from vanzin/SPARK-20079.
      6735433c
    • Marcelo Vanzin's avatar
      [SPARK-21522][CORE] Fix flakiness in LauncherServerSuite. · b1335018
      Marcelo Vanzin authored
      Handle the case where the server closes the socket before the full message
      has been written by the client.
      
      Author: Marcelo Vanzin <vanzin@cloudera.com>
      
      Closes #18727 from vanzin/SPARK-21522.
      b1335018
    • pgandhi's avatar
      [SPARK-21585] Application Master marking application status as Failed for Client Mode · 97ccc63f
      pgandhi authored
      The fix deployed for SPARK-21541 resulted in the Application Master setting the final status of a Spark application to Failed in client mode, because the flag 'registered' was not being set to true in that mode. To fix the issue, the flag 'registered' is now set to true in client mode once the Application Master registers successfully.
      
      ## How was this patch tested?
      Tested the patch manually.
      
      Before:
      <img width="1275" alt="screen shot-before2" src="https://user-images.githubusercontent.com/22228190/28799641-02b5ed78-760f-11e7-9eb0-bf8407dad0ad.png">
      
      After:
      <img width="1221" alt="screen shot-after2" src="https://user-images.githubusercontent.com/22228190/28799646-0ac9ef14-760f-11e7-8bf5-9dfd743d0f2f.png">
      
      Author: pgandhi <pgandhi@yahoo-inc.com>
      Author: pgandhi999 <parthkgandhi9@gmail.com>
      
      Closes #18788 from pgandhi999/SPARK-21585.
      97ccc63f
    • Zheng RuiFeng's avatar
      [SPARK-21388][ML][PYSPARK] GBTs inherit from HasStepSize & LInearSVC from HasThreshold · 253a07e4
      Zheng RuiFeng authored
      ## What changes were proposed in this pull request?
      GBTs inherit from HasStepSize, and LinearSVC/Binarizer inherit from HasThreshold.
      
      ## How was this patch tested?
      existing tests
      
      Author: Zheng RuiFeng <ruifengz@foxmail.com>
      Author: Ruifeng Zheng <ruifengz@foxmail.com>
      
      Closes #18612 from zhengruifeng/override_HasXXX.
      253a07e4
    • jerryshao's avatar
      [SPARK-21475][CORE] Use NIO's Files API to replace... · 5fd0294f
      jerryshao authored
      [SPARK-21475][CORE] Use NIO's Files API to replace FileInputStream/FileOutputStream in some critical paths
      
      ## What changes were proposed in this pull request?
      
      Java's `FileInputStream` and `FileOutputStream` override finalize(); even if such a stream is closed correctly and promptly, it still leaves a memory footprint that only gets cleaned up during a full GC. This introduces two side effects:
      
      1. Lots of Finalizer objects are kept in memory, increasing memory overhead. In our external shuffle service use case, a busy shuffle service accumulates a large number of these objects, potentially leading to OOM.
      2. Finalizers are only run during a full GC, which increases the overhead of full GC and leads to long GC pauses.
      
      https://bugs.openjdk.java.net/browse/JDK-8080225
      
      https://www.cloudbees.com/blog/fileinputstream-fileoutputstream-considered-harmful
      
      To fix this potential issue, this PR proposes using NIO's Files#newInputStream/newOutputStream instead in some critical paths like shuffle.
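
      The replacement pattern, roughly (a sketch with an illustrative path; the actual change touches shuffle-related code):

      ```scala
      import java.io.File
      import java.nio.file.Files

      val file = new File("/tmp/example.dat") // illustrative

      // Before: new FileInputStream(file), whose finalize() is only cleaned up
      // during a full GC. After: an NIO stream with no finalizer.
      val in = Files.newInputStream(file.toPath)
      try {
        // ... read from `in` ...
      } finally {
        in.close()
      }
      ```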
      
      FileInputStream usages left unchanged in core, which I think are not so critical:
      
      ```
      ./core/src/main/scala/org/apache/spark/api/python/PythonRDD.scala:467:    val file = new DataInputStream(new FileInputStream(filename))
      ./core/src/main/scala/org/apache/spark/api/python/PythonRDD.scala:942:    val in = new FileInputStream(new File(path))
      ./core/src/main/scala/org/apache/spark/deploy/master/FileSystemPersistenceEngine.scala:76:    val fileIn = new FileInputStream(file)
      ./core/src/main/scala/org/apache/spark/deploy/RPackageUtils.scala:248:        val fis = new FileInputStream(file)
      ./core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala:910:                input = new FileInputStream(new File(t))
      ./core/src/main/scala/org/apache/spark/metrics/MetricsConfig.scala:20:import java.io.{FileInputStream, InputStream}
      ./core/src/main/scala/org/apache/spark/metrics/MetricsConfig.scala:132:        case Some(f) => new FileInputStream(f)
      ./core/src/main/scala/org/apache/spark/scheduler/SchedulableBuilder.scala:20:import java.io.{FileInputStream, InputStream}
      ./core/src/main/scala/org/apache/spark/scheduler/SchedulableBuilder.scala:77:        val fis = new FileInputStream(f)
      ./core/src/main/scala/org/apache/spark/shuffle/IndexShuffleBlockResolver.scala:27:import org.apache.spark.io.NioBufferedFileInputStream
      ./core/src/main/scala/org/apache/spark/shuffle/IndexShuffleBlockResolver.scala:94:      new DataInputStream(new NioBufferedFileInputStream(index))
      ./core/src/main/scala/org/apache/spark/storage/DiskStore.scala:111:        val channel = new FileInputStream(file).getChannel()
      ./core/src/main/scala/org/apache/spark/storage/DiskStore.scala:219:    val channel = new FileInputStream(file).getChannel()
      ./core/src/main/scala/org/apache/spark/TestUtils.scala:20:import java.io.{ByteArrayInputStream, File, FileInputStream, FileOutputStream}
      ./core/src/main/scala/org/apache/spark/TestUtils.scala:106:      val in = new FileInputStream(file)
      ./core/src/main/scala/org/apache/spark/util/logging/RollingFileAppender.scala:89:        inputStream = new FileInputStream(activeFile)
      ./core/src/main/scala/org/apache/spark/util/Utils.scala:329:      if (in.isInstanceOf[FileInputStream] && out.isInstanceOf[FileOutputStream]
      ./core/src/main/scala/org/apache/spark/util/Utils.scala:332:        val inChannel = in.asInstanceOf[FileInputStream].getChannel()
      ./core/src/main/scala/org/apache/spark/util/Utils.scala:1533:      gzInputStream = new GZIPInputStream(new FileInputStream(file))
      ./core/src/main/scala/org/apache/spark/util/Utils.scala:1560:      new GZIPInputStream(new FileInputStream(file))
      ./core/src/main/scala/org/apache/spark/util/Utils.scala:1562:      new FileInputStream(file)
      ./core/src/main/scala/org/apache/spark/util/Utils.scala:2090:    val inReader = new InputStreamReader(new FileInputStream(file), StandardCharsets.UTF_8)
      ```
      
      FileOutputStream usages left unchanged in core:
      
      ```
      ./core/src/main/scala/org/apache/spark/api/python/PythonRDD.scala:957:    val out = new FileOutputStream(file)
      ./core/src/main/scala/org/apache/spark/api/r/RBackend.scala:20:import java.io.{DataOutputStream, File, FileOutputStream, IOException}
      ./core/src/main/scala/org/apache/spark/api/r/RBackend.scala:131:      val dos = new DataOutputStream(new FileOutputStream(f))
      ./core/src/main/scala/org/apache/spark/deploy/master/FileSystemPersistenceEngine.scala:62:    val fileOut = new FileOutputStream(file)
      ./core/src/main/scala/org/apache/spark/deploy/RPackageUtils.scala:160:          val outStream = new FileOutputStream(outPath)
      ./core/src/main/scala/org/apache/spark/deploy/RPackageUtils.scala:239:    val zipOutputStream = new ZipOutputStream(new FileOutputStream(zipFile, false))
      ./core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala:949:        val out = new FileOutputStream(tempFile)
      ./core/src/main/scala/org/apache/spark/deploy/worker/CommandUtils.scala:20:import java.io.{File, FileOutputStream, InputStream, IOException}
      ./core/src/main/scala/org/apache/spark/deploy/worker/CommandUtils.scala:106:    val out = new FileOutputStream(file, true)
      ./core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala:109:     * Therefore, for local files, use FileOutputStream instead. */
      ./core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala:112:        new FileOutputStream(uri.getPath)
      ./core/src/main/scala/org/apache/spark/storage/DiskBlockObjectWriter.scala:20:import java.io.{BufferedOutputStream, File, FileOutputStream, OutputStream}
      ./core/src/main/scala/org/apache/spark/storage/DiskBlockObjectWriter.scala:71:  private var fos: FileOutputStream = null
      ./core/src/main/scala/org/apache/spark/storage/DiskBlockObjectWriter.scala:102:    fos = new FileOutputStream(file, true)
      ./core/src/main/scala/org/apache/spark/storage/DiskBlockObjectWriter.scala:213:      var truncateStream: FileOutputStream = null
      ./core/src/main/scala/org/apache/spark/storage/DiskBlockObjectWriter.scala:215:        truncateStream = new FileOutputStream(file, true)
      ./core/src/main/scala/org/apache/spark/storage/DiskStore.scala:153:    val out = new FileOutputStream(file).getChannel()
      ./core/src/main/scala/org/apache/spark/TestUtils.scala:20:import java.io.{ByteArrayInputStream, File, FileInputStream, FileOutputStream}
      ./core/src/main/scala/org/apache/spark/TestUtils.scala:81:    val jarStream = new JarOutputStream(new FileOutputStream(jarFile))
      ./core/src/main/scala/org/apache/spark/TestUtils.scala:96:    val jarFileStream = new FileOutputStream(jarFile)
      ./core/src/main/scala/org/apache/spark/util/logging/FileAppender.scala:20:import java.io.{File, FileOutputStream, InputStream, IOException}
      ./core/src/main/scala/org/apache/spark/util/logging/FileAppender.scala:31:  volatile private var outputStream: FileOutputStream = null
      ./core/src/main/scala/org/apache/spark/util/logging/FileAppender.scala:97:    outputStream = new FileOutputStream(file, true)
      ./core/src/main/scala/org/apache/spark/util/logging/RollingFileAppender.scala:90:        gzOutputStream = new GZIPOutputStream(new FileOutputStream(gzFile))
      ./core/src/main/scala/org/apache/spark/util/Utils.scala:329:      if (in.isInstanceOf[FileInputStream] && out.isInstanceOf[FileOutputStream]
      ./core/src/main/scala/org/apache/spark/util/Utils.scala:333:        val outChannel = out.asInstanceOf[FileOutputStream].getChannel()
      ./core/src/main/scala/org/apache/spark/util/Utils.scala:527:      val out = new FileOutputStream(tempFile)
      ```
      
      `DiskBlockObjectWriter` uses a `FileDescriptor`, so it is not easy to change it to the NIO Files API.
      
      All `FileInputStream` and `FileOutputStream` usages in common/shuffle* were changed.
      
      ## How was this patch tested?
      
      Existing tests and manual verification.
      
      Author: jerryshao <sshao@hortonworks.com>
      
      Closes #18684 from jerryshao/SPARK-21475.
      5fd0294f
    • Takeshi Yamamuro's avatar
      [SPARK-21589][SQL][DOC] Add documents about Hive UDF/UDTF/UDAF · 110695db
      Takeshi Yamamuro authored
      ## What changes were proposed in this pull request?
      This PR adds documentation about unsupported functions in Hive UDF/UDTF/UDAF.
      It relates to #18768 and #18527.
      
      ## How was this patch tested?
      N/A
      
      Author: Takeshi Yamamuro <yamamuro@apache.org>
      
      Closes #18792 from maropu/HOTFIX-20170731.
      110695db
  5. Jul 31, 2017
    • wangmiao1981's avatar
      [SPARK-21381][SPARKR] SparkR: pass on setHandleInvalid for classification algorithms · 9570e81a
      wangmiao1981 authored
      ## What changes were proposed in this pull request?
      
      SPARK-20307 added the handleInvalid option to RFormula for tree-based classification algorithms. We should add this parameter for the other classification algorithms in SparkR as well.
      
      This is a follow-up PR for SPARK-20307.
      
      ## How was this patch tested?
      
      New Unit tests are added.
      
      Author: wangmiao1981 <wm624@hotmail.com>
      
      Closes #18605 from wangmiao1981/class.
      9570e81a
    • bravo-zhang's avatar
      [SPARK-18950][SQL] Report conflicting fields when merging two StructTypes · 6b186c9d
      bravo-zhang authored
      ## What changes were proposed in this pull request?
      
      Currently, StructType.merge() only reports data types of conflicting fields when merging two incompatible schemas. It would be nice to also report the field names for easier debugging.
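
      One user-facing way the merge path is exercised is Parquet schema merging; a hypothetical example of a conflicting merge (paths are made up, `spark` is an active SparkSession):

      ```scala
      // Two Parquet directories whose schemas disagree on a field's type. Reading
      // them with schema merging triggers StructType.merge, whose error now names
      // the conflicting field rather than only the two data types.
      val df = spark.read
        .option("mergeSchema", "true")
        .parquet("/data/events_v1", "/data/events_v2")
      ```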
      
      ## How was this patch tested?
      
      Unit test in DataTypeSuite.
      Printed the exception message when a conflict is triggered.
      
      Author: bravo-zhang <mzhang1230@gmail.com>
      
      Closes #16365 from bravo-zhang/spark-18950.
      6b186c9d
  6. Jul 30, 2017
    • iurii.ant's avatar
      [SPARK-21575][SPARKR] Eliminate needless synchronization in java-R serialization · 106eaa9b
      iurii.ant authored
      ## What changes were proposed in this pull request?
      Remove surplus synchronized blocks.
      
      ## How was this patch tested?
      Unit tests run OK.
      
      Author: iurii.ant <sereneant@gmail.com>
      
      Closes #18775 from SereneAnt/eliminate_unnecessary_synchronization_in_java-R_serialization.
      106eaa9b
    • Zhan Zhang's avatar
      [SPARK-19839][CORE] release longArray in BytesToBytesMap · 44e501ac
      Zhan Zhang authored
      ## What changes were proposed in this pull request?
      When BytesToBytesMap spills, its longArray should be released. Otherwise, it may not be released until the task completes. This array can take a significant amount of memory that then cannot be used by later operators, such as UnsafeShuffleExternalSorter, resulting in more frequent spills in the sorter. This patch releases the array, since the destructive iterator will not use it anymore.
      
      ## How was this patch tested?
      Manual test in production
      
      Author: Zhan Zhang <zhanzhang@fb.com>
      
      Closes #17180 from zhzhan/memory.
      44e501ac
    • hyukjinkwon's avatar
      [MINOR] Minor comment fixes in merge_spark_pr.py script · f1a798b5
      hyukjinkwon authored
      ## What changes were proposed in this pull request?
      
      This PR proposes to fix a few typos in `merge_spark_pr.py`.
      
      - `#   usage: ./apache-pr-merge.py    (see config env vars below)`
        -> `#   usage: ./merge_spark_pr.py    (see config env vars below)`
      
      - `... have local a Spark ...` -> `... have a local Spark ...`
      
      - `... to Apache.` -> `... to Apache Spark.`
      
      I skimmed this file and these look like all I could find.
      
      ## How was this patch tested?
      
      pep8 check (`./dev/lint-python`).
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #18776 from HyukjinKwon/minor-merge-script.
      f1a798b5
    • Cheng Wang's avatar
      [MINOR][DOC] Replace numTasks with numPartitions in programming guide · 6830e90d
      Cheng Wang authored
      In the programming guide, `numTasks` is used in several places as an argument to transformations. However, in the code, the parameter is named `numPartitions`. This fix replaces `numTasks` with `numPartitions` in the programming guide for consistency.
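
      For reference, a minimal sketch of the parameter in question (assuming an active SparkContext `sc`):

      ```scala
      // reduceByKey takes an optional numPartitions argument (the parameter the
      // guide had been calling numTasks).
      val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))
      val counts = pairs.reduceByKey(_ + _, numPartitions = 4)
      ```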
      
      Author: Cheng Wang <chengwang0511@gmail.com>
      
      Closes #18774 from polarke/replace-numtasks-with-numpartitions-in-doc.
      6830e90d
    • guoxiaolong's avatar
      [SPARK-21297][WEB-UI] Add count in 'JDBC/ODBC Server' page. · d79816dd
      guoxiaolong authored
      ## What changes were proposed in this pull request?
      
      1. Add counts for 'Session Statistics' and 'SQL Statistics' on the 'JDBC/ODBC Server' page. The purpose is to make these statistics easy to see.
      
      fix before:
      ![1](https://user-images.githubusercontent.com/26266482/27819373-7fbe4002-60cc-11e7-9e7f-e9cc6f9ef746.png)
      
      fix after:
      ![1](https://user-images.githubusercontent.com/26266482/28700157-876cb7d6-7380-11e7-869c-0a4f18d65357.png)
      
      ## How was this patch tested?
      
      manual tests
      
      Author: guoxiaolong <guo.xiaolong1@zte.com.cn>
      
      Closes #18525 from guoxiaolongzte/SPARK-21297.
      d79816dd
    • GuoChenzhao's avatar
      [SQL] Fix typo in DataframeWriter doc · 51f99fb2
      GuoChenzhao authored
      ## What changes were proposed in this pull request?
      
      The format of the 'none' option should be written as \`none\`, consistent with the other compression codecs (\`snappy\`, \`lz4\`).
      
      ## How was this patch tested?
      
      This is a typo.
      
      Author: GuoChenzhao <chenzhao.guo@intel.com>
      
      Closes #18758 from gczsjdy/typo.
      51f99fb2
  7. Jul 29, 2017
    • Takeshi Yamamuro's avatar
      [SPARK-20962][SQL] Support subquery column aliases in FROM clause · 6550086b
      Takeshi Yamamuro authored
      ## What changes were proposed in this pull request?
      This PR adds parsing rules to support subquery column aliases in the FROM clause.
      It is a sub-task of #18079.
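
      An example of the syntax this enables (a sketch, assuming an active SparkSession `spark`):

      ```scala
      // Column aliases on a FROM-clause subquery: t(a, b) renames the subquery's
      // output columns.
      spark.sql("SELECT a, b FROM (SELECT 1, 2) AS t(a, b)").show()
      ```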
      
      ## How was this patch tested?
      Added tests in `PlanParserSuite` and `SQLQueryTestSuite`.
      
      Author: Takeshi Yamamuro <yamamuro@apache.org>
      
      Closes #18185 from maropu/SPARK-20962.
      6550086b
    • Xingbo Jiang's avatar
      [SPARK-19451][SQL] rangeBetween method should accept Long value as boundary · 92d85637
      Xingbo Jiang authored
      ## What changes were proposed in this pull request?
      
      Long values can be passed to `rangeBetween` as range frame boundaries, but we silently convert them to Int values; this can cause wrong results and should be fixed.
      
      Furthermore, we should accept any legal literal value as a range frame boundary. This PR makes that possible for Long values and makes accepting other DataTypes easy to add.
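
      An illustrative window using Long range boundaries (a sketch; `df` and the column names are made up):

      ```scala
      import org.apache.spark.sql.expressions.Window
      import org.apache.spark.sql.functions.{count, lit}

      // A range frame spanning roughly the last 30 days of millisecond timestamps;
      // the Long boundary is no longer silently truncated to an Int.
      val w = Window.orderBy("ts").rangeBetween(-2592000000L, 0L)
      val withCounts = df.withColumn("cnt", count(lit(1)).over(w))
      ```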
      
      This PR is mostly based on Herman's previous amazing work: https://github.com/hvanhovell/spark/commit/596f53c339b1b4629f5651070e56a8836a397768
      
      After this is merged, we can close #16818.
      
      ## How was this patch tested?
      
      Add new tests in `DataFrameWindowFunctionsSuite` and `TypeCoercionSuite`.
      
      Author: Xingbo Jiang <xingbo.jiang@databricks.com>
      
      Closes #18540 from jiangxb1987/rangeFrame.
      92d85637
    • Liang-Chi Hsieh's avatar
      [SPARK-21555][SQL] RuntimeReplaceable should be compared semantically by its canonicalized child · 9c8109ef
      Liang-Chi Hsieh authored
      ## What changes were proposed in this pull request?
      
      When there are aliases (added for nested fields) as parameters in `RuntimeReplaceable`, they are not among the children expressions, so they can't be cleaned up by the analyzer rule `CleanupAliases`.
      
      An expression such as `nvl(foo.foo1, "value")` can be resolved to two semantically different expressions in a GROUP BY query because they contain different aliases.
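
      A sketch of a query shape that hits this (assuming a table `t` with a struct column `foo` that has a field `foo1`):

      ```scala
      // Both occurrences of nvl(...) must canonicalize to the same expression for
      // the GROUP BY to resolve; differing nested-field aliases used to break that.
      spark.sql(
        """SELECT nvl(foo.foo1, 'value'), count(*)
          |FROM t
          |GROUP BY nvl(foo.foo1, 'value')""".stripMargin)
      ```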
      
      Because those aliases are not children of `RuntimeReplaceable`, which is a `UnaryExpression`, we can't trim them out by simply transforming the expressions in `CleanupAliases`.
      
      If we wanted to replace the non-child aliases in `RuntimeReplaceable`, we would need to add more code to `RuntimeReplaceable` and modify all of its expressions, which makes the interface ugly IMO.
      
      Considering that those aliases will be replaced later during optimization and are therefore harmless, this patch chooses to simply override `canonicalized` in `RuntimeReplaceable`.
      
      One concern is about `CleanupAliases`: it actually cannot clean up ALL aliases inside a plan. To make callers of this rule aware of that, this patch adds a comment to `CleanupAliases`.
      
      ## How was this patch tested?
      
      Added test.
      
      Author: Liang-Chi Hsieh <viirya@gmail.com>
      
      Closes #18761 from viirya/SPARK-21555.
      9c8109ef
    • shaofei007's avatar
      [SPARK-21357][DSTREAMS] FileInputDStream not remove out of date RDD · 60e9b2bd
      shaofei007 authored
      ## What changes were proposed in this pull request?
      
      ```scala
      // FileInputDStream.clearMetadata (before this change):
      protected[streaming] override def clearMetadata(time: Time) {
        batchTimeToSelectedFiles.synchronized {
          val oldFiles = batchTimeToSelectedFiles.filter(_._1 < (time - rememberDuration))
          batchTimeToSelectedFiles --= oldFiles.keys
          ...
      ```
      The above code does not remove the old generatedRDDs. "super.clearMetadata(time)" was added to the beginning of clearMetadata to remove the old generatedRDDs.
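
      A sketch of the fix described above (not the exact diff):

      ```scala
      protected[streaming] override def clearMetadata(time: Time) {
        super.clearMetadata(time) // also drops the out-of-date generated RDDs
        batchTimeToSelectedFiles.synchronized {
          val oldFiles = batchTimeToSelectedFiles.filter(_._1 < (time - rememberDuration))
          batchTimeToSelectedFiles --= oldFiles.keys
          // ... existing logging of the removed files ...
        }
      }
      ```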
      
      ## How was this patch tested?
      
      Testing code (printing the number of generatedRDDs) was added at the end of clearMetadata to manually check that the old RDDs were removed.
      
      Author: shaofei007 <1427357147@qq.com>
      Author: Fei Shao <1427357147@qq.com>
      
      Closes #18718 from shaofei007/master.
      60e9b2bd
    • Remis Haroon's avatar
      [SPARK-21508][DOC] Fix example code provided in Spark Streaming Documentation · c1438203
      Remis Haroon authored
      ## What changes were proposed in this pull request?
      
      JIRA ticket : [SPARK-21508](https://issues.apache.org/jira/projects/SPARK/issues/SPARK-21508)
      
      This corrects a mistake in the example code in the 'Spark Streaming Custom Receivers' documentation.
      doc link: https://spark.apache.org/docs/latest/streaming-custom-receivers.html
      
      ```
      
      // Assuming ssc is the StreamingContext
      val customReceiverStream = ssc.receiverStream(new CustomReceiver(host, port))
      val words = lines.flatMap(_.split(" "))
      ...
      ```
      
      instead of `lines.flatMap(_.split(" "))`
      it should be `customReceiverStream.flatMap(_.split(" "))`
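
      The corrected snippet would read:

      ```scala
      // Assuming ssc is the StreamingContext
      val customReceiverStream = ssc.receiverStream(new CustomReceiver(host, port))
      val words = customReceiverStream.flatMap(_.split(" "))
      ```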
      
      ## How was this patch tested?
      This documentation change was tested manually with a jekyll build, running the commands below:
      ```
      jekyll build
      jekyll serve --watch
      ```
      Screenshots are provided below:
      ![screenshot1](https://user-images.githubusercontent.com/8828470/28744636-a6de1ac6-7482-11e7-843b-ff84b5855ec0.png)
      ![screenshot2](https://user-images.githubusercontent.com/8828470/28744637-a6def496-7482-11e7-9512-7f4bbe027c6a.png)
      
      Author: Remis Haroon <Remis.Haroon@insdc01.pwc.com>
      
      Closes #18770 from remisharoon/master.
      c1438203
  8. Jul 28, 2017
    • hyukjinkwon's avatar
      [SPARK-20090][PYTHON] Add StructType.fieldNames in PySpark · b56f79cc
      hyukjinkwon authored
      ## What changes were proposed in this pull request?
      
      This PR proposes `StructType.fieldNames`, which returns a copy of the field name list, rather than the (undocumented) `StructType.names`.
      
      There are two points here:
      
        - API consistency with Scala/Java
      
        - Provide a safe way to get the field names. Manipulating these might cause unexpected behaviour as below:
      
          ```python
          from pyspark.sql.types import *
      
          struct = StructType([StructField("f1", StringType(), True)])
          names = struct.names
          del names[0]
          spark.createDataFrame([{"f1": 1}], struct).show()
          ```
      
          ```
          ...
          java.lang.IllegalStateException: Input row doesn't have expected number of values required by the schema. 1 fields are required while 0 values are provided.
          	at org.apache.spark.sql.execution.python.EvaluatePython$.fromJava(EvaluatePython.scala:138)
          	at org.apache.spark.sql.SparkSession$$anonfun$6.apply(SparkSession.scala:741)
          	at org.apache.spark.sql.SparkSession$$anonfun$6.apply(SparkSession.scala:741)
          ...
          ```
      
      ## How was this patch tested?
      
      Added tests in `python/pyspark/sql/tests.py`.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #18618 from HyukjinKwon/SPARK-20090.
      b56f79cc