  1. Mar 17, 2015
    • [SQL][docs][minor] Fixed sample code in SQLContext scaladoc · 68707225
      Lomig Mégard authored
      Fixes an error in the code sample of the `implicits` object in `SQLContext`.
      
      Author: Lomig Mégard <lomig.megard@gmail.com>
      
      Closes #5051 from tarfaa/simple and squashes the following commits:
      
      5a88acc [Lomig Mégard] [docs][minor] Fixed sample code in SQLContext scaladoc
    • [SPARK-6299][CORE] ClassNotFoundException in standalone mode when running groupByKey with class defined in REPL · f0edeae7
      Kevin (Sangwoo) Kim authored
      
      ```
      case class ClassA(value: String)
      val rdd = sc.parallelize(List(("k1", ClassA("v1")), ("k1", ClassA("v2")) ))
      rdd.groupByKey.collect
      ```
      This code used to throw an exception in spark-shell, because while shuffling `JavaSerializer` uses `defaultClassLoader`, which was set via `env.serializer.setDefaultClassLoader(urlClassLoader)`.
      
      It should be `env.serializer.setDefaultClassLoader(replClassLoader)`, like
      ```
          override def run() {
            val deserializeStartTime = System.currentTimeMillis()
            Thread.currentThread.setContextClassLoader(replClassLoader)
      ```
      in TaskRunner.
      
      When `replClassLoader` cannot be defined, it is identical to `urlClassLoader`.
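
      For context, a minimal sketch of that fallback (hypothetical names, not the actual `Executor` code), which is why handing the serializer the REPL-aware loader is safe even outside the shell:

      ```scala
      import java.net.{URL, URLClassLoader}

      // Sketch with hypothetical names: pick the loader the serializer should
      // use for deserializing shuffled values.
      def chooseSerializerClassLoader(
          jarUrls: Array[URL],
          replClassUri: Option[String]): ClassLoader = {
        val urlClassLoader =
          new URLClassLoader(jarUrls, Thread.currentThread.getContextClassLoader)
        replClassUri match {
          // A REPL is attached: layer a loader that can also resolve classes
          // compiled from REPL input on top of the jar loader.
          case Some(uri) => new URLClassLoader(Array(new URL(uri)), urlClassLoader)
          // No REPL configured: the "replClassLoader" is just urlClassLoader.
          case None => urlClassLoader
        }
      }
      ```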
      
      Author: Kevin (Sangwoo) Kim <sangwookim.me@gmail.com>
      
      Closes #5046 from swkimme/master and squashes the following commits:
      
      fa2b9ee [Kevin (Sangwoo) Kim] stylish test codes ( collect -> collect() )
      6e9620b [Kevin (Sangwoo) Kim] stylish test codes ( collect -> collect() )
      d23e4e2 [Kevin (Sangwoo) Kim] stylish test codes ( collect -> collect() )
      a4a3c8a [Kevin (Sangwoo) Kim] add 'class defined in repl - shuffle' test to ReplSuite
      bd00da5 [Kevin (Sangwoo) Kim] add 'class defined in repl - shuffle' test to ReplSuite
      c1b1fc7 [Kevin (Sangwoo) Kim] use REPL class loader for executor's serializer
  2. Mar 16, 2015
    • [SPARK-5712] [SQL] fix comment with semicolon at end · 9667b9f9
      Daoyuan Wang authored
      ---- comment;
      
      Author: Daoyuan Wang <daoyuan.wang@intel.com>
      
      Closes #4500 from adrian-wang/semicolon and squashes the following commits:
      
      70b8abb [Daoyuan Wang] use mkstring instead of reduce
      2d49738 [Daoyuan Wang] remove outdated golden file
      317346e [Daoyuan Wang] only skip comment with semicolon at end of line, to avoid golden file outdated
      d3ae01e [Daoyuan Wang] fix error
      a11602d [Daoyuan Wang] fix comment with semicolon at end
    • [SPARK-6327] [PySpark] fix launch spark-submit from python · e3f315ac
      Davies Liu authored
      SparkSubmit should be launched without setting PYSPARK_SUBMIT_ARGS
      
      cc JoshRosen. This mode is actually used by the Python unit tests, so I will not add more tests for it.
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #5019 from davies/fix_submit and squashes the following commits:
      
      2c20b0c [Davies Liu] fix launch spark-submit from python
    • [SPARK-6077] Remove streaming tab while stopping StreamingContext · f149b8b5
      lisurprise authored
      Currently we create a new streaming tab for each StreamingContext, even if one already exists on the same SparkContext. This causes duplicate StreamingTabs to be created, with none of them taking effect.
      snapshot: https://www.dropbox.com/s/t4gd6hqyqo0nivz/bad%20multiple%20streamings.png?dl=0
      How to reproduce:

      1)
      ```
      import org.apache.spark.SparkConf
      import org.apache.spark.streaming.{Seconds, StreamingContext}
      import org.apache.spark.storage.StorageLevel
      val ssc = new StreamingContext(sc, Seconds(1))
      val lines = ssc.socketTextStream("localhost", 9999, StorageLevel.MEMORY_AND_DISK_SER)
      val words = lines.flatMap(_.split(" "))
      val wordCounts = words.map(x => (x, 1)).reduceByKey(_ + _)
      wordCounts.print()
      ssc.start()
      // ...
      ```
      2)
      ```
      ssc.stop(false)
      val ssc = new StreamingContext(sc, Seconds(1))
      val lines = ssc.socketTextStream("localhost", 9999, StorageLevel.MEMORY_AND_DISK_SER)
      val words = lines.flatMap(_.split(" "))
      val wordCounts = words.map(x => (x, 1)).reduceByKey(_ + _)
      wordCounts.print()
      ssc.start()
      ```
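
      The fix ties the tab's lifecycle to its context. A minimal sketch of the shape of the change (hypothetical API, not the actual `StreamingTab`/`WebUI` code):

      ```scala
      // Hypothetical, simplified: the tab detaches itself when its context
      // stops, so a second StreamingContext on the same SparkContext gets a
      // fresh, working tab.
      trait WebUI {
        def attachTab(tab: AnyRef): Unit
        def detachTab(tab: AnyRef): Unit
      }

      class StreamingTab(ui: WebUI) {
        ui.attachTab(this)                       // attached on creation
        def detach(): Unit = ui.detachTab(this)  // removed when the context stops
      }

      class StreamingContextSketch(ui: WebUI) {
        private val tab = new StreamingTab(ui)
        def stop(): Unit = tab.detach()          // no stale duplicate tabs left behind
      }
      ```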
      
      Author: lisurprise <zhichao.li@intel.com>
      
      Closes #4828 from zhichao-li/master and squashes the following commits:
      
      c329806 [lisurprise] add test for attaching/detaching streaming tab
      51e6c7f [lisurprise] move detach method into StreamingTab
      31a44fa [lisurprise] add unit test for attaching and detaching new tab
      db25ed2 [lisurprise] clean code
      8281bcb [lisurprise] clean code
      193c542 [lisurprise] remove streaming tab while closing streaming context
    • [SPARK-6330] Fix filesystem bug in newParquet relation · d19efedd
      Volodymyr Lyubinets authored
      If I run this locally and my path points to S3, this currently errors out because the wrong FileSystem is picked up.
      I tested this in a scenario that previously didn't work, and this change fixed the issue.
      
      Author: Volodymyr Lyubinets <vlyubin@gmail.com>
      
      Closes #5020 from vlyubin/parquertbug and squashes the following commits:
      
      a645ad5 [Volodymyr Lyubinets] Fix filesystem bug in newParquet relation
    • [SPARK-2087] [SQL] Multiple thriftserver sessions with single HiveContext instance · 12a345ad
      Cheng Hao authored
      We still keep only a single HiveContext within the ThriftServer, and we create an object called `SQLSession` to isolate the different users' states.
      
      Developers can obtain/release a new user session via `openSession` and `closeSession`; `SQLContext` and `HiveContext` also provide a default session when `openSession` is never called, for backward compatibility.
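
      A minimal sketch of that layout (hypothetical and heavily simplified; `tlSession` echoes the thread-local rename in the commits below):

      ```scala
      // One shared context; per-user state lives in SQLSession objects. A
      // thread-local default session preserves the old single-session
      // behavior when openSession is never called.
      class SQLSession // per-user state, e.g. current database and session conf

      class SharedContext {
        private lazy val defaultSession = new SQLSession

        private val tlSession = new ThreadLocal[SQLSession] {
          override def initialValue(): SQLSession = defaultSession
        }

        def openSession(): SQLSession = {
          val session = new SQLSession
          tlSession.set(session)
          session
        }

        def closeSession(): Unit = tlSession.remove()

        def currentSession: SQLSession = tlSession.get()
      }
      ```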
      
      Author: Cheng Hao <hao.cheng@intel.com>
      
      Closes #4885 from chenghao-intel/multisessions_singlecontext and squashes the following commits:
      
      1c47b2a [Cheng Hao] rename the tss => tlSession
      815b27a [Cheng Hao] code style issue
      57e3fa0 [Cheng Hao] openSession is not compatible between Hive0.12 & 0.13.1
      4665b0d [Cheng Hao] thriftservice with single context
    • [SPARK-6300][Spark Core] sc.addFile(path) does not support the relative path. · 00e730b9
      DoingDone9 authored
      When I run a command like `sc.addFile("../test.txt")`, it does not work and throws an exception:
      ```
      java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: file:../test.txt
        at org.apache.hadoop.fs.Path.initialize(Path.java:206)
        at org.apache.hadoop.fs.Path.<init>(Path.java:172)
        ...
      Caused by: java.net.URISyntaxException: Relative path in absolute URI: file:../test.txt
        at java.net.URI.checkPath(URI.java:1804)
        at java.net.URI.<init>(URI.java:752)
        at org.apache.hadoop.fs.Path.initialize(Path.java:203)
      ```
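
      A sketch of the idea behind the fix (hypothetical helper; the `getCanonicalPath` approach matches the commits below):

      ```scala
      import java.io.File

      // Resolve a relative local path to an absolute one before building a
      // file: URI. "file:../test.txt" is an absolute URI with a relative
      // path, which java.net.URI (and hence Hadoop's Path) rejects.
      def toFileUri(path: String): String =
        "file:" + new File(path).getCanonicalPath  // e.g. file:/home/user/test.txt
      ```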
      
      Author: DoingDone9 <799203320@qq.com>
      
      Closes #4993 from DoingDone9/relativePath and squashes the following commits:
      
      ee375cd [DoingDone9] Update SparkContextSuite.scala
      d594e16 [DoingDone9] Update SparkContext.scala
      0ff3fa8 [DoingDone9] test for add file
      dced8eb [DoingDone9] Update SparkContext.scala
      e4a13fe [DoingDone9] getCanonicalPath
      161cae3 [DoingDone9] Merge pull request #4 from apache/master
      c87e8b6 [DoingDone9] Merge pull request #3 from apache/master
      cb1852d [DoingDone9] Merge pull request #2 from apache/master
      c3f046f [DoingDone9] Merge pull request #1 from apache/master
    • [SPARK-5922][GraphX]: Add diff(other: RDD[VertexId, VD]) in VertexRDD · 45f4c661
      Brennon York authored
      Changed the parameter type of `diff` to match that of `innerJoin` and `leftJoin`, from `VertexRDD[VD]` to `RDD[(VertexId, VD)]`. This change maintains backwards compatibility and better unifies the `VertexRDD` methods with each other.
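
      Sketched signatures after the change (simplified; the original overload stays for binary compatibility, as the commits below note):

      ```scala
      import org.apache.spark.graphx.{VertexId, VertexRDD}
      import org.apache.spark.rdd.RDD

      // Simplified view of the unified method signatures.
      trait VertexRDDOps[VD] {
        def diff(other: RDD[(VertexId, VD)]): VertexRDD[VD] // new, matches innerJoin/leftJoin
        def diff(other: VertexRDD[VD]): VertexRDD[VD]       // original, kept for binary compat
      }
      ```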
      
      Author: Brennon York <brennon.york@capitalone.com>
      
      Closes #4733 from brennonyork/SPARK-5922 and squashes the following commits:
      
      e800f08 [Brennon York] fixed merge conflicts
      b9274af [Brennon York] fixed merge conflicts
      f86375c [Brennon York] fixed minor include line
      398ddb4 [Brennon York] fixed merge conflicts
      aac1810 [Brennon York] updated to aggregateUsingIndex and added test to ensure that method works properly
      2af0b88 [Brennon York] removed deprecation line
      753c963 [Brennon York] fixed merge conflicts and set preference to use the diff(other: VertexRDD[VD]) method
      2c678c6 [Brennon York] added mima exclude to exclude new public diff method from VertexRDD
      93186f3 [Brennon York] added back the original diff method to sustain binary compatibility
      f18356e [Brennon York] changed method invocation of 'diff' to match that of 'innerJoin' and 'leftJoin' from VertexRDD[VD] to RDD[(VertexId, VD)]
  3. Mar 15, 2015
    • [SPARK-3619] Part 2. Upgrade to Mesos 0.21 to work around MESOS-1688 · aa6536fa
      Jongyoul Lee authored
      - MESOS_NATIVE_LIBRARY became deprecated
      - Changed MESOS_NATIVE_LIBRARY to MESOS_NATIVE_JAVA_LIBRARY
      
      Author: Jongyoul Lee <jongyoul@gmail.com>
      
      Closes #4361 from jongyoul/SPARK-3619-1 and squashes the following commits:
      
      f1ea91f [Jongyoul Lee] Merge branch 'SPARK-3619-1' of https://github.com/jongyoul/spark into SPARK-3619-1
      a6a00c2 [Jongyoul Lee] [SPARK-3619] Upgrade to Mesos 0.21 to work around MESOS-1688 - Removed 'Known issues' section
      2e15a21 [Jongyoul Lee] [SPARK-3619] Upgrade to Mesos 0.21 to work around MESOS-1688 - MESOS_NATIVE_LIBRARY become deprecated - Chagned MESOS_NATIVE_LIBRARY to MESOS_NATIVE_JAVA_LIBRARY
      0dace7b [Jongyoul Lee] [SPARK-3619] Upgrade to Mesos 0.21 to work around MESOS-1688 - MESOS_NATIVE_LIBRARY become deprecated - Chagned MESOS_NATIVE_LIBRARY to MESOS_NATIVE_JAVA_LIBRARY
    • [SPARK-6285][SQL] Remove ParquetTestData in SparkBuild.scala and in README.md · 62ede538
      OopsOutOfMemory authored
      This is a follow-up cleanup PR for #5010.
      It resolves errors like the one below when launching `hive/console`:
      ```
      <console>:20: error: object ParquetTestData is not a member of package org.apache.spark.sql.parquet
             import org.apache.spark.sql.parquet.ParquetTestData
      ```
      
      Author: OopsOutOfMemory <victorshengli@126.com>
      
      Closes #5032 from OopsOutOfMemory/SPARK-6285 and squashes the following commits:
      
      2996aeb [OopsOutOfMemory] remove ParquetTestData
  4. Mar 14, 2015
    • [SPARK-5790][GraphX]: VertexRDD's won't zip properly for `diff` capability (added tests) · c49d1566
      Brennon York authored
      Added the tests that maropu [created](https://github.com/maropu/spark/blob/1f64794b2ce33e64f340e383d4e8a60639a7eb4b/graphx/src/test/scala/org/apache/spark/graphx/VertexRDDSuite.scala) for vertices with differing partition counts. Wanted to make sure his work got captured/merged, as it's not in the master branch and I don't believe there's a PR out for it already.
      
      Author: Brennon York <brennon.york@capitalone.com>
      
      Closes #5023 from brennonyork/SPARK-5790 and squashes the following commits:
      
      83bbd29 [Brennon York] added maropu's tests for vertices with differing partition counts
    • [SPARK-6329][Docs]: Minor doc changes for Mesos and TOC · 127268bc
      Brennon York authored
      Updated the configuration docs with the minor items that Reynold had left over from SPARK-1182; specifically, I updated the `running-on-mesos` link to point directly to `running-on-mesos#configuration` and upgraded the `yarn`, `mesos`, etc. bullets to `<h5>` tags in hopes that they'll get pushed into the TOC.
      
      Author: Brennon York <brennon.york@capitalone.com>
      
      Closes #5022 from brennonyork/SPARK-6329 and squashes the following commits:
      
      42a10a9 [Brennon York] minor doc fixes
    • [SPARK-6195] [SQL] Adds in-memory column type for fixed-precision decimals · 5be6b0e4
      Cheng Lian authored
      This PR adds a specialized in-memory column type for fixed-precision decimals.
      
      For all other column types, a single integer column type ID is enough to determine which column type to use. However, this doesn't apply to fixed-precision decimal types with different precision and scale parameters; moreover, under the previous design there was no trivial way to encode precision and scale information into the columnar byte buffer. On the other hand, we always know the data type of the column to be built / scanned ahead of time. This PR therefore no longer uses a column type ID to construct `ColumnBuilder`s and `ColumnAccessor`s, but resorts to the actual column data type. In this way, we can pass precision / scale information along the way.
      
      The column type ID is no longer used and can be removed in a future PR.
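
      A sketch of the dispatch change (hypothetical, heavily simplified; not Spark SQL's actual types):

      ```scala
      // Construct builders from the actual DataType, so precision and scale
      // can travel with the column, instead of from an Int type ID that
      // cannot carry them.
      sealed trait DataType
      case object IntegerType extends DataType
      case class DecimalType(precision: Int, scale: Int) extends DataType

      trait ColumnBuilder
      class IntColumnBuilder extends ColumnBuilder
      class FixedDecimalColumnBuilder(precision: Int, scale: Int) extends ColumnBuilder

      def columnBuilderFor(dataType: DataType): ColumnBuilder = dataType match {
        case IntegerType       => new IntColumnBuilder
        case DecimalType(p, s) => new FixedDecimalColumnBuilder(p, s)
      }
      ```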
      
      ### Micro benchmark result
      
      The following micro benchmark builds a simple table with 2 million decimals (precision = 10, scale = 0), caches it in memory, then counts all the rows. Code (simply paste it into the Spark shell):
      
      ```scala
      import sc._
      import sqlContext._
      import sqlContext.implicits._
      import org.apache.spark.sql.types._
      import com.google.common.base.Stopwatch
      
      def benchmark(n: Int)(f: => Long) {
        val stopwatch = new Stopwatch()
      
        def run() = {
          stopwatch.reset()
          stopwatch.start()
          f
          stopwatch.stop()
          stopwatch.elapsedMillis()
        }
      
        val records = (0 until n).map(_ => run())
      
        (0 until n).foreach(i => println(s"Round $i: ${records(i)} ms"))
        println(s"Average: ${records.sum / n.toDouble} ms")
      }
      
      // Explicit casting is required because ScalaReflection can't inspect decimal precision
      parallelize(1 to 2000000)
        .map(i => Tuple1(Decimal(i, 10, 0)))
        .toDF("dec")
        .select($"dec" cast DecimalType(10, 0))
        .registerTempTable("dec")
      
      sql("CACHE TABLE dec")
      val df = table("dec")
      
      // Warm up
      df.count()
      df.count()
      
      benchmark(5) {
        df.count()
      }
      ```
      
      With `FIXED_DECIMAL` column type:
      
      - Round 0: 75 ms
      - Round 1: 97 ms
      - Round 2: 75 ms
      - Round 3: 70 ms
      - Round 4: 72 ms
      - Average: 77.8 ms
      
      Without `FIXED_DECIMAL` column type:
      
      - Round 0: 1233 ms
      - Round 1: 1170 ms
      - Round 2: 1171 ms
      - Round 3: 1141 ms
      - Round 4: 1141 ms
      - Average: 1171.2 ms
      
      
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #4938 from liancheng/decimal-column-type and squashes the following commits:
      
      fef5338 [Cheng Lian] Updates fixed decimal column type related test cases
      e08ab5b [Cheng Lian] Only resorts to FIXED_DECIMAL when the value can be held in a long
      4db713d [Cheng Lian] Adds in-memory column type for fixed-precision decimals
    • [SQL] Delete some duplicate code in HiveThriftServer2 · ee15404a
      ArcherShao authored
      Author: ArcherShao <ArcherShao@users.noreply.github.com>
      Author: ArcherShao <shaochuan@huawei.com>
      
      Closes #5007 from ArcherShao/20150313 and squashes the following commits:
      
      ae422ae [ArcherShao] Updated
      459efbd [ArcherShao] [SQL]Delete some dupliate code in HiveThriftServer2
    • [SPARK-6210] [SQL] use prettyString as column name in agg() · b38e073f
      Davies Liu authored
      Use `prettyString` instead of `toString()` (which includes the id of the expression) as the column name in `agg()`.
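
      A toy illustration of the difference (hypothetical class, not Catalyst's actual `Expression`):

      ```scala
      // toString embeds a per-expression id, which makes an unstable, noisy
      // column name; prettyString is the stable human-readable form.
      case class Expr(name: String, id: Long) {
        override def toString: String = s"$name#$id" // e.g. "SUM(value)#12"
        def prettyString: String = name              // e.g. "SUM(value)"
      }
      ```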
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #5006 from davies/prettystring and squashes the following commits:
      
      cb1fdcf [Davies Liu] use prettyString as column name in agg()
  5. Mar 13, 2015
    • [SPARK-6317][SQL] Fixed HIVE console startup issue · e360d5e4
      vinodkc authored
      Author: vinodkc <vinod.kc.in@gmail.com>
      Author: Vinod K C <vinod.kc@huawei.com>
      
      Closes #5011 from vinodkc/HIVE_console_startupError and squashes the following commits:
      
      b43925f [vinodkc] Changed order of import
      b4f5453 [Vinod K C] Fixed HIVE console startup issue
    • [SPARK-6285] [SQL] Removes unused ParquetTestData and duplicated TestGroupWriteSupport · cdc34ed9
      Cheng Lian authored
      None of the contents of this file are referenced anywhere, and they should have been removed in #4116 when I tried to get rid of the old Parquet test suites.
      
      
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #5010 from liancheng/spark-6285 and squashes the following commits:
      
      06ed057 [Cheng Lian] Removes unused ParquetTestData and duplicated TestGroupWriteSupport
    • [SPARK-4600][GraphX]: org.apache.spark.graphx.VertexRDD.diff does not work · b943f5d9
      Brennon York authored
      Turns out, per the [convo on the JIRA](https://issues.apache.org/jira/browse/SPARK-4600), `diff` is acting exactly as it should. It became a large misconception because I thought it meant set difference, when in fact it does not. To that extent, I merely updated the `diff` documentation to, hopefully, better reflect its true intent going forward.
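
      A toy sketch of that contract on plain Maps (my reading of the documented semantics, not GraphX code): `diff` keeps only the ids present in both sides whose values differ, taking the values from `other`; it is not set difference.

      ```scala
      // Keep ids present in both sides whose values changed, with the value
      // taken from `other`.
      def diff[V](self: Map[Long, V], other: Map[Long, V]): Map[Long, V] =
        other.filter { case (id, v) => self.get(id).exists(_ != v) }

      // diff(Map(1L -> "a", 2L -> "b"), Map(1L -> "a", 2L -> "c"))
      // == Map(2L -> "c"), not the set difference
      ```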
      
      Author: Brennon York <brennon.york@capitalone.com>
      
      Closes #5015 from brennonyork/SPARK-4600 and squashes the following commits:
      
      1e1d1e5 [Brennon York] reverted internal diff docs
      92288f7 [Brennon York] reverted both the test suite and the diff function back to its origin functionality
      f428623 [Brennon York] updated diff documentation to better represent its function
      cc16d65 [Brennon York] Merge remote-tracking branch 'upstream/master' into SPARK-4600
      66818b9 [Brennon York] added small secondary diff test
      99ad412 [Brennon York] Merge remote-tracking branch 'upstream/master' into SPARK-4600
      74b8c95 [Brennon York] corrected  method by leveraging bitmask operations to correctly return only the portions of  that are different from the calling VertexRDD
      9717120 [Brennon York] updated diff impl to cause fewer objects to be created
      710a21c [Brennon York] working diff given test case
      aa57f83 [Brennon York] updated to set ShortestPaths to run 'forward' rather than 'backward'
    • [SPARK-6278][MLLIB] Mention the change of objective in linear regression · 7f13434a
      Xiangrui Meng authored
      As discussed in the RC3 vote thread, we should mention the change of objective in linear regression in the migration guide. srowen
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #4978 from mengxr/SPARK-6278 and squashes the following commits:
      
      fb3bbe6 [Xiangrui Meng] mention regularization parameter
      bfd6cff [Xiangrui Meng] Merge remote-tracking branch 'apache/master' into SPARK-6278
      375fd09 [Xiangrui Meng] address Sean's comments
      f87ae71 [Xiangrui Meng] mention step size change
    • [SPARK-6252] [mllib] Added getLambda to Scala NaiveBayes · dc4abd4d
      Joseph K. Bradley authored
      Note: not relevant for Python API since it only has a static train method
      
      Author: Joseph K. Bradley <joseph.kurata.bradley@gmail.com>
      Author: Joseph K. Bradley <joseph@databricks.com>
      
      Closes #4969 from jkbradley/SPARK-6252 and squashes the following commits:
      
      a471d90 [Joseph K. Bradley] small edits from review
      63eff48 [Joseph K. Bradley] Added getLambda to Scala NaiveBayes
    • [CORE][minor] remove unnecessary ClassTag in `DAGScheduler` · ea3d2eed
      Wenchen Fan authored
      This existed at the very beginning, but became unnecessary after [this commit](https://github.com/apache/spark/commit/37d8f37a8ec110416fba0d51d8ba70370ac380c1#diff-6a9ff7fb74fd490a50462d45db2d5e11L272). I think we should remove it if we don't plan to use it in the future.
      
      Author: Wenchen Fan <cloud0fan@outlook.com>
      
      Closes #4992 from cloud-fan/small and squashes the following commits:
      
      e857f2e [Wenchen Fan] remove unnecessary ClassTag
    • [SPARK-6197][CORE] handle json exception when history file not finished writing · 9048e810
      Zhang, Liye authored
      For details, please refer to [SPARK-6197](https://issues.apache.org/jira/browse/SPARK-6197)
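
      The shape of the fix, sketched (hypothetical helper, heavily simplified; the real code parses event-log JSON):

      ```scala
      import scala.util.Try

      // Parse each line, dropping any record that fails to parse, typically
      // the half-written last line of a history file still being written.
      def readEvents[E](lines: Iterator[String], parse: String => E): Seq[E] =
        lines.flatMap(line => Try(parse(line)).toOption).toSeq
      ```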
      
      Author: Zhang, Liye <liye.zhang@intel.com>
      
      Closes #4927 from liyezhang556520/jsonParseError and squashes the following commits:
      
      5cbdc82 [Zhang, Liye] without unnecessary wrap
      2b48831 [Zhang, Liye] small changes with sean owen's comments
      2973024 [Zhang, Liye] handle json exception when file not finished writing
    • [SPARK-5310] [SQL] [DOC] Parquet section for the SQL programming guide · 69ff8e8c
      Cheng Lian authored
      Also fixed a bunch of minor styling issues.
      
      
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #5001 from liancheng/parquet-doc and squashes the following commits:
      
      89ad3db [Cheng Lian] Addresses @rxin's comments
      7eb6955 [Cheng Lian] Docs for the new Parquet data source
      415eefb [Cheng Lian] Some minor formatting improvements
    • [SPARK-5845][Shuffle] Time to cleanup spilled shuffle files not included in shuffle write time · 0af9ea74
      Ilya Ganelin authored
      I've added a timer in the right place to fix this inaccuracy.
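
      A sketch of the idea (hypothetical names; per the commits below, the real change wraps the spilled-file cleanup in `ExternalSorter`):

      ```scala
      import java.io.File

      // Time the deletion of spilled shuffle files and fold that into the
      // shuffle write time metric, which previously excluded it.
      def cleanupSpills(spills: Seq[File], addShuffleWriteTime: Long => Unit): Unit = {
        val start = System.nanoTime()
        spills.foreach(_.delete())
        addShuffleWriteTime(System.nanoTime() - start)
      }
      ```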
      
      Author: Ilya Ganelin <ilya.ganelin@capitalone.com>
      
      Closes #4965 from ilganeli/SPARK-5845 and squashes the following commits:
      
      bfabf88 [Ilya Ganelin] Changed to using a foreach vs. getorelse
      3e059b0 [Ilya Ganelin] Switched to using getorelse
      b946d08 [Ilya Ganelin] Fixed error with option
      9434b50 [Ilya Ganelin] Merge remote-tracking branch 'upstream/master' into SPARK-5845
      db8647e [Ilya Ganelin] Added update for shuffleWriteTime around spilled file cleanup in ExternalSorter
  6. Mar 12, 2015
  7. Mar 11, 2015
    • [SPARK-6128][Streaming][Documentation] Updates to Spark Streaming Programming Guide · cd3b68d9
      Tathagata Das authored
      Updates to the documentation are as follows:
      
      - Added information on Kafka Direct API and Kafka Python API
      - Added joins to the main streaming guide
      - Improved details on the fault-tolerance semantics
      
      Generated docs are located here:
      http://people.apache.org/~tdas/spark-1.3.0-temp-docs/streaming-programming-guide.html#fault-tolerance-semantics
      
      More things to add:
      - Configuration for Kafka receive rate
      - Maybe add concurrentJobs
      
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      
      Closes #4956 from tdas/streaming-guide-update-1.3 and squashes the following commits:
      
      819408c [Tathagata Das] Minor fixes.
      debe484 [Tathagata Das] Added DataFrames and MLlib
      380cf8d [Tathagata Das] Fix link
      04167a6 [Tathagata Das] Merge remote-tracking branch 'apache-github/master' into streaming-guide-update-1.3
      0b77486 [Tathagata Das] Updates based on Josh's comments.
      86c4c2a [Tathagata Das] Updated streaming guides
      82de92a [Tathagata Das] Add Kafka to Python api docs
    • [SPARK-6274][Streaming][Examples] Added examples streaming + sql examples. · 51a79a77
      Tathagata Das authored
      Added Scala, Java and Python streaming examples showing DataFrame and SQL operations within streaming.
      
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      
      Closes #4975 from tdas/streaming-sql-examples and squashes the following commits:
      
      705cba1 [Tathagata Das] Fixed python lint error
      75a3fad [Tathagata Das] Fixed python lint error
      5fbf789 [Tathagata Das] Removed empty lines at the end
      874b943 [Tathagata Das] Added examples streaming + sql examples.
    • SPARK-6245 [SQL] jsonRDD() of empty RDD results in exception · 55c4831d
      Sean Owen authored
      Avoid `UnsupportedOperationException` from JsonRDD.inferSchema on empty RDD.
      
      Not sure if this is supposed to be an error (but a better one), but it seems like this case can come up if the input is down-sampled so much that nothing is sampled.
      
      Now stuff like this:
      ```
      sqlContext.jsonRDD(sc.parallelize(List[String]()))
      ```
      just results in
      ```
      org.apache.spark.sql.DataFrame = []
      ```
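
      One plausible shape of such a guard, sketched with plain field-name sets (the actual change is in `JsonRDD.inferSchema`):

      ```scala
      // reduce on an empty collection throws UnsupportedOperationException;
      // folding from a neutral empty schema yields an empty result instead.
      def mergeSchemas(schemas: Seq[Set[String]]): Set[String] =
        schemas.foldLeft(Set.empty[String])(_ union _)

      // mergeSchemas(Nil) == Set()   (empty RDD => empty schema, no exception)
      ```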
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #4971 from srowen/SPARK-6245 and squashes the following commits:
      
      3699964 [Sean Owen] Set() -> Set.empty
      3c619e1 [Sean Owen] Avoid UnsupportedOperationException from JsonRDD.inferSchema on empty RDD
    • SPARK-3642. Document the nuances of shared variables. · 2d87a415
      Sandy Ryza authored
      Author: Sandy Ryza <sandy@cloudera.com>
      
      Closes #2490 from sryza/sandy-spark-3642 and squashes the following commits:
      
      aae3340 [Sandy Ryza] SPARK-3642. Document the nuances of broadcast variables