  1. Mar 24, 2015
    • Reynold Xin's avatar
      [SPARK-6428][Streaming] Added explicit types for all public methods. · 94598653
      Reynold Xin authored
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #5110 from rxin/streaming-explicit-type and squashes the following commits:
      
      2c2db32 [Reynold Xin] [SPARK-6428][Streaming] Added explicit types for all public methods.
      94598653
    • Xiangrui Meng's avatar
      [SPARK-6512] add contains to OpenHashMap · 6930e965
      Xiangrui Meng authored
      Add `contains` to test whether a key exists in an OpenHashMap. rxin
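
      A minimal usage sketch of the new method (note that `OpenHashMap` lives in `org.apache.spark.util.collection` and is Spark-internal):

      ```scala
      import org.apache.spark.util.collection.OpenHashMap

      // `contains` distinguishes "key absent" from "key mapped to a default value".
      val map = new OpenHashMap[String, Int]()
      map("a") = 1                // insert via update
      assert(map.contains("a"))   // key exists
      assert(!map.contains("b"))  // key absent
      ```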
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #5171 from mengxr/openhashmap-contains and squashes the following commits:
      
      d6e6f1f [Xiangrui Meng] add contains to primitivekeyopenhashmap
      748a69b [Xiangrui Meng] add contains to OpenHashMap
      6930e965
    • Christophe Préaud's avatar
      [SPARK-6469] Improving documentation on YARN local directories usage · 05c2214b
      Christophe Préaud authored
      Clarify the local directories usage in YARN
      
      Author: Christophe Préaud <christophe.preaud@kelkoo.com>
      
      Closes #5165 from preaudc/yarn-doc-local-dirs and squashes the following commits:
      
      6912b90 [Christophe Préaud] Fix some formatting issues.
      4fa8ec2 [Christophe Préaud] Merge remote-tracking branch 'upstream/master' into yarn-doc-local-dirs
      eaaf519 [Christophe Préaud] Clarify the local directories usage in YARN
      436fb7d [Christophe Préaud] Revert "Clarify the local directories usage in YARN"
      876ae5e [Christophe Préaud] Clarify the local directories usage in YARN
      608dbfa [Christophe Préaud] Merge remote-tracking branch 'upstream/master'
      a49a2ce [Christophe Préaud] Merge remote-tracking branch 'upstream/master'
      9ba89ca [Christophe Préaud] Ensure that files are fetched atomically
      54419ae [Christophe Préaud] Merge remote-tracking branch 'upstream/master'
      c6a5590 [Christophe Préaud] Revert commit 8ea871f8130b2490f1bad7374a819bf56f0ccbbd
      7456a33 [Christophe Préaud] Merge remote-tracking branch 'upstream/master'
      8ea871f [Christophe Préaud] Ensure that files are fetched atomically
      05c2214b
    • Andrew Or's avatar
      Revert "[SPARK-5771] Number of Cores in Completed Applications of Standalone... · dd907d1a
      Andrew Or authored
      Revert "[SPARK-5771] Number of Cores in Completed Applications of Standalone Master Web Page always be 0 if sc.stop() is called"
      
      This reverts commit dd077abf.
      
      Conflicts:
      	core/src/main/scala/org/apache/spark/deploy/master/ApplicationInfo.scala
      	core/src/main/scala/org/apache/spark/deploy/master/ui/MasterPage.scala
      dd907d1a
    • Kay Ousterhout's avatar
      [SPARK-3570] Include time to open files in shuffle write time. · d8ccf655
      Kay Ousterhout authored
      Opening shuffle files can be very significant when the disk is
      contended, especially when using ext3. While writing data to
      a file can avoid hitting disk (and instead hit the buffer
      cache), opening a file always involves writing some metadata
      about the file to disk, so the open time can be a very significant
      portion of the shuffle write time. In one job I ran recently, the time to
      write shuffle data to the file was only 4ms for each task, but
      the time to open the file was about 100x as long (~400ms).
      
      When we add metrics about spilled data (#2504), we should ensure
      that the file open time is also included there.
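
      A minimal sketch of the pattern described, with hypothetical names (`shuffleWriteTimeNanos` is illustrative, not Spark's actual metrics field):

      ```scala
      import java.io.{File, FileOutputStream}

      var shuffleWriteTimeNanos = 0L

      // Opening the file writes metadata to disk, so the open itself is timed
      // and charged to shuffle write time.
      def openShuffleFile(file: File): FileOutputStream = {
        val openStart = System.nanoTime()
        val out = new FileOutputStream(file)
        shuffleWriteTimeNanos += System.nanoTime() - openStart
        out
      }
      ```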
      
      Author: Kay Ousterhout <kayousterhout@gmail.com>
      
      Closes #4550 from kayousterhout/SPARK-3570 and squashes the following commits:
      
      ea3a4ae [Kay Ousterhout] Added comment about excluded open time
      fdc5185 [Kay Ousterhout] Improved comment
      42b7e43 [Kay Ousterhout] Fixed parens for nanotime
      2423555 [Kay Ousterhout] [SPARK-3570] Include time to open files in shuffle write time.
      d8ccf655
    • Kay Ousterhout's avatar
      [SPARK-6088] Correct how tasks that get remote results are shown in UI. · 6948ab6f
      Kay Ousterhout authored
      It would be great to fix this for 1.3, since the fix is surgical and it helps understandability for users.
      
      cc shivaram pwendell
      
      Author: Kay Ousterhout <kayousterhout@gmail.com>
      
      Closes #4839 from kayousterhout/SPARK-6088 and squashes the following commits:
      
      3ab012c [Kay Ousterhout] Update getting result time incrementally, correctly set GET_RESULT status
      f346b49 [Kay Ousterhout] Typos
      748ea6b [Kay Ousterhout] Fixed build failure
      84d617c [Kay Ousterhout] [SPARK-6088] Correct how tasks that get remote results are shown in the UI.
      6948ab6f
    • Reynold Xin's avatar
      [SPARK-6428][SQL] Added explicit types for all public methods in catalyst · 73348012
      Reynold Xin authored
      I think after this PR, we can finally turn the rule on. There are still some smaller ones that need to be fixed, but those are easier.
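
      A before/after sketch of what the rule enforces (hypothetical method):

      ```scala
      class Example(n: Int) {
        // Before: the return type is inferred, so changing the body can
        // silently change the public API.
        // def size = n * 2

        // After: the public signature is pinned down explicitly.
        def size: Int = n * 2
      }
      ```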
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #5162 from rxin/catalyst-explicit-types and squashes the following commits:
      
      e7eac03 [Reynold Xin] [SPARK-6428][SQL] Added explicit types for all public methods in catalyst.
      73348012
    • Josh Rosen's avatar
      [SPARK-6209] Clean up connections in ExecutorClassLoader after failing to load... · 7215aa74
      Josh Rosen authored
      [SPARK-6209] Clean up connections in ExecutorClassLoader after failing to load classes (master branch PR)
      
      ExecutorClassLoader does not ensure proper cleanup of network connections that it opens. If it fails to load a class, it may leak partially-consumed InputStreams that are connected to the REPL's HTTP class server, causing that server to exhaust its thread pool, which can cause the entire job to hang.  See [SPARK-6209](https://issues.apache.org/jira/browse/SPARK-6209) for more details, including a bug reproduction.
      
      This patch fixes this issue by ensuring proper cleanup of these resources.  It also adds logging for unexpected error cases.
      
      This PR is an extended version of #4935 and adds a regression test.
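
      A self-contained sketch of the cleanup pattern described (hypothetical helper, not the actual ExecutorClassLoader code):

      ```scala
      import java.io.{IOException, InputStream}
      import java.net.{HttpURLConnection, URL}

      def fetchBytes(url: URL): Array[Byte] = {
        val conn = url.openConnection().asInstanceOf[HttpURLConnection]
        var in: InputStream = null
        try {
          // getResponseCode can itself throw, so it sits inside the try block.
          if (conn.getResponseCode != HttpURLConnection.HTTP_OK) {
            // Drain and close the error stream so the connection can be
            // returned to the keep-alive pool instead of leaking.
            Option(conn.getErrorStream).foreach { es =>
              Iterator.continually(es.read()).takeWhile(_ != -1).foreach(_ => ())
              es.close()
            }
            throw new IOException(s"Failed to fetch $url: HTTP ${conn.getResponseCode}")
          }
          in = conn.getInputStream
          Iterator.continually(in.read()).takeWhile(_ != -1).map(_.toByte).toArray
        } finally {
          if (in != null) in.close()
        }
      }
      ```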
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #4944 from JoshRosen/executorclassloader-leak-master-branch and squashes the following commits:
      
      e0e3c25 [Josh Rosen] Wrap try block around getReponseCode; re-enable keep-alive by closing error stream
      961c284 [Josh Rosen] Roll back changes that were added to get the regression test to fail
      7ee2261 [Josh Rosen] Add a failing regression test
      e2d70a3 [Josh Rosen] Properly clean up after errors in ExecutorClassLoader
      7215aa74
    • Michael Armbrust's avatar
      [SPARK-6458][SQL] Better error messages for invalid data sources · a8f51b82
      Michael Armbrust authored
      Avoid unclear match errors and use `AnalysisException`.
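
      A sketch of the change in spirit (hypothetical resolution helper; the real code lives in the data source resolution path):

      ```scala
      import org.apache.spark.sql.AnalysisException

      def lookupDataSource(provider: String): Class[_] = {
        try {
          Class.forName(provider)
        } catch {
          case _: ClassNotFoundException =>
            // A clear AnalysisException instead of an opaque MatchError.
            throw new AnalysisException(s"Failed to load class for data source: $provider")
        }
      }
      ```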
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #5158 from marmbrus/dataSourceError and squashes the following commits:
      
      af9f82a [Michael Armbrust] Yins comment
      90c6ba4 [Michael Armbrust] Better error messages for invalid data sources
      a8f51b82
    • Michael Armbrust's avatar
      [SPARK-6376][SQL] Avoid eliminating subqueries until optimization · cbeaf9eb
      Michael Armbrust authored
      Previously it was okay to throw away subqueries after analysis, as we would never try to use that tree for resolution again.  However, with eager analysis in `DataFrame`s this can cause errors for queries such as:
      
      ```scala
      val df = Seq(1,2,3).map(i => (i, i.toString)).toDF("int", "str")
      df.as('x).join(df.as('y), $"x.str" === $"y.str").groupBy("x.str").count()
      ```
      
      As a result, in this PR we defer the elimination of subqueries until the optimization phase.
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #5160 from marmbrus/subqueriesInDfs and squashes the following commits:
      
      a9bb262 [Michael Armbrust] Update Optimizer.scala
      27d25bf [Michael Armbrust] fix hive tests
      9137e03 [Michael Armbrust] add type
      81cd597 [Michael Armbrust] Avoid eliminating subqueries until optimization
      cbeaf9eb
    • Michael Armbrust's avatar
      [SPARK-6375][SQL] Fix formatting of error messages. · 046c1e2a
      Michael Armbrust authored
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #5155 from marmbrus/errorMessages and squashes the following commits:
      
      b898188 [Michael Armbrust] Fix formatting of error messages.
      046c1e2a
    • Michael Armbrust's avatar
      [SPARK-6054][SQL] Fix transformations of TreeNodes that hold StructTypes · 3fa3d121
      Michael Armbrust authored
      Due to a recent change that made `StructType` a `Seq` we started inadvertently turning `StructType`s into generic `Traversable` when attempting nested tree transformations.  In this PR we explicitly avoid descending into `DataType`s to avoid this bug.
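
      A sketch of the guard described (hypothetical shape of the traversal, not the actual TreeNode code):

      ```scala
      import org.apache.spark.sql.types.DataType

      // When mapping over a node's children, treat DataTypes as leaves instead
      // of descending into them as generic Traversables.
      def mapChild(child: Any): Any = child match {
        case dt: DataType        => dt   // do not recurse into DataType
        case seq: Traversable[_] => seq.map(mapChild)
        case other               => other
      }
      ```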
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #5157 from marmbrus/udfFix and squashes the following commits:
      
      26f7087 [Michael Armbrust] Fix transformations of TreeNodes that hold StructTypes
      3fa3d121
    • Michael Armbrust's avatar
      [SPARK-6437][SQL] Use completion iterator to close external sorter · 26c6ce3d
      Michael Armbrust authored
      Otherwise we will leak files when spilling occurs.
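
      Spark has a `CompletionIterator` utility for this; a minimal self-contained sketch of the idea:

      ```scala
      // Run a cleanup callback exactly once when the wrapped iterator is
      // exhausted, so spill files are deleted even though consumers only
      // ever see a plain Iterator.
      class CompletionIterator[A](sub: Iterator[A], onCompletion: () => Unit) extends Iterator[A] {
        private var completed = false
        def hasNext: Boolean = {
          val more = sub.hasNext
          if (!more && !completed) { completed = true; onCompletion() }
          more
        }
        def next(): A = sub.next()
      }
      ```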
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #5161 from marmbrus/cleanupAfterSort and squashes the following commits:
      
      cb13d3c [Michael Armbrust] hint to inferencer
      cdebdf5 [Michael Armbrust] Use completion iterator to close external sorter
      26c6ce3d
    • Michael Armbrust's avatar
      [SPARK-6459][SQL] Warn when constructing trivially true equals predicate · 32efadd0
      Michael Armbrust authored
      For example, one might expect the following code to work, but it does not.  Now you will at least get a warning with a suggestion to use aliases.
      
      ```scala
      val df = sqlContext.load(path, "parquet")
      val txns = df.groupBy("cust_id").agg($"cust_id", countDistinct($"day_num").as("txns"))
      val spend = df.groupBy("cust_id").agg($"cust_id", sum($"extended_price").as("spend"))
      val rmJoin = txns.join(spend, txns("cust_id") === spend("cust_id"), "inner")
      ```
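
      The aliasing workaround the new warning suggests would look roughly like this (a sketch continuing the snippet above):

      ```scala
      val txnsAliased = txns.as("t")
      val spendAliased = spend.as("s")
      val joined = txnsAliased.join(spendAliased, $"t.cust_id" === $"s.cust_id", "inner")
      ```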
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #5163 from marmbrus/selfJoinError and squashes the following commits:
      
      16c1f0b [Michael Armbrust] fix visibility
      1b57e8d [Michael Armbrust] Warn when constructing trivially true equals predicate
      32efadd0
    • Xiangrui Meng's avatar
      [SPARK-6361][SQL] support adding a column with metadata in DF · 6bdddb6f
      Xiangrui Meng authored
      This is used by ML pipelines to embed ML attributes in columns created by ML transformers/estimators. marmbrus
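
      A usage sketch, assuming the `Column.as(alias, metadata)` overload this PR describes (`df` and the column names are placeholders):

      ```scala
      import org.apache.spark.sql.types.MetadataBuilder

      // Attach ML attribute metadata to a derived column.
      val meta = new MetadataBuilder().putString("ml_attr", "label").build()
      val withMeta = df.select(df("rawLabel").as("label", meta))
      ```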
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #5151 from mengxr/SPARK-6361 and squashes the following commits:
      
      bb30de3 [Xiangrui Meng] support adding a column with metadata in DF
      6bdddb6f
    • Xiangrui Meng's avatar
      [SPARK-6475][SQL] recognize array types when infer data types from JavaBeans · a1d1529d
      Xiangrui Meng authored
      Right now, if there is an array field in a JavaBean, the user would see an exception in `createDataFrame`. liancheng
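
      A sketch of the previously failing case: a bean-style class with an array-typed property (names hypothetical):

      ```scala
      import scala.beans.BeanProperty

      // `tags` is an array field; schema inference should map it to ArrayType
      // instead of throwing in `createDataFrame`.
      class Record extends Serializable {
        @BeanProperty var id: Int = 0
        @BeanProperty var tags: Array[String] = Array.empty
      }
      ```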
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #5146 from mengxr/SPARK-6475 and squashes the following commits:
      
      51e87e5 [Xiangrui Meng] validate schemas
      4f2df5e [Xiangrui Meng] recognize array types when infer data types from JavaBeans
      a1d1529d
    • Peter Rudenko's avatar
      [ML][docs][minor] Define LabeledDocument/Document classes in CV example · 08d45280
      Peter Rudenko authored
      To make the Cross-Validation example code snippet easier to copy/paste, LabeledDocument/Document need to be defined in it, since they are defined in a previous example.
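
      For reference, the two classes the snippet depends on, as defined in the Scala version of the earlier example:

      ```scala
      case class LabeledDocument(id: Long, text: String, label: Double)
      case class Document(id: Long, text: String)
      ```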
      
      Author: Peter Rudenko <petro.rudenko@gmail.com>
      
      Closes #5135 from petro-rudenko/patch-3 and squashes the following commits:
      
      5190c75 [Peter Rudenko] Fix primitive types for java examples.
      1d35383 [Peter Rudenko] [SQL][docs][minor] Define LabeledDocument/Document classes in CV example
      08d45280
    • Kousuke Saruta's avatar
      [SPARK-5559] [Streaming] [Test] Remove opportunity for flakiness when running FlumeStreamSuite · 85cf0636
      Kousuke Saruta authored
      When we run FlumeStreamSuite on Jenkins, we sometimes get an error like the following.
      
          sbt.ForkMain$ForkError: The code passed to eventually never returned normally. Attempted 52 times over 10.094849836 seconds. Last failure message: Error connecting to localhost/127.0.0.1:23456.
              at org.scalatest.concurrent.Eventually$class.tryTryAgain$1(Eventually.scala:420)
              at org.scalatest.concurrent.Eventually$class.eventually(Eventually.scala:438)
              at org.scalatest.concurrent.Eventually$.eventually(Eventually.scala:478)
              at org.scalatest.concurrent.Eventually$class.eventually(Eventually.scala:307)
              at org.scalatest.concurrent.Eventually$.eventually(Eventually.scala:478)
              at org.apache.spark.streaming.flume.FlumeStreamSuite.writeAndVerify(FlumeStreamSuite.scala:116)
              at org.apache.spark.streaming.flume.FlumeStreamSuite.org$apache$spark$streaming$flume$FlumeStreamSuite$$testFlumeStream(FlumeStreamSuite.scala:74)
              at org.apache.spark.streaming.flume.FlumeStreamSuite$$anonfun$3.apply$mcV$sp(FlumeStreamSuite.scala:66)
              at org.apache.spark.streaming.flume.FlumeStreamSuite$$anonfun$3.apply(FlumeStreamSuite.scala:66)
              at org.apache.spark.streaming.flume.FlumeStreamSuite$$anonfun$3.apply(FlumeStreamSuite.scala:66)
              at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
              at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
              at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
              at org.scalatest.Transformer.apply(Transformer.scala:22)
              at org.scalatest.Transformer.apply(Transformer.scala:20)
              at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:166)
              at org.scalatest.Suite$class.withFixture(Suite.scala:1122)
              at org.scalatest.FunSuite.withFixture(FunSuite.scala:1555)
              at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:163)
              at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
              at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
              at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
              at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:175)
      
      This error is caused by check-then-act logic in the code that finds a free port:
      
            /** Find a free port */
            private def findFreePort(): Int = {
              Utils.startServiceOnPort(23456, (trialPort: Int) => {
                val socket = new ServerSocket(trialPort)
                socket.close()
                (null, trialPort)
              }, conf)._2
            }
      
      Removing the check-then-act is not easy, but we can reduce the chance of hitting the error by choosing a random value for the initial port instead of the fixed 23456.
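
      A sketch of the mitigation, continuing the snippet above (`Utils` and `conf` come from the original test code):

      ```scala
      import java.net.ServerSocket
      import scala.util.Random

      /** Find a free port, starting the search from a random base port. */
      private def findFreePort(): Int = {
        // A random base in an unprivileged range reduces the chance that two
        // concurrent test runs race for the same port.
        val basePort = 10000 + Random.nextInt(50000)
        Utils.startServiceOnPort(basePort, (trialPort: Int) => {
          val socket = new ServerSocket(trialPort)
          socket.close()
          (null, trialPort)
        }, conf)._2
      }
      ```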
      
      Author: Kousuke Saruta <sarutak@oss.nttdata.co.jp>
      
      Closes #4337 from sarutak/SPARK-5559 and squashes the following commits:
      
      16f109f [Kousuke Saruta] Added `require` to Utils#startServiceOnPort
      c39d8b6 [Kousuke Saruta] Merge branch 'SPARK-5559' of github.com:sarutak/spark into SPARK-5559
      1610ba2 [Kousuke Saruta] Merge branch 'master' of git://git.apache.org/spark into SPARK-5559
      33357e3 [Kousuke Saruta] Changed "findFreePort" method in MQTTStreamSuite and FlumeStreamSuite so that it can choose valid random port
      a9029fe [Kousuke Saruta] Merge branch 'master' of git://git.apache.org/spark into SPARK-5559
      9489ef9 [Kousuke Saruta] Merge branch 'master' of git://git.apache.org/spark into SPARK-5559
      8212e42 [Kousuke Saruta] Modified default port used in FlumeStreamSuite from 23456 to random value
      85cf0636
    • Marcelo Vanzin's avatar
      [SPARK-6473] [core] Do not try to figure out Scala version if not needed. · b293afc4
      Marcelo Vanzin authored
      
      Author: Marcelo Vanzin <vanzin@cloudera.com>
      
      Closes #5143 from vanzin/SPARK-6473 and squashes the following commits:
      
      a2e5e2d [Marcelo Vanzin] [SPARK-6473] [core] Do not try to figure out Scala version if not needed.
      b293afc4
    • Cong Yue's avatar
      Update the command to use IPython notebook · c12312f8
      Cong Yue authored
      As for "notebook --pylab inline" is not supported any more, update the related documentation for this.
      
      Author: Cong Yue <yuecong1104@gmail.com>
      
      Closes #5111 from yuecong/patch-1 and squashes the following commits:
      
      872df76 [Cong Yue] Update the command to use IPython notebook
      c12312f8
    • Brennon York's avatar
      [SPARK-6477][Build]: Run MIMA tests before the Spark test suite · 37fac1dc
      Brennon York authored
      This moves the MIMA checks to before the full Spark test suite, so that if a new PR fails the MIMA check, it returns much faster, having not run the entire test suite. This is preferable to the current scenario, where a user has to wait until the entire test suite completes before learning that it failed a MIMA check, in which case, once the MIMA issues are fixed, the user has to resubmit and rerun the full test suite.
      
      Author: Brennon York <brennon.york@capitalone.com>
      
      Closes #5145 from brennonyork/SPARK-6477 and squashes the following commits:
      
      12b0aee [Brennon York] updated to put the mima checks before the spark test suite
      37fac1dc
    • Cheng Lian's avatar
      [SPARK-6452] [SQL] Checks for missing attributes and unresolved operator for all types of operator · 1afcf773
      Cheng Lian authored
      In `CheckAnalysis`, `Filter` and `Aggregate` are matched by separate case clauses, and thus never reach the clauses that check for unresolved operators and missing input attributes.
      
      This PR also removes the `prettyString` call when generating error messages for missing input attributes, because the result of `prettyString` doesn't contain expression IDs and may give confusing messages like
      
      > resolved attributes a missing from a
      
      cc rxin
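
      The clause-ordering problem, reduced to a self-contained toy sketch:

      ```scala
      sealed trait Plan { def resolved: Boolean }
      case class Filter(resolved: Boolean) extends Plan
      case class Aggregate(resolved: Boolean) extends Plan
      case class Other(resolved: Boolean) extends Plan

      def check(plan: Plan): Unit = plan match {
        case _: Filter    => ()  // matched here, so the generic check below is skipped
        case _: Aggregate => ()
        case p if !p.resolved => sys.error("unresolved operator")
        case _ => ()
      }

      // check(Filter(resolved = false)) passes silently: the bug this PR fixes.
      ```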
      
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #5129 from liancheng/spark-6452 and squashes the following commits:
      
      52cdc69 [Cheng Lian] Addresses comments
      029f9bd [Cheng Lian] Checks for missing attributes and unresolved operator for all types of operator
      1afcf773
    • Reynold Xin's avatar
      [SPARK-6428] Added explicit types for all public methods in core. · 4ce2782a
      Reynold Xin authored
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #5125 from rxin/core-explicit-type and squashes the following commits:
      
      f471415 [Reynold Xin] Revert style checker changes.
      81b66e4 [Reynold Xin] Code review feedback.
      a7533e3 [Reynold Xin] Mima excludes.
      1d795f5 [Reynold Xin] [SPARK-6428] Added explicit types for all public methods in core.
      4ce2782a
  2. Mar 22, 2015
    • Cheng Lian's avatar
      Revert "[SPARK-6397][SQL] Check the missingInput simply" · bf044def
      Cheng Lian authored
      This reverts commit e566fe59.
      bf044def
    • q00251598's avatar
      [SPARK-6397][SQL] Check the missingInput simply · e566fe59
      q00251598 authored
      Author: q00251598 <qiyadong@huawei.com>
      
      Closes #5082 from watermen/sql-missingInput and squashes the following commits:
      
      25766b9 [q00251598] Check the missingInput simply
      e566fe59
    • Daoyuan Wang's avatar
      [SPARK-4985] [SQL] parquet support for date type · 4659468f
      Daoyuan Wang authored
      This PR might have some issues with #3732, and it would have merge conflicts with #3820, so review can be delayed until those two are merged.
      
      Author: Daoyuan Wang <daoyuan.wang@intel.com>
      
      Closes #3822 from adrian-wang/parquetdate and squashes the following commits:
      
      2c5d54d [Daoyuan Wang] add a test case
      faef887 [Daoyuan Wang] parquet support for primitive date
      97e9080 [Daoyuan Wang] parquet support for date type
      4659468f
    • vinodkc's avatar
      [SPARK-6337][Documentation, SQL]Spark 1.3 doc fixes · 2bf40c58
      vinodkc authored
      Author: vinodkc <vinod.kc.in@gmail.com>
      
      Closes #5112 from vinodkc/spark_1.3_doc_fixes and squashes the following commits:
      
      2c6aee6 [vinodkc] Spark 1.3 doc fixes
      2bf40c58
    • Calvin Jia's avatar
      [SPARK-6122][Core] Upgrade Tachyon client version to 0.6.1. · a41b9c60
      Calvin Jia authored
      Changes the Tachyon client version from 0.5 to 0.6 in Spark core and the distribution script.
      
      New dependencies in Tachyon 0.6.0 include:
      
      commons-codec:commons-codec:jar:1.5:compile
      io.netty:netty-all:jar:4.0.23.Final:compile
      
      These are already in Spark core.
      
      Author: Calvin Jia <jia.calvin@gmail.com>
      
      Closes #4867 from calvinjia/upgrade_tachyon_0.6.0 and squashes the following commits:
      
      eed9230 [Calvin Jia] Update tachyon version to 0.6.1.
      11907b3 [Calvin Jia] Use TachyonURI for tachyon paths instead of strings.
      71bf441 [Calvin Jia] Upgrade Tachyon client version to 0.6.0.
      a41b9c60
    • Kamil Smuga's avatar
      SPARK-6454 [DOCS] Fix links to pyspark api · 6ef48632
      Kamil Smuga authored
      Author: Kamil Smuga <smugakamil@gmail.com>
      Author: stderr <smugakamil@gmail.com>
      
      Closes #5120 from kamilsmuga/master and squashes the following commits:
      
      fee3281 [Kamil Smuga] more python api links fixed for docs
      13240cb [Kamil Smuga] resolved merge conflicts with upstream/master
      6649b3b [Kamil Smuga] fix broken docs links to Python API
      92f03d7 [stderr] Fix links to pyspark api
      6ef48632
    • Jongyoul Lee's avatar
      [SPARK-6453][Mesos] Some Mesos*Suite have a different package with their classes · adb2ff75
      Jongyoul Lee authored
      - Moved Suites from o.a.s.s.mesos to o.a.s.s.cluster.mesos
      
      Author: Jongyoul Lee <jongyoul@gmail.com>
      
      Closes #5126 from jongyoul/SPARK-6453 and squashes the following commits:
      
      4f24a3e [Jongyoul Lee] [SPARK-6453][Mesos] Some Mesos*Suite have a different package with their classes - Fixed imports orders
      8ab149d [Jongyoul Lee] [SPARK-6453][Mesos] Some Mesos*Suite have a different package with their classes - Moved Suites from o.a.s.s.mesos to o.a.s.s.cluster.mesos
      adb2ff75
    • Hangchen Yu's avatar
      [SPARK-6455] [docs] Correct some mistakes and typos · ab4f516f
      Hangchen Yu authored
      Correct some typos, and correct a mistake in lib/PageRank.scala: the first PageRank implementation uses the standalone Graph interface, but the second uses the Pregel interface. The original comment may mislead readers of the code.
      
      Author: Hangchen Yu <yuhc@gitcafe.com>
      
      Closes #5128 from yuhc/master and squashes the following commits:
      
      53e5432 [Hangchen Yu] Merge branch 'master' of https://github.com/yuhc/spark
      67b77b5 [Hangchen Yu] [SPARK-6455] [docs] Correct some mistakes and typos
      206f2dc [Hangchen Yu] Correct some mistakes and typos.
      ab4f516f
    • Ryan Williams's avatar
      [SPARK-6448] Make history server log parse exceptions · b9fe504b
      Ryan Williams authored
      This helped me to debug a parse error that was due to the event log format changing recently.
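
      A sketch of the logging pattern (hypothetical names, not the actual history provider code):

      ```scala
      import java.io.File
      import scala.io.Source

      // Log parse failures instead of skipping the application silently, so
      // event log format changes are easy to debug.
      def replayLog(logFile: File): Unit = {
        try {
          for (line <- Source.fromFile(logFile).getLines()) {
            // parse and replay the event encoded on this line (elided)
          }
        } catch {
          case e: Exception =>
            Console.err.println(s"Exception while parsing event log ${logFile.getPath}: $e")
        }
      }
      ```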
      
      Author: Ryan Williams <ryan.blake.williams@gmail.com>
      
      Closes #5122 from ryan-williams/histerror and squashes the following commits:
      
      5831656 [Ryan Williams] line length
      c3742ae [Ryan Williams] Make history server log parse exceptions
      b9fe504b
    • ypcat's avatar
      [SPARK-6408] [SQL] Fix JDBCRDD filtering string literals · 9b1e1f20
      ypcat authored
      Author: ypcat <ypcat6@gmail.com>
      Author: Pei-Lun Lee <pllee@appier.com>
      
      Closes #5087 from ypcat/spark-6408 and squashes the following commits:
      
      1becc16 [ypcat] [SPARK-6408] [SQL] styling
      1bc4455 [ypcat] [SPARK-6408] [SQL] move nested function outside
      e57fa4a [ypcat] [SPARK-6408] [SQL] fix test case
      245ab6f [ypcat] [SPARK-6408] [SQL] add test cases for filtering quoted strings
      8962534 [Pei-Lun Lee] [SPARK-6408] [SQL] Fix filtering string literals
      9b1e1f20