  1. May 03, 2016
    • Dongjoon Hyun's avatar
      [SPARK-15057][GRAPHX] Remove stale TODO comment for making `enum` in GraphGenerators · 46965cd0
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      This PR removes a stale TODO comment in `GraphGenerators.scala`.
      
      ## How was this patch tested?
      
      Only a comment was removed.
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #12839 from dongjoon-hyun/SPARK-15057.
      46965cd0
    • Sean Owen's avatar
      [SPARK-14897][CORE] Upgrade Jetty to latest version of 8 · 57ac7c18
      Sean Owen authored
      ## What changes were proposed in this pull request?
      
      Update Jetty 8.1 from a 2013/10 release to the latest 2016/02 release, for security and bug fixes. This does not necessarily resolve the JIRA, as an update to 9.3 is still worth considering.
      
      ## How was this patch tested?
      
      Jenkins tests
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #12842 from srowen/SPARK-14897.
      57ac7c18
    • Reynold Xin's avatar
      [SPARK-15081] Move AccumulatorV2 and subclasses into util package · d557a5e0
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      This patch moves AccumulatorV2 and subclasses into util package.
      
      ## How was this patch tested?
      Updated relevant tests.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #12863 from rxin/SPARK-15081.
      d557a5e0
    • Dongjoon Hyun's avatar
      [SPARK-15053][BUILD] Fix Java Lint errors on Hive-Thriftserver module · a7444570
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      This issue fixes or hides 181 Java linter errors introduced by SPARK-14987, which copied the hive service code from Hive. These errors should be cleaned up before releasing Spark 2.0.
      
      - Fix UnusedImports (15 lines), RedundantModifier (14 lines), SeparatorWrap (9 lines), MethodParamPad (6 lines), FileTabCharacter (5 lines), ArrayTypeStyle (3 lines), ModifierOrder (3 lines), RedundantImport (1 line), CommentsIndentation (1 line), UpperEll (1 line), FallThrough (1 line), OneStatementPerLine (1 line), NewlineAtEndOfFile (1 line) errors.
      - Ignore `LineLength` errors under `hive/service/*` (118 lines).
      - Ignore `MethodName` error in `PasswdAuthenticationProvider.java` (1 line).
      - Ignore `NoFinalizer` error in `ThreadWithGarbageCleanup.java` (1 line).
      
      ## How was this patch tested?
      
      After the Jenkins build passes, run `dev/lint-java` manually.
      ```bash
      $ dev/lint-java
      Checkstyle checks passed.
      ```
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #12831 from dongjoon-hyun/SPARK-15053.
      a7444570
    • Sandeep Singh's avatar
      [MINOR][DOCS] Fix type Information in Quick Start and Programming Guide · dfd9723d
      Sandeep Singh authored
      Author: Sandeep Singh <sandeep@techaddict.me>
      
      Closes #12841 from techaddict/improve_docs_1.
      dfd9723d
    • Holden Karau's avatar
      [SPARK-6717][ML] Clear shuffle files after checkpointing in ALS · f10ae4b1
      Holden Karau authored
      ## What changes were proposed in this pull request?
      
      When ALS is run with a checkpoint interval, materialize the current state during the checkpoint and clean up the previous shuffles (non-blocking).
      
      ## How was this patch tested?
      
      Existing ALS unit tests, new ALS checkpoint cleanup unit tests, and a check of the shuffle files after an ALS run with checkpointing.
      
      Author: Holden Karau <holden@us.ibm.com>
      Author: Holden Karau <holden@pigscanfly.ca>
      
      Closes #11919 from holdenk/SPARK-6717-clear-shuffle-files-after-checkpointing-in-ALS.
      f10ae4b1
    • Andrew Ray's avatar
      [SPARK-13749][SQL][FOLLOW-UP] Faster pivot implementation for many distinct... · d8f528ce
      Andrew Ray authored
      [SPARK-13749][SQL][FOLLOW-UP] Faster pivot implementation for many distinct values with two phase aggregation
      
      ## What changes were proposed in this pull request?
      
      This is a follow-up PR for #11583. It turns 3 lazy vals into plain vals and adds unit test coverage.
      
      ## How was this patch tested?
      
      Existing unit tests and additional unit tests.
      
      Author: Andrew Ray <ray.andrew@gmail.com>
      
      Closes #12861 from aray/fast-pivot-follow-up.
      d8f528ce
  2. May 02, 2016
    • Reynold Xin's avatar
      [SPARK-15079] Support average/count/sum in Long/DoubleAccumulator · bb9ab56b
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      This patch removes AverageAccumulator and adds the ability to compute average to LongAccumulator and DoubleAccumulator. The patch also improves documentation for the two accumulators.
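
      A minimal usage sketch, assuming a SparkContext `sc` is in scope and that the accumulator exposes the sum/count/avg described above (the registration call is an assumption, not taken from this patch):
      ```scala
      // Hedged sketch: register a long accumulator and read its aggregates.
      val acc = sc.longAccumulator("latency")
      sc.parallelize(Seq(1L, 2L, 3L)).foreach(x => acc.add(x))
      println(s"sum=${acc.sum} count=${acc.count} avg=${acc.avg}")
      ```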
      
      ## How was this patch tested?
      Added unit tests for this.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #12858 from rxin/SPARK-15079.
      bb9ab56b
    • Marcin Tustin's avatar
      [SPARK-14685][CORE] Document heritability of localProperties · 8028f3a0
      Marcin Tustin authored
      ## What changes were proposed in this pull request?
      
      This updates the Java/Scala doc for setLocalProperty to document the heritability of localProperties. It also adds tests for that behaviour.
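
      A minimal sketch of the documented behaviour, assuming a SparkContext `sc` is in scope (the property key is hypothetical):
      ```scala
      // A local property set on the parent thread is inherited by threads
      // created afterwards.
      sc.setLocalProperty("my.group", "etl")
      val child = new Thread(new Runnable {
        override def run(): Unit = {
          // The value was copied from the parent when this thread was created.
          assert(sc.getLocalProperty("my.group") == "etl")
        }
      })
      child.start()
      child.join()
      ```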
      
      ## How was this patch tested?
      
      Tests pass. New tests were added.
      
      Author: Marcin Tustin <marcin.tustin@gmail.com>
      
      Closes #12455 from marcintustin/SPARK-14685.
      8028f3a0
    • Shixiong Zhu's avatar
      [SPARK-15077][SQL] Use a fair lock to avoid thread starvation in StreamExecution · 4e3685ae
      Shixiong Zhu authored
      ## What changes were proposed in this pull request?
      
      Right now `StreamExecution.awaitBatchLock` uses an unfair lock. `StreamExecution.awaitOffset` may run too long and fail some tests because `StreamExecution.constructNextBatch` keeps acquiring the lock.
      
      See: https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-sbt-hadoop-2.4/865/testReport/junit/org.apache.spark.sql.streaming/FileStreamSourceStressTestSuite/file_source_stress_test/
      
      This PR uses a fair ReentrantLock to resolve the thread starvation issue.
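
      A minimal sketch of the idea, not the actual StreamExecution code: passing `true` to `ReentrantLock` requests fair ordering, so the longest-waiting thread acquires the lock next.
      ```scala
      import java.util.concurrent.locks.ReentrantLock

      // Fair lock: waiting threads acquire it in FIFO order, so awaitOffset
      // is no longer starved by constructNextBatch re-acquiring the lock.
      val awaitBatchLock = new ReentrantLock(true)

      awaitBatchLock.lock()
      try {
        // critical section
      } finally {
        awaitBatchLock.unlock()
      }
      ```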
      
      ## How was this patch tested?
      
      Modified `FileStreamSourceStressTestSuite.test("file source stress test")` to run the test code 100 times locally. Without this patch it always fails due to a timeout.
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #12852 from zsxwing/SPARK-15077.
      4e3685ae
    • bomeng's avatar
      [SPARK-15062][SQL] fix list type infer serializer issue · 0fd95be3
      bomeng authored
      ## What changes were proposed in this pull request?
      
      Make the serializer correctly inferred when the input type is `List[_]`. Since `List[_]` is a subtype of `Seq[_]`, it should take the `Seq` path; previously it was matched to a different case (`case t if definedByConstructorParams(t)`).
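
      A minimal sketch of the case this fixes, assuming a SparkSession `spark` and `import spark.implicits._` are in scope (the case class is hypothetical):
      ```scala
      // A List-typed field should now go through the Seq serializer path.
      case class Record(xs: List[Int])
      val ds = Seq(Record(List(1, 2, 3)), Record(List(4, 5))).toDS()
      ds.show()
      ```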
      
      ## How was this patch tested?
      
      A new test case was added.
      
      Author: bomeng <bmeng@us.ibm.com>
      
      Closes #12849 from bomeng/SPARK-15062.
      0fd95be3
    • Herman van Hovell's avatar
      [SPARK-15047][SQL] Cleanup SQL Parser · 1c19c276
      Herman van Hovell authored
      ## What changes were proposed in this pull request?
      This PR addresses a few minor issues in SQL parser:
      
      - Removes some unused rules and keywords in the grammar.
      - Removes the code path for fallback SQL parsing (it was needed for Hive native parsing).
      - Uses `UnresolvedGenerator` instead of hard-coding `Explode` & `JsonTuple`.
      - Adds a more generic way of creating error messages for unsupported Hive features.
      - Uses `visitFunctionName` as much as possible.
      - Interprets a `CatalogColumn`'s `DataType` directly instead of parsing it again.
      
      ## How was this patch tested?
      Existing tests.
      
      Author: Herman van Hovell <hvanhovell@questtec.nl>
      
      Closes #12826 from hvanhovell/SPARK-15047.
      1c19c276
    • hyukjinkwon's avatar
      [SPARK-15050][SQL] Put CSV and JSON options as Python csv and json function parameters · d37c7f7f
      hyukjinkwon authored
      ## What changes were proposed in this pull request?
      
      https://issues.apache.org/jira/browse/SPARK-15050
      
      This PR adds function parameters to the Python API for reading and writing `csv()`.
      
      ## How was this patch tested?
      
      This was tested by `./dev/run_tests`.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      Author: Hyukjin Kwon <gurwls223@gmail.com>
      
      Closes #12834 from HyukjinKwon/SPARK-15050.
      d37c7f7f
    • Liwei Lin's avatar
      [SPARK-14747][SQL] Add assertStreaming/assertNoneStreaming checks in DataFrameWriter · 35d9c8aa
      Liwei Lin authored
      ## Problem
      
      If an end user happens to write code mixed with continuous-query-oriented methods and non-continuous-query-oriented methods:
      
      ```scala
      ctx.read
         .format("text")
         .stream("...")  // continuous query
         .write
         .text("...")    // non-continuous query; should be startStream() here
      ```
      
      They would get this somewhat confusing exception:
      
      >
      Exception in thread "main" java.lang.AssertionError: assertion failed: No plan for FileSource[./continuous_query_test_input]
      	at scala.Predef$.assert(Predef.scala:170)
      	at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:59)
      	at org.apache.spark.sql.catalyst.planning.QueryPlanner.planLater(QueryPlanner.scala:54)
      	at ...
      
      ## What changes were proposed in this pull request?
      
      This PR adds checks for continuous-query-oriented methods and non-continuous-query-oriented methods in `DataFrameWriter`:
      
      | method         | can be called on continuous query? | can be called on non-continuous query? |
      |:--------------:|:----------------------------------:|:--------------------------------------:|
      | mode           |                                    | yes                                    |
      | trigger        | yes                                |                                        |
      | format         | yes                                | yes                                    |
      | option/options | yes                                | yes                                    |
      | partitionBy    | yes                                | yes                                    |
      | bucketBy       |                                    | yes                                    |
      | sortBy         |                                    | yes                                    |
      | save           |                                    | yes                                    |
      | queryName      | yes                                |                                        |
      | startStream    | yes                                |                                        |
      | insertInto     |                                    | yes                                    |
      | saveAsTable    |                                    | yes                                    |
      | jdbc           |                                    | yes                                    |
      | json           |                                    | yes                                    |
      | parquet        |                                    | yes                                    |
      | orc            |                                    | yes                                    |
      | text           |                                    | yes                                    |
      | csv            |                                    | yes                                    |
      
      After this PR's change, the friendly exception would be:
      >
      Exception in thread "main" org.apache.spark.sql.AnalysisException: text() can only be called on non-continuous queries;
      	at org.apache.spark.sql.DataFrameWriter.assertNotStreaming(DataFrameWriter.scala:678)
      	at org.apache.spark.sql.DataFrameWriter.text(DataFrameWriter.scala:629)
      	at ss.SSDemo$.main(SSDemo.scala:47)
      
      ## How was this patch tested?
      
      Dedicated unit tests were added.
      
      Author: Liwei Lin <lwlin7@gmail.com>
      
      Closes #12521 from lw-lin/dataframe-writer-check.
      35d9c8aa
    • Herman van Hovell's avatar
      [SPARK-14785] [SQL] Support correlated scalar subqueries · f362363d
      Herman van Hovell authored
      ## What changes were proposed in this pull request?
      In this PR we add support for correlated scalar subqueries. An example of such a query is:
      ```SQL
      select * from tbl1 a where a.value > (select max(value) from tbl2 b where b.key = a.key)
      ```
      The implementation adds the `RewriteCorrelatedScalarSubquery` rule to the Optimizer. This rule plans these subqueries using `LEFT OUTER` joins. It currently supports rewrites for `Project`, `Aggregate` & `Filter` logical plans.
      
      I could not find well-defined semantics for the use of scalar subqueries in an `Aggregate`. The current implementation evaluates the scalar subquery *before* aggregation. This means that you either have to make the scalar subquery part of the grouping expressions, or aggregate it further. I am open to suggestions on this.
      
      The implementation currently forces the uniqueness of a scalar subquery by enforcing that it is aggregated and that the resulting column is wrapped in an `AggregateExpression`.
      
      ## How was this patch tested?
      Added tests to `SubquerySuite`.
      
      Author: Herman van Hovell <hvanhovell@questtec.nl>
      
      Closes #12822 from hvanhovell/SPARK-14785.
      f362363d
    • poolis's avatar
      [SPARK-12928][SQL] Oracle FLOAT datatype is not properly handled when reading via JDBC · 917d05f4
      poolis authored
      The contribution is my original work and I license the work to the project under the project's open source license.
      
      Author: poolis <gmichalopoulos@gmail.com>
      Author: Greg Michalopoulos <gmichalopoulos@gmail.com>
      
      Closes #10899 from poolis/spark-12928.
      917d05f4
    • Reynold Xin's avatar
      [SPARK-15052][SQL] Use builder pattern to create SparkSession · ca1b2198
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      This patch creates a builder pattern for creating SparkSession. The new code is currently unused and mostly dead code; I'm putting it up here for feedback.
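
      A minimal sketch of the intended usage; the method names follow the common builder style and may differ from what this patch finally exposes:
      ```scala
      import org.apache.spark.sql.SparkSession

      // Hedged sketch: build (or reuse) a session via the builder.
      val spark = SparkSession.builder()
        .master("local[*]")
        .appName("builder-example")
        .config("spark.some.option", "some-value")
        .getOrCreate()
      ```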
      
      There are a few TODOs that can be done as follow-up pull requests:
      - [ ] Update tests to use this
      - [ ] Update examples to use this
      - [ ] Clean up SQLContext code w.r.t. this one (i.e. SparkSession shouldn't call into SQLContext.getOrCreate; it should be the other way around)
      - [ ] Remove SparkSession.withHiveSupport
      - [ ] Disable the old constructor (by making it private) so the only way to start a SparkSession is through this builder pattern
      
      ## How was this patch tested?
      Cleaning this up and switching existing tests to use it will be part of a future pull request.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #12830 from rxin/sparksession-builder.
      ca1b2198
    • Reynold Xin's avatar
      [SPARK-15054] Deprecate old accumulator API · d5c79f56
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      This patch deprecates the old accumulator API.
      
      ## How was this patch tested?
      N/A
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #12832 from rxin/SPARK-15054.
      d5c79f56
    • Pete Robbins's avatar
      [SPARK-13745] [SQL] Support columnar in memory representation on Big Endian platforms · 8a1ce489
      Pete Robbins authored
      ## What changes were proposed in this pull request?
      
      The parquet datasource and ColumnarBatch tests fail on big-endian platforms. This patch adds support for correctly interpreting the little-endian byte arrays on a big-endian platform.
      
      ## How was this patch tested?
      
      Spark test builds were run on big-endian z/Linux, plus a regression build on little-endian amd64.
      
      Author: Pete Robbins <robbinspg@gmail.com>
      
      Closes #12397 from robbinspg/master.
      8a1ce489
    • Davies Liu's avatar
      [SPARK-14781] [SQL] support nested predicate subquery · 95e37214
      Davies Liu authored
      ## What changes were proposed in this pull request?
      
      In order to support nested predicate subqueries, this PR introduces an internal join type, ExistenceJoin, which emits all the rows from the left side plus an additional column indicating whether any rows from the right side matched (it is not null-aware right now). This additional column can be used to replace the subquery in a Filter.
      
      In theory, every predicate subquery could use this join type, but it is slower than LeftSemi and LeftAnti, so it is only used for nested subqueries (subqueries inside OR).
      
      For example, the following SQL uses nested predicate subqueries:
      ```sql
      SELECT a FROM t  WHERE EXISTS (select 0) OR EXISTS (select 1)
      ```
      
      This PR also fixes a bug in predicate subquery push down through joins (they should not be pushed down).
      
      Nested null-aware subqueries are still not supported, for example `a > 3 OR b NOT IN (select bb from t)`.
      
      After this, we can run TPCDS queries Q10, Q35, and Q45.
      
      ## How was this patch tested?
      
      Added unit tests.
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #12820 from davies/or_exists.
      95e37214
    • Dongjoon Hyun's avatar
      [SPARK-14830][SQL] Add RemoveRepetitionFromGroupExpressions optimizer. · 6e632012
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      This PR aims to optimize GroupExpressions by removing repeating expressions. `RemoveRepetitionFromGroupExpressions` is added.
      
      **Before**
      ```scala
      scala> sql("select a+1 from values 1,2 T(a) group by a+1, 1+a, A+1, 1+A").explain()
      == Physical Plan ==
      WholeStageCodegen
      :  +- TungstenAggregate(key=[(a#0 + 1)#6,(1 + a#0)#7,(A#0 + 1)#8,(1 + A#0)#9], functions=[], output=[(a + 1)#5])
      :     +- INPUT
      +- Exchange hashpartitioning((a#0 + 1)#6, (1 + a#0)#7, (A#0 + 1)#8, (1 + A#0)#9, 200), None
         +- WholeStageCodegen
            :  +- TungstenAggregate(key=[(a#0 + 1) AS (a#0 + 1)#6,(1 + a#0) AS (1 + a#0)#7,(A#0 + 1) AS (A#0 + 1)#8,(1 + A#0) AS (1 + A#0)#9], functions=[], output=[(a#0 + 1)#6,(1 + a#0)#7,(A#0 + 1)#8,(1 + A#0)#9])
            :     +- INPUT
            +- LocalTableScan [a#0], [[1],[2]]
      ```
      
      **After**
      ```scala
      scala> sql("select a+1 from values 1,2 T(a) group by a+1, 1+a, A+1, 1+A").explain()
      == Physical Plan ==
      WholeStageCodegen
      :  +- TungstenAggregate(key=[(a#0 + 1)#6], functions=[], output=[(a + 1)#5])
      :     +- INPUT
      +- Exchange hashpartitioning((a#0 + 1)#6, 200), None
         +- WholeStageCodegen
            :  +- TungstenAggregate(key=[(a#0 + 1) AS (a#0 + 1)#6], functions=[], output=[(a#0 + 1)#6])
            :     +- INPUT
            +- LocalTableScan [a#0], [[1],[2]]
      ```
      
      ## How was this patch tested?
      
      Passes the Jenkins tests (with a new test case).
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #12590 from dongjoon-hyun/SPARK-14830.
      6e632012
    • Shixiong Zhu's avatar
      [SPARK-14579][SQL] Fix the race condition in StreamExecution.processAllAvailable again · a35a67a8
      Shixiong Zhu authored
      ## What changes were proposed in this pull request?
      
      #12339 didn't fix the race condition. MemorySinkSuite is still flaky: https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-maven-hadoop-2.2/814/testReport/junit/org.apache.spark.sql.streaming/MemorySinkSuite/registering_as_a_table/
      
      Here is an execution order to reproduce it.
      
      | Time        |Thread 1           | MicroBatchThread  |
      |:-------------:|:-------------:|:-----:|
      | 1 | |  `MemorySink.getOffset` |
      | 2 | |  availableOffsets ++= newData (availableOffsets is not changed here)  |
      | 3 | addData(newData)      |   |
      | 4 | Set `noNewData` to `false` in  processAllAvailable |  |
      | 5 | | `dataAvailable` returns `false`   |
      | 6 | | noNewData = true |
      | 7 | `noNewData` is true so just return | |
      | 8 |  assert results and fail | |
      | 9 |   | `dataAvailable` returns true so process the new batch |
      
      This PR expands the scope of `awaitBatchLock.synchronized` to eliminate the above race.
      
      ## How was this patch tested?
      
      test("stress test"). It always failed before this patch. And it will pass after applying this patch. Ignore this test in the PR as it takes several minutes to finish.
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #12582 from zsxwing/SPARK-14579-2.
      a35a67a8
    • Andrew Ray's avatar
      [SPARK-13749][SQL] Faster pivot implementation for many distinct values with two phase aggregation · 99274418
      Andrew Ray authored
      ## What changes were proposed in this pull request?
      
      The existing implementation of pivot translates into a single aggregation with one aggregate per distinct pivot value. When the number of distinct pivot values is large (say 1000+) this can get extremely slow since each input value gets evaluated on every aggregate even though it only affects the value of one of them.
      
      I'm proposing an alternate strategy for when there are 10+ (somewhat arbitrary threshold) distinct pivot values. We do two phases of aggregation. In the first we group by the grouping columns plus the pivot column and perform the specified aggregations (one or sometimes more). In the second aggregation we group by the grouping columns and use the new (non public) PivotFirst aggregate that rearranges the outputs of the first aggregation into an array indexed by the pivot value. Finally we do a project to extract the array entries into the appropriate output column.
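
      A minimal sketch of a pivot that benefits from this plan, assuming `df` is a DataFrame with `year`, `country`, and `sales` columns (the column names are hypothetical):
      ```scala
      import org.apache.spark.sql.functions.sum

      // With many distinct pivot values, this is planned as the two-phase
      // aggregation described above.
      val pivoted = df.groupBy("year")
        .pivot("country")
        .agg(sum("sales"))
      ```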
      
      ## How was this patch tested?
      
      Additional unit tests in DataFramePivotSuite and manual larger scale testing.
      
      Author: Andrew Ray <ray.andrew@gmail.com>
      
      Closes #11583 from aray/fast-pivot.
      99274418
    • Jeff Zhang's avatar
      [SPARK-14845][SPARK_SUBMIT][YARN] spark.files in properties file is n… · 0a302699
      Jeff Zhang authored
      ## What changes were proposed in this pull request?
      
      Initialize SparkSubmitArgument#files first from the spark-submit arguments and then from the properties file, so that the sys property spark.yarn.dist.files will be set correctly.
      ```
      OptionAssigner(args.files, YARN, ALL_DEPLOY_MODES, sysProp = "spark.yarn.dist.files"),
      ```
      ## How was this patch tested?
      
      Manual test: a file defined in the properties file is also distributed to the driver in yarn-cluster mode.
      
      Author: Jeff Zhang <zjffdu@apache.org>
      
      Closes #12656 from zjffdu/SPARK-14845.
      0a302699
    • Wenchen Fan's avatar
      [SPARK-14637][SQL] object expressions cleanup · 0513c3ac
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      Simplify and clean up some object expressions:
      
      1. simplify the logic to handle `propagateNull`
      2. add `propagateNull` parameter to `Invoke`
      3. simplify the unbox logic in `Invoke`
      4. other minor cleanup
      
      TODO: simplify `MapObjects`
      
      ## How was this patch tested?
      
      existing tests.
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #12399 from cloud-fan/object.
      0513c3ac
    • Ben McCann's avatar
      Fix reference to external metrics documentation · 214d1be4
      Ben McCann authored
      Author: Ben McCann <benjamin.j.mccann@gmail.com>
      
      Closes #12833 from benmccann/patch-1.
      214d1be4
  3. May 01, 2016
    • Reynold Xin's avatar
      [SPARK-15049] Rename NewAccumulator to AccumulatorV2 · 44da8d8e
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      NewAccumulator isn't the best name if we ever come up with v3 of the API.
      
      ## How was this patch tested?
      Updated tests to reflect the change.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #12827 from rxin/SPARK-15049.
      44da8d8e
    • hyukjinkwon's avatar
      [SPARK-13425][SQL] Documentation for CSV datasource options · a832cef1
      hyukjinkwon authored
      ## What changes were proposed in this pull request?
      
      This PR adds the explanation and documentation for CSV options for reading and writing.
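
      A minimal example of the kind of options being documented, assuming a SparkSession `spark` is in scope; the specific options shown are common ones and are not an exhaustive list of what the docs cover:
      ```scala
      // Read a CSV file with a few of the documented options.
      val df = spark.read
        .option("header", "true")
        .option("inferSchema", "true")
        .option("nullValue", "NA")
        .csv("/path/to/people.csv")
      ```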
      
      ## How was this patch tested?
      
      Style tests with `./dev/run_tests` for documentation style.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      Author: Hyukjin Kwon <gurwls223@gmail.com>
      
      Closes #12817 from HyukjinKwon/SPARK-13425.
      a832cef1
    • Xusen Yin's avatar
      [SPARK-14931][ML][PYTHON] Mismatched default values between pipelines in Spark and PySpark - update · a6428292
      Xusen Yin authored
      ## What changes were proposed in this pull request?
      
      This PR is an update for [https://github.com/apache/spark/pull/12738] which:
      * Adds a generic unit test for JavaParams wrappers in pyspark.ml for checking default Param values vs. the defaults in the Scala side
      * Various fixes for bugs found
        * This includes changing classes taking weightCol to treat unset and empty String Param values the same way.
      
      Defaults changed:
      * Scala
       * LogisticRegression: weightCol defaults to not set (instead of empty string)
       * StringIndexer: labels default to not set (instead of empty array)
       * GeneralizedLinearRegression:
         * maxIter always defaults to 25 (simpler than defaulting to 25 for a particular solver)
         * weightCol defaults to not set (instead of empty string)
       * LinearRegression: weightCol defaults to not set (instead of empty string)
      * Python
       * MultilayerPerceptron: layers default to not set (instead of [1,1])
       * ChiSqSelector: numTopFeatures defaults to 50 (instead of not set)
      
      ## How was this patch tested?
      
      Generic unit test. Manually tested that unit test by changing defaults and verifying that this broke the test.
      
      Author: Joseph K. Bradley <joseph@databricks.com>
      Author: yinxusen <yinxusen@gmail.com>
      
      Closes #12816 from jkbradley/yinxusen-SPARK-14931.
      a6428292
    • Allen's avatar
      [SPARK-14505][CORE] Fix bug : creating two SparkContext objects in the same... · cdf9e975
      Allen authored
      [SPARK-14505][CORE] Fix bug : creating two SparkContext objects in the same jvm, the first one will can not run any task!
      
      After creating two SparkContext objects in the same JVM (the second one cannot be created successfully), using the first one to run a job will throw an exception like the one below:
      
      ![image](https://cloud.githubusercontent.com/assets/7162889/14402832/0c8da2a6-fe73-11e5-8aba-68ee3ddaf605.png)
      
      Author: Allen <yufan_1990@163.com>
      
      Closes #12273 from the-sea/context-create-bug.
      cdf9e975
  4. Apr 30, 2016
    • Wenchen Fan's avatar
      [SPARK-15033][SQL] fix a flaky test in CachedTableSuite · 90787de8
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      This is caused by https://github.com/apache/spark/pull/12776, which removes the `synchronized` from all methods in `AccumulatorContext`.
      
      However, a test in `CachedTableSuite` synchronizes on `AccumulatorContext` and expects that no one else can change it, which is no longer true.
      
      This PR updates that test so that it no longer needs to lock on `AccumulatorContext`.
      
      ## How was this patch tested?
      
      N/A
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #12811 from cloud-fan/flaky.
      90787de8
    • Hossein's avatar
      [SPARK-14143] Options for parsing NaNs, Infinity and nulls for numeric types · 507bea5c
      Hossein authored
      1. Adds the following option for parsing NaNs: nanValue.
      
      2. Adds the following options for parsing infinity: positiveInf, negativeInf.
      
      `TypeCast.castTo` is unit tested and an end-to-end test is added to `CSVSuite`.
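
      A minimal sketch using the option names from this change, assuming a SparkSession `spark` is in scope; the values shown are illustrative, not the defaults:
      ```scala
      // Parse NaN and infinity markers while reading CSV.
      val df = spark.read
        .option("nanValue", "NA")
        .option("positiveInf", "Inf")
        .option("negativeInf", "-Inf")
        .csv("/path/to/numbers.csv")
      ```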
      
      Author: Hossein <hossein@databricks.com>
      
      Closes #11947 from falaki/SPARK-14143.
      507bea5c
    • Yin Huai's avatar
      [SPARK-15034][SPARK-15035][SPARK-15036][SQL] Use spark.sql.warehouse.dir as the warehouse location · 0182d959
      Yin Huai authored
      This PR contains three changes:
      1. We will use spark.sql.warehouse.dir to set the warehouse location; we will not use hive.metastore.warehouse.dir (a configuration sketch follows this list).
      2. SessionCatalog needs to set the location of the default db. Otherwise, when creating a table in a SparkSession without hive support, the default db's path will be an empty string.
      3. When we create a database, we need to make the path qualified.
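
      A minimal sketch of setting the new warehouse location (item 1 above); the path is an arbitrary example:
      ```scala
      import org.apache.spark.sql.SparkSession

      val spark = SparkSession.builder()
        .config("spark.sql.warehouse.dir", "/tmp/spark-warehouse")
        .getOrCreate()
      ```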
      
      Existing tests and new tests
      
      Author: Yin Huai <yhuai@databricks.com>
      
      Closes #12812 from yhuai/warehouse.
      0182d959
    • Yanbo Liang's avatar
      [SPARK-15030][ML][SPARKR] Support formula in spark.kmeans in SparkR · 19a6d192
      Yanbo Liang authored
      ## What changes were proposed in this pull request?
      * ```RFormula``` supports an empty response variable like ```~ x + y```.
      * Support formula in ```spark.kmeans``` in SparkR.
      * Fix some outdated docs for SparkR.
      
      ## How was this patch tested?
      Unit tests.
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #12813 from yanboliang/spark-15030.
      19a6d192
    • Herman van Hovell's avatar
      [SPARK-14952][CORE][ML] Remove methods that were deprecated in 1.6.0 · e5fb78ba
      Herman van Hovell authored
      #### What changes were proposed in this pull request?
      
      This PR removes three methods that were deprecated in 1.6.0:
      - `PortableDataStream.close()`
      - `LinearRegression.weights`
      - `LogisticRegression.weights`
      
      The rationale for doing this is that the impact is small and that Spark 2.0 is a major release.
      
      #### How was this patch tested?
      Compilation succeeded.
      
      Author: Herman van Hovell <hvanhovell@questtec.nl>
      
      Closes #12732 from hvanhovell/SPARK-14952.
      e5fb78ba
    • Xiangrui Meng's avatar
      [SPARK-14653][ML] Remove json4s from mllib-local · 0847fe4e
      Xiangrui Meng authored
      ## What changes were proposed in this pull request?
      
      This PR moves Vector.toJson/fromJson to ml.linalg.VectorEncoder under mllib/ to keep mllib-local's dependencies minimal. The JSON encoding is used by Params, so we still need this feature in SPARK-14615, where we will switch to ml.linalg in the spark.ml APIs.
      
      ## How was this patch tested?
      
      Copied existing unit tests over.
      
      cc: dbtsai
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #12802 from mengxr/SPARK-14653.
      0847fe4e
    • Junyang's avatar
      [SPARK-13289][MLLIB] Fix infinite distances between word vectors in Word2VecModel · 1192fe4c
      Junyang authored
      ## What changes were proposed in this pull request?
      
      This PR fixes the bug that generates infinite distances between word vectors. For example,
      
      Before this PR, we have
      ```
      val synonyms = model.findSynonyms("who", 40)
      ```
      will give the following results:
      ```
      to Infinity
      and Infinity
      that Infinity
      with Infinity
      ```
      With this PR, the distance between words is a value between 0 and 1, as follows:
      ```
      scala> model.findSynonyms("who", 10)
      res0: Array[(String, Double)] = Array((Harvard-educated,0.5253688097000122), (ex-SAS,0.5213794708251953), (McMutrie,0.5187736749649048), (fellow,0.5166833400726318), (businessman,0.5145374536514282), (American-born,0.5127736330032349), (British-born,0.5062344074249268), (gray-bearded,0.5047978162765503), (American-educated,0.5035858750343323), (mentored,0.49849334359169006))
      
      scala> model.findSynonyms("king", 10)
      res1: Array[(String, Double)] = Array((queen,0.6787897944450378), (prince,0.6786158084869385), (monarch,0.659771203994751), (emperor,0.6490438580513), (goddess,0.643266499042511), (dynasty,0.635733425617218), (sultan,0.6166239380836487), (pharaoh,0.6150713562965393), (birthplace,0.6143025159835815), (empress,0.6109727025032043))
      
      scala> model.findSynonyms("queen", 10)
      res2: Array[(String, Double)] = Array((princess,0.7670737504959106), (godmother,0.6982434988021851), (raven-haired,0.6877717971801758), (swan,0.684934139251709), (hunky,0.6816608309745789), (Titania,0.6808111071586609), (heroine,0.6794036030769348), (king,0.6787897944450378), (diva,0.67848801612854), (lip-synching,0.6731793284416199))
      ```
      
      ### There are two places changed in this PR:
      - Normalize the word vectors to avoid overflow when calculating the inner product between word vectors. This also simplifies the distance calculation, since the word vectors only need to be normalized once (a small sketch of the normalization follows this list).
      - Scale the learning rate by the number of iterations, to be consistent with the Google Word2Vec implementation.
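
      A minimal sketch of the normalization idea (not the actual Word2Vec code; assumes a non-zero vector): once vectors are unit length, their dot product is the cosine similarity and stays within [-1, 1].
      ```scala
      // Divide each component by the vector's L2 norm.
      def normalize(v: Array[Float]): Array[Float] = {
        val norm = math.sqrt(v.map(x => x.toDouble * x).sum).toFloat
        v.map(_ / norm)
      }
      ```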
      
      ## How was this patch tested?
      
      Use word2vec to train a text corpus, and run model.findSynonyms() to get the distances between word vectors.
      
      Author: Junyang <fly.shenjy@gmail.com>
      Author: flyskyfly <fly.shenjy@gmail.com>
      
      Closes #11812 from flyjy/TVec.
      1192fe4c
    • pshearer's avatar
      [SPARK-13973][PYSPARK] Make pyspark fail noisily if IPYTHON or IPYTHON_OPTS are set · 0368ff30
      pshearer authored
      ## What changes were proposed in this pull request?
      
      https://issues.apache.org/jira/browse/SPARK-13973
      
      Following discussion with srowen, the IPYTHON and IPYTHON_OPTS variables are removed. If they are set in the user's environment, pyspark will not execute and will print an error message. Failing noisily will force users to remove these options and learn the new configuration scheme, which is much more sustainable and less confusing.
      
      ## How was this patch tested?
      
      Manual testing; set IPYTHON=1 and verified that the error message prints.
      
      Author: pshearer <pshearer@massmutual.com>
      Author: shearerp <shearerp@umich.edu>
      
      Closes #12528 from shearerp/master.
      0368ff30
    • Reynold Xin's avatar
      [SPARK-15028][SQL] Remove HiveSessionState.setDefaultOverrideConfs · 8dc3987d
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      This patch removes some code that is no longer relevant, mainly HiveSessionState.setDefaultOverrideConfs.
      
      ## How was this patch tested?
      N/A
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #12806 from rxin/SPARK-15028.
      8dc3987d
    • Xiangrui Meng's avatar
      [SPARK-14831][.2][ML][R] rename ml.save/ml.load to write.ml/read.ml · b3ea5793
      Xiangrui Meng authored
      ## What changes were proposed in this pull request?
      
      Continue the work of #12789 to rename ml.save/ml.load to write.ml/read.ml, which are more consistent with read.df/write.df and other methods in SparkR.
      
      I didn't rename `data` to `df` because we still use `predict` for prediction, which uses `newData` to match the signature in R.
      
      ## How was this patch tested?
      
      Existing unit tests.
      
      cc: yanboliang thunterdb
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #12807 from mengxr/SPARK-14831.
      b3ea5793