  1. Mar 25, 2016
    • [SPARK-13919] [SQL] fix column pruning through filter · 6603d9f7
      Davies Liu authored
      ## What changes were proposed in this pull request?
      
      This PR fixes a conflict between ColumnPruning and PushPredicatesThroughProject: ColumnPruning tries to insert a Project before a Filter, while PushPredicatesThroughProject moves the Filter before the Project. The fix is to remove the Project before the Filter if the Project only does column pruning.
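
      A hedged sketch of the failure mode (the frame and column names are hypothetical; assumes a spark-shell session where `sqlContext` and its implicits are in scope):
      ```scala
      // Query shape where the two rules used to fight: ColumnPruning inserts a
      // Project before the Filter to prune unused columns, while
      // PushPredicatesThroughProject moves the Filter back before the Project,
      // so the optimizer ping-pongs until it hits the max-iterations limit.
      val df = sqlContext.range(10).selectExpr("id AS a", "id * 2 AS b")
      val pruned = df.filter($"b" > 0).select("a")
      pruned.explain(true) // the column-pruning Project before the Filter is gone
      ```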
      
      The RuleExecutor now fails the test if it reaches the maximum number of iterations.
      
      Closes #11745
      
      ## How was this patch tested?
      
      Existing tests.
      
      One test case is still failing; it is disabled for now and will be fixed by https://issues.apache.org/jira/browse/SPARK-14137
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #11828 from davies/fail_rule.
    • [SPARK-13887][PYTHON][TRIVIAL][BUILD] Make lint-python script fail fast · 55a60576
      Holden Karau authored
      ## What changes were proposed in this pull request?
      
      Change the lint-python script to stop on the first error rather than accumulating errors, so it is clearer why the check failed (requested by rxin). Also, while in the file, remove the commented-out code.
      
      ## How was this patch tested?
      
      Manually ran the lint-python script locally with and without pep8 errors and verified the expected results.
      
      Author: Holden Karau <holden@us.ibm.com>
      
      Closes #11898 from holdenk/SPARK-13887-pylint-fast-fail.
    • [SPARK-13456][SQL][FOLLOW-UP] lazily generate the outer pointer for case class defined in REPL · e9b6e7d8
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      In https://github.com/apache/spark/pull/11410, we missed a corner case: defining an inner class and using it in a `Dataset` at the same time in paste mode. In this case, the inner class and the `Dataset` are inside the same line object; when we build the `Dataset`, we try to get the outer pointer from the line object, and it fails because the line object is not initialized yet.
      
      https://issues.apache.org/jira/browse/SPARK-13456?focusedCommentId=15209174&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15209174 is an example for this corner case.
      
      This PR makes the process of getting the outer pointer from the line object lazy, so that we can successfully build the `Dataset` and finish initializing the line object.
      
      ## How was this patch tested?
      
      A new test in the REPL suite.
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #11931 from cloud-fan/repl.
    • [SPARK-14149] Log exceptions in tryOrIOException · 70a6f0bb
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      We ran into a problem today debugging a class loading issue during deserialization; the JVM was masking the underlying exception, which made it very difficult to debug. We can, however, log the exceptions ourselves with a try/catch in serialization/deserialization. The good news is that all these methods already use Utils.tryOrIOException, so we can put the try/catch and logging in a single place.
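
      A minimal sketch of the pattern, not Spark's exact `Utils.tryOrIOException` (the logger is replaced by stderr to keep the sketch self-contained):
      ```scala
      import java.io.IOException
      import scala.util.control.NonFatal

      // Log the underlying exception before rethrowing, so the JVM cannot mask it.
      def tryOrIOException[T](block: => T): T = {
        try {
          block
        } catch {
          case e: IOException =>
            System.err.println(s"Exception encountered: $e")
            throw e
          case NonFatal(e) =>
            System.err.println(s"Exception encountered: $e")
            throw new IOException(e)
        }
      }
      ```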
      
      ## How was this patch tested?
      A logging change with a manual test.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #11951 from rxin/SPARK-14149.
    • [SPARK-14014][SQL] Integrate session catalog (attempt #2) · 20ddf5fd
      Andrew Or authored
      ## What changes were proposed in this pull request?
      
      This reopens #11836, which was merged but promptly reverted because it introduced flaky Hive tests.
      
      ## How was this patch tested?
      
      See `CatalogTestCases`, `SessionCatalogSuite` and `HiveContextSuite`.
      
      Author: Andrew Or <andrew@databricks.com>
      
      Closes #11938 from andrewor14/session-catalog-again.
    • [SPARK-14145][SQL] Remove the untyped version of Dataset.groupByKey · 1c70b765
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      Dataset has two variants of groupByKey, one for untyped and the other for typed. It actually doesn't make as much sense to have an untyped API here, since apps that want to use untyped APIs should just use the groupBy "DataFrame" API.
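
      A hedged sketch of the two styles after this change (the case class and data are hypothetical; assumes a spark-shell session):
      ```scala
      case class Employee(name: String, department: String)
      val ds = sqlContext.createDataset(Seq(
        Employee("alice", "eng"), Employee("bob", "sales")))

      val typed   = ds.groupByKey(_.department) // typed API: KeyValueGroupedDataset
      val untyped = ds.groupBy($"department")   // untyped grouping via the DataFrame API
      ```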
      
      ## How was this patch tested?
      This patch removes a method, and removes the associated tests.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #11949 from rxin/SPARK-14145.
    • [SPARK-14142][SQL] Replace internal use of unionAll with union · 3619fec1
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      unionAll has been deprecated in SPARK-14088.
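
      A hedged sketch of the preferred spelling (the frames are illustrative; assumes a spark-shell session):
      ```scala
      val df1 = sqlContext.range(3).toDF("id")
      val df2 = sqlContext.range(3).toDF("id")

      val combined = df1.union(df2)            // same bag semantics as the deprecated unionAll
      val deduped  = df1.union(df2).distinct() // SQL's UNION (deduplicated) still needs distinct()
      ```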
      
      ## How was this patch tested?
      Should be covered by all existing tests.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #11946 from rxin/SPARK-14142.
    • [SPARK-13010][ML][SPARKR] Implement a simple wrapper of AFTSurvivalRegression in SparkR · 13cbb2de
      Yanbo Liang authored
      ## What changes were proposed in this pull request?
      This PR continues the work in #11447: we implement a wrapper of ```AFTSurvivalRegression```, named ```survreg```, in SparkR.
      
      ## How was this patch tested?
      Tested against the output of survreg in the R package survival.
      
      cc mengxr felixcheung
      
      Close #11447
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #11932 from yanboliang/spark-13010-new.
  2. Mar 24, 2016
    • [SPARK-13957][SQL] Support Group By Ordinal in SQL · 05f652d6
      gatorsmile authored
      #### What changes were proposed in this pull request?
      This PR is to support group by position in SQL. For example, when users input the following query
      ```SQL
      select c1 as a, c2, c3, sum(*) from tbl group by 1, 3, c4
      ```
      The ordinals are recognized as the positions in the select list. Thus, `Analyzer` converts it to
      ```SQL
      select c1, c2, c3, sum(*) from tbl group by c1, c3, c4
      ```
      
      This is controlled by the config option `spark.sql.groupByOrdinal`.
      - When true, the ordinal numbers in group by clauses are treated as the position in the select list.
      - When false, the ordinal numbers are ignored.
      - Only integer literals are converted (not foldable expressions); foldable expressions are ignored.
      - If a position specified in the group by clause corresponds to an aggregate function in the select list, an exception is raised.
      - A star is not allowed in the select list when ordinals are specified in group by.
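
      A hedged sketch of the toggle (assumes a spark-shell session and the table `tbl` from above):
      ```scala
      sqlContext.setConf("spark.sql.groupByOrdinal", "true")
      sqlContext.sql("select c1, c3, c4, sum(c2) from tbl group by 1, 2, 3") // ordinals resolve to c1, c3, c4
      sqlContext.setConf("spark.sql.groupByOrdinal", "false")
      // Now "group by 1" groups by the literal 1, i.e. the ordinal is ignored.
      ```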
      
      Note: This PR is taken from https://github.com/apache/spark/pull/10731. When merging this PR, please give credit to zhichao-li.

      Also cc all the people who were involved in the previous discussion:  rxin cloud-fan marmbrus yhuai hvanhovell adrian-wang chenghao-intel tejasapatil
      
      #### How was this patch tested?
      
      Added a few positive and negative test cases.
      
      Author: gatorsmile <gatorsmile@gmail.com>
      Author: xiaoli <lixiao1983@gmail.com>
      Author: Xiao Li <xiaoli@Xiaos-MacBook-Pro.local>
      
      Closes #11846 from gatorsmile/groupByOrdinal.
    • [SPARK-13949][ML][PYTHON] PySpark ml DecisionTreeClassifier, Regressor support export/import · 0874ff3a
      GayathriMurali authored
      ## What changes were proposed in this pull request?
      
      Added MLReadable and MLWritable to DecisionTreeClassifier and DecisionTreeRegressor. Added doctests.
      
      ## How was this patch tested?
      
      Python Unit tests. Tests added to check persistence in DecisionTreeClassifier and DecisionTreeRegressor.
      
      Author: GayathriMurali <gayathri.m.softie@gmail.com>
      
      Closes #11892 from GayathriMurali/SPARK-13949.
    • [SPARK-14107][PYSPARK][ML] Add seed as named argument to GBTs in pyspark · 58509771
      sethah authored
      ## What changes were proposed in this pull request?
      
      GBTs in pyspark previously had seed parameters, but they could not be passed as keyword arguments through the class constructor. This patch adds seed as a keyword argument and also sets a default value.
      
      ## How was this patch tested?
      
      Doc tests were updated to pass a random seed through the GBTClassifier and GBTRegressor constructors.
      
      Author: sethah <seth.hendrickson16@gmail.com>
      
      Closes #11944 from sethah/SPARK-14107.
    • [SPARK-13980] Incrementally serialize blocks while unrolling them in MemoryStore · fdd460f5
      Josh Rosen authored
      When a block is persisted in the MemoryStore at a serialized storage level, the current MemoryStore.putIterator() code will unroll the entire iterator as Java objects in memory, then will turn around and serialize an iterator obtained from the unrolled array. This is inefficient and doubles our peak memory requirements.
      
      Instead, I think that we should incrementally serialize blocks while unrolling them.
      
      A downside to incremental serialization is the fact that we will need to deserialize the partially-unrolled data in case there is not enough space to unroll the block and the block cannot be dropped to disk. However, I'm hoping that the memory efficiency improvements will outweigh any performance losses as a result of extra serialization in that hopefully-rare case.
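
      A hedged sketch of the code path this change targets (the data is illustrative; assumes a spark-shell session):
      ```scala
      import org.apache.spark.storage.StorageLevel

      val rdd = sc.parallelize(1 to 1000000)
      rdd.persist(StorageLevel.MEMORY_ONLY_SER)
      rdd.count() // unrolling now serializes records incrementally instead of first
                  // materializing the whole partition as Java objects
      ```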
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #11791 from JoshRosen/serialize-incrementally.
    • [SPARK-11871] Add save/load for MLPC · 2cf46d5a
      Xusen Yin authored
      ## What changes were proposed in this pull request?
      
      https://issues.apache.org/jira/browse/SPARK-11871
      
      Add save/load for MLPC
      
      ## How was this patch tested?
      
      Tested with a Scala unit test.
      
      Author: Xusen Yin <yinxusen@gmail.com>
      
      Closes #9854 from yinxusen/SPARK-11871.
    • [SPARK-13017][DOCS] Replace example code in mllib-feature-extraction.md using include_example · d283223a
      Xin Ren authored
      Replace example code in mllib-feature-extraction.md using include_example
      https://issues.apache.org/jira/browse/SPARK-13017
      
      The example code in the user guide is embedded in the markdown and hence it is not easy to test. It would be nice to automatically test them. This JIRA is to discuss options to automate example code testing and see what we can do in Spark 1.6.
      
      The goal is to move the actual example code to spark/examples and test its compilation in Jenkins builds. Then, in the markdown, we can reference part of the code to show in the user guide. This requires adding a Jekyll tag similar to https://github.com/jekyll/jekyll/blob/master/lib/jekyll/tags/include.rb, e.g., called include_example.
      `{% include_example scala/org/apache/spark/examples/mllib/TFIDFExample.scala %}`
      Jekyll will find `examples/src/main/scala/org/apache/spark/examples/mllib/TFIDFExample.scala`, pick the code blocks marked "example", and use them to replace the `{% highlight %}` code block in the markdown.
      
      See more sub-tasks in parent ticket: https://issues.apache.org/jira/browse/SPARK-11337
      
      Author: Xin Ren <iamshrek@126.com>
      
      Closes #11142 from keypointt/SPARK-13017.
    • Revert "[SPARK-2208] Fix for local metrics tests can fail on fast machines" · 342079dc
      Sean Owen authored
      Revert "[SPARK-2208] Fix for local metrics tests can fail on fast machines". The test appears to still be flaky after this change, or more flaky.
      
      This reverts commit 5519760e.
    • [SPARK-2208] Fix for local metrics tests can fail on fast machines · 5519760e
      Joan authored
      ## What changes were proposed in this pull request?
      
      A fix for local metrics tests that can fail on fast machines.
      This is probably what aarondav suggested in #3380?
      
      ## How was this patch tested?
      
      CI Tests
      
      Cheers
      
      Author: Joan <joan@goyeau.com>
      
      Closes #11747 from joan38/SPARK-2208-Local-metrics-tests.
    • [SPARK-13019][DOCS] fix for scala-2.10 build: Replace example code in mllib-statistics.md using include_example · dd9ca7b9
      Xin Ren authored
      
      ## What changes were proposed in this pull request?
      
      This PR for ticket SPARK-13019 is based on a previous PR (https://github.com/apache/spark/pull/11108).
      Since that PR broke the scala-2.10 build, more work was needed to fix the build errors.
      
      What is new in this PR is passing the arguments by keyword, e.g. for `fractions`:
      `val approxSample = data.sampleByKey(withReplacement = false, fractions = fractions)`
      `val exactSample = data.sampleByKeyExact(withReplacement = false, fractions = fractions)`

      I reopened the ticket on JIRA, but sorry, I don't know how to reopen a GitHub pull request, so I am just submitting a new one.
      ## How was this patch tested?
      
      Manual build testing on a local machine, with the build based on scala-2.10.
      
      Author: Xin Ren <iamshrek@126.com>
      
      Closes #11901 from keypointt/SPARK-13019.
    • [SPARK-14030][MLLIB] Add parameter check to MLLIB · 048a7594
      Ruifeng Zheng authored
      ## What changes were proposed in this pull request?
      
      Add parameter verification to MLlib, e.g.:
      - numCorrections > 0
      - tolerance >= 0
      - iters > 0
      - regParam >= 0
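
      A hedged sketch of the style of check added (exact messages and call sites may differ):
      ```scala
      def checkParams(numCorrections: Int, tolerance: Double, iters: Int, regParam: Double): Unit = {
        require(numCorrections > 0, s"numCorrections must be positive but got $numCorrections")
        require(tolerance >= 0, s"tolerance must be nonnegative but got $tolerance")
        require(iters > 0, s"number of iterations must be positive but got $iters")
        require(regParam >= 0, s"regParam must be nonnegative but got $regParam")
      }
      ```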
      
      ## How was this patch tested?
      
      manual tests
      
      Author: Ruifeng Zheng <ruifengz@foxmail.com>
      Author: Zheng RuiFeng <mllabs@datanode1.(none)>
      Author: mllabs <mllabs@datanode1.(none)>
      Author: Zheng RuiFeng <ruifengz@foxmail.com>
      
      Closes #11852 from zhengruifeng/lbfgs_check.
    • Fix typo in ALS.scala · 1803bf63
      Juarez Bochi authored
      ## What changes were proposed in this pull request?
      
      Just a typo
      
      ## How was this patch tested?
      
      N/A
      
      Author: Juarez Bochi <jbochi@gmail.com>
      
      Closes #11896 from jbochi/patch-1.
    • [SPARK-14110][CORE] PipedRDD to print the command ran on non zero exit · 01849da0
      Tejas Patil authored
      ## What changes were proposed in this pull request?
      
      In case of a failure in the subprocess launched by PipedRDD, the failure exception reads “Subprocess exited with status XXX”. Debugging this is not easy for users, especially if there are multiple pipe() operations in the Spark application.
      
      Changes done:
      - Changed the exception message when a non-zero exit code is seen.
      - If the reader and writer threads see an exception, simply log the command that was run. The current model is to propagate the exception "as is" so that upstream Spark logic takes the right action based on what the exception was (e.g. for a fetch failure it needs to retry, but for a fatal exception it will decide to fail the stage/job), so wrapping the exception in a generic exception will not work. Altering the exception message keeps that guarantee but is ugly (plus not all exceptions have a constructor taking a string message).
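
      A hedged sketch of the scenario (the command is illustrative; assumes a spark-shell session):
      ```scala
      val piped = sc.parallelize(Seq("1", "2", "3")).pipe("nonexistent-command")
      piped.collect() // fails; the exception now names the command that was run,
                      // not just "Subprocess exited with status XXX"
      ```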
      
      ## How was this patch tested?
      
      - Added a new test case
      - Ran all existing tests for PipedRDD
      
      Author: Tejas Patil <tejasp@fb.com>
      
      Closes #11927 from tejasapatil/SPARK-14110-piperdd-failure.
    • Andrew Or · c44d140c
  3. Mar 23, 2016
    • [SPARK-12183][ML][MLLIB] Remove mllib tree implementation, and wrap spark.ml one · cf823bea
      Joseph K. Bradley authored
      Primary change:
      * Removed spark.mllib.tree.DecisionTree implementation of tree and forest learning.
      * spark.mllib now calls the spark.ml implementation.
      * Moved unit tests (of tree learning internals) from spark.mllib to spark.ml as needed.
      
      ml.tree.DecisionTreeModel
      * Added toOld and made ```private[spark]```, implemented for Classifier and Regressor in subclasses.  These methods now use OldInformationGainStats.invalidInformationGainStats for LeafNodes in order to mimic the spark.mllib implementation.
      
      ml.tree.Node
      * Added ```private[tree] def deepCopy```, used by unit tests
      
      Copied developer comments from spark.mllib implementation to spark.ml one.
      
      Moving unit tests
      * Tree learning internals were tested by spark.mllib.tree.DecisionTreeSuite, or spark.mllib.tree.RandomForestSuite.
      * Those tests were all moved to spark.ml.tree.impl.RandomForestSuite.  The order in the file + the test names are the same, so you should be able to compare them by opening them in 2 windows side-by-side.
      * I made minimal changes to each test to allow it to run.  Each test makes the same checks as before, except for a few removed assertions which were checking irrelevant values.
      * No new unit tests were added.
      * mllib.tree.DecisionTreeSuite: I removed some checks of splits and bins which were not relevant to the unit tests they were in.  Those same split calculations were already being tested in other unit tests, for each dataset type.
      
      **Changes of behavior** (to be noted in SPARK-13448 once this PR is merged)
      * spark.ml.tree.impl.RandomForest: Rather than throwing an error when maxMemoryInMB is set to too small a value (to split any node), we now allow 1 node to be split, even if its memory requirements exceed maxMemoryInMB.  This involved removing the maxMemoryPerNode check in RandomForest.run, as well as modifying selectNodesToSplit().  Once this PR is merged, I will note the change of behavior on SPARK-13448.
      * spark.mllib.tree.DecisionTree: When a tree only has one node (root = leaf node), the "stats" field will now be empty, rather than being set to InformationGainStats.invalidInformationGainStats.  This does not remove information from the tree, and it will save a bit of storage.
      
      Author: Joseph K. Bradley <joseph@databricks.com>
      
      Closes #11855 from jkbradley/remove-mllib-tree-impl.
    • [SPARK-14085][SQL] Star Expansion for Hash · f42eaf42
      gatorsmile authored
      #### What changes were proposed in this pull request?
      
      This PR is to support star expansion in hash. For example,
      ```scala
      val structDf = testData2.select("a", "b").as("record")
      structDf.select(hash($"*"))
      ```
      
      In addition, it refactors the code for the rule `ResolveStar` and fixes a regression in star expansion in group by when using the SQL API. For example,
      ```SQL
      SELECT * FROM testData2 group by a, b
      ```
      
      cc cloud-fan Now, the code for star resolution is much cleaner. The coverage is better. Could you check if this refactoring is good? Thanks!
      
      #### How was this patch tested?
      Added a few test cases to cover it.
      
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #11904 from gatorsmile/starResolution.
    • [SPARK-14025][STREAMING][WEBUI] Fix streaming job descriptions on the event timeline · de4e48b6
      Liwei Lin authored
      ## What changes were proposed in this pull request?
      
      Removed the extra `<a href=...>...</a>` for each streaming job's description on the event timeline.
      
      ### [Before]
      ![before](https://cloud.githubusercontent.com/assets/15843379/13898653/0a6c1838-ee13-11e5-9761-14bb7b114c13.png)
      
      ### [After]
      ![after](https://cloud.githubusercontent.com/assets/15843379/13898650/012b8808-ee13-11e5-92a6-64aff0799c83.png)
      
      ## How was this patch tested?
      
      Test suites and manual checks (see the screenshots above).
      
      Author: Liwei Lin <proflin.me@gmail.com>
      Author: proflin <proflin.me@gmail.com>
      
      Closes #11845 from lw-lin/description-event-line.
    • [SPARK-13952][ML] Add random seed to GBT · 69bc2c17
      sethah authored
      ## What changes were proposed in this pull request?
      
      `GBTClassifier` and `GBTRegressor` should use a random seed for reproducible results. Because the current unit tests compare GBTs in ML and GBTs in MLlib for equality, I also added a random seed to the MLlib GBT algorithm. I made alternate constructors in `mllib.tree.GradientBoostedTrees` that accept a random seed, but left them private so as not to change the API unnecessarily.
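
      A hedged sketch on the ML side (settings are illustrative; assumes the new seed Param is exposed through the usual setter):
      ```scala
      import org.apache.spark.ml.classification.GBTClassifier

      // Two fits with the same seed on the same data should now produce the same model.
      val gbt = new GBTClassifier()
        .setMaxIter(10)
        .setSeed(42L)
      ```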
      
      ## How was this patch tested?
      
      Existing unit tests verify that functionality did not change. Other ML algorithms do not seem to have unit tests that directly test the functionality of random seeding, but reproducibility with seeding for GBTs is effectively verified in existing tests. I can add more tests if needed.
      
      Author: sethah <seth.hendrickson16@gmail.com>
      
      Closes #11903 from sethah/SPARK-13952.
    • [SPARK-14014][SQL] Replace existing catalog with SessionCatalog · 5dfc0197
      Andrew Or authored
      ## What changes were proposed in this pull request?
      
      `SessionCatalog`, introduced in #11750, is a catalog that keeps track of temporary functions and tables, and delegates metastore operations to `ExternalCatalog`. This functionality overlaps a lot with the existing `analysis.Catalog`.
      
      As of this commit, `SessionCatalog` and `ExternalCatalog` will no longer be dead code. There are still things that need to be done after this patch, namely:
      - SPARK-14013: Properly implement temporary functions in `SessionCatalog`
      - SPARK-13879: Decide which DDL/DML commands to support natively in Spark
      - SPARK-?????: Implement the ones we do want to support through `SessionCatalog`.
      - SPARK-?????: Merge SQL/HiveContext
      
      ## How was this patch tested?
      
      This is largely a refactoring task so there are no new tests introduced. The particularly relevant tests are `SessionCatalogSuite` and `ExternalCatalogSuite`.
      
      Author: Andrew Or <andrew@databricks.com>
      Author: Yin Huai <yhuai@databricks.com>
      
      Closes #11836 from andrewor14/use-session-catalog.
    • [SPARK-14078] Streaming Parquet Based FileSink · 6bc4be64
      Michael Armbrust authored
      This PR adds a new `Sink` implementation that writes out Parquet files.  In order to correctly handle partial failures while maintaining exactly-once semantics, the files for each batch are written out to a unique directory and then atomically appended to a metadata log.  When a Parquet-based `DataSource` is initialized for reading, we first check for this log directory and use it instead of file listing when present.
      
      Unit tests are added, as well as a stress test that checks the answer after non-deterministic injected failures.
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #11897 from marmbrus/fileSink.
    • [SPARK-13325][SQL] Create a 64-bit hashcode expression · 919bf321
      Herman van Hovell authored
      This PR introduces a 64-bit hashcode expression. Such an expression is especially useful for HyperLogLog++ and other probabilistic data structures.

      I have implemented xxHash64, a 64-bit hashing algorithm created by Yann Collet and Mathias Westerdahl. It is a high-speed (the C implementation runs at memory bandwidth) and high-quality hashcode. Like MurmurHash, it exploits both instruction-level parallelism (for speed) and multiplication and rotation techniques (for quality).
      
      The initial results are promising. I have added a code-generated test to the `HashBenchmark`, which yields the following results (running from SBT):
      
          Running benchmark: Hash For simple
            Running case: interpreted version
            Running case: codegen version
            Running case: codegen version 64-bit
      
          Intel(R) Core(TM) i7-4750HQ CPU  2.00GHz
          Hash For simple:                    Best/Avg Time(ms)    Rate(M/s)   Per Row(ns)   Relative
          -------------------------------------------------------------------------------------------
          interpreted version                      1011 / 1016        132.8           7.5       1.0X
          codegen version                          1864 / 1869         72.0          13.9       0.5X
          codegen version 64-bit                   1614 / 1644         83.2          12.0       0.6X
      
          Running benchmark: Hash For normal
            Running case: interpreted version
            Running case: codegen version
            Running case: codegen version 64-bit
      
          Intel(R) Core(TM) i7-4750HQ CPU  2.00GHz
          Hash For normal:                    Best/Avg Time(ms)    Rate(M/s)   Per Row(ns)   Relative
          -------------------------------------------------------------------------------------------
          interpreted version                      2467 / 2475          0.9        1176.1       1.0X
          codegen version                          2008 / 2115          1.0         957.5       1.2X
          codegen version 64-bit                    728 /  758          2.9         347.0       3.4X
      
          Running benchmark: Hash For array
            Running case: interpreted version
            Running case: codegen version
            Running case: codegen version 64-bit
      
          Intel(R) Core(TM) i7-4750HQ CPU  2.00GHz
          Hash For array:                     Best/Avg Time(ms)    Rate(M/s)   Per Row(ns)   Relative
          -------------------------------------------------------------------------------------------
          interpreted version                      1544 / 1707          0.1       11779.6       1.0X
          codegen version                          2728 / 2745          0.0       20815.5       0.6X
          codegen version 64-bit                   2508 / 2549          0.1       19132.8       0.6X
      
          Running benchmark: Hash For map
            Running case: interpreted version
            Running case: codegen version
            Running case: codegen version 64-bit
      
          Intel(R) Core(TM) i7-4750HQ CPU  2.00GHz
          Hash For map:                       Best/Avg Time(ms)    Rate(M/s)   Per Row(ns)   Relative
          -------------------------------------------------------------------------------------------
          interpreted version                      1819 / 1826          0.0      444014.3       1.0X
          codegen version                           183 /  194          0.0       44642.9       9.9X
          codegen version 64-bit                    173 /  174          0.0       42120.9      10.5X
      
      This shows that the algorithm is consistently faster than MurmurHash32 in all cases, and up to 3x (!) faster in the normal case.
      
      I have also added this to HyperLogLog++ and it cuts the processing time of the following code in half:
      
          val df = sqlContext.range(1<<25).agg(approxCountDistinct("id"))
          df.explain()
          val t = System.nanoTime()
          df.show()
          val ns = System.nanoTime() - t
      
          // Before
          ns: Long = 5821524302
      
          // After
          ns: Long = 2836418963
      
      cc cloud-fan (you have been working on hashcodes) / rxin
      
      Author: Herman van Hovell <hvanhovell@questtec.nl>
      
      Closes #11209 from hvanhovell/xxHash.
    • [SPARK-13809][SQL] State store for streaming aggregations · 8c826880
      Tathagata Das authored
      ## What changes were proposed in this pull request?
      
      In this PR, I am implementing a new abstraction for the management of streaming state data: the State Store. It is a key-value store for persisting the running aggregates of aggregate operations in streaming dataframes. The motivation and design are discussed here:
      
      https://docs.google.com/document/d/1-ncawFx8JS5Zyfq1HAEGBx56RDet9wfVp_hDM8ZL254/edit#
      
      ## How was this patch tested?
      - [x] Unit tests
      - [x] Cluster tests
      
      **Coverage from unit tests**
      
      <img width="952" alt="screen shot 2016-03-21 at 3 09 40 pm" src="https://cloud.githubusercontent.com/assets/663212/13935872/fdc8ba86-ef76-11e5-93e8-9fa310472c7b.png">
      
      ## TODO
      - [x] Fix updates() iterator to avoid duplicate updates for same key
      - [x] Use Coordinator in ContinuousQueryManager
      - [x] Plugging in hadoop conf and other confs
      - [x] Unit tests
        - [x] StateStore object lifecycle and methods
        - [x] StateStoreCoordinator communication and logic
        - [x] StateStoreRDD fault-tolerance
        - [x] StateStoreRDD preferred location using StateStoreCoordinator
      - [ ] Cluster tests
        - [ ] Whether preferred locations are set correctly
        - [ ] Whether recovery works correctly with distributed storage
        - [x] Basic performance tests
      - [x] Docs
      
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      
      Closes #11645 from tdas/state-store.
    • [SPARK-14015][SQL] Support TimestampType in vectorized parquet reader · 0a64294f
      Sameer Agarwal authored
      ## What changes were proposed in this pull request?
      
      This PR adds support for TimestampType in the vectorized parquet reader.
      
      ## How was this patch tested?
      
      1. `VectorizedColumnReader` initially had a gating condition on `primitiveType.getPrimitiveTypeName() == PrimitiveType.PrimitiveTypeName.INT96)` that made us fall back on parquet-mr for handling timestamps. This condition is now removed.
      2. The `ParquetHadoopFsRelationSuite` (that tests for all supported hive types -- including `TimestampType`) fails when the gating condition is removed (https://github.com/apache/spark/pull/11808) and should now pass with this change. Similarly, the `ParquetHiveCompatibilitySuite.SPARK-10177 timestamp` test that fails when the gating condition is removed, should now pass as well.
      3.  Added tests in `HadoopFsRelationTest` that test both the dictionary encoded and non-encoded versions across all supported datatypes.
      
      Author: Sameer Agarwal <sameer@databricks.com>
      
      Closes #11882 from sameeragarwal/timestamp-parquet.
    • [SPARK-14092] [SQL] move shouldStop() to end of while loop · 02d9c352
      Davies Liu authored
      ## What changes were proposed in this pull request?
      
      This PR rolls back some changes in #11274, which introduced a performance regression when doing a simple aggregation on a parquet scan with one integer column.

      I do not really understand how this change introduces such a huge impact; it may be related to how the JIT compiler inlines functions (profiling showed very different stats).
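
      A hedged sketch of the shape of the change (illustrative Scala, not the actual generated Java code):
      ```scala
      def consume(rows: Iterator[Int], shouldStop: () => Boolean, process: Int => Unit): Unit = {
        // Before: `while (!shouldStop() && rows.hasNext) { process(rows.next()) }`
        // After: consume first, then check at the end of the loop body.
        while (rows.hasNext) {
          process(rows.next())
          if (shouldStop()) return
        }
      }
      ```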
      
      ## How was this patch tested?
      
      Manually ran the parquet reader benchmark. Before this change:
      ```
      Intel(R) Core(TM) i7-4558U CPU  2.80GHz
      Int and String Scan:                Best/Avg Time(ms)    Rate(M/s)   Per Row(ns)   Relative
      -------------------------------------------------------------------------------------------
      SQL Parquet Vectorized                   2391 / 3107         43.9          22.8       1.0X
      ```
      After this change
      ```
      Java HotSpot(TM) 64-Bit Server VM 1.7.0_60-b19 on Mac OS X 10.9.5
      Intel(R) Core(TM) i7-4558U CPU  2.80GHz
      Int and String Scan:                Best/Avg Time(ms)    Rate(M/s)   Per Row(ns)   Relative
      -------------------------------------------------------------------------------------------
      SQL Parquet Vectorized                   2032 / 2626         51.6          19.4       1.0X
      ```
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #11912 from davies/fix_regression.
    • [SPARK-13068][PYSPARK][ML] Type conversion for Pyspark params · 30bdb5cb
      sethah authored
      ## What changes were proposed in this pull request?
      
      This patch adds type conversion functionality for parameters in Pyspark. A `typeConverter` field is added to the constructor of `Param` class. This argument is a function which converts values passed to this param to the appropriate type if possible. This is beneficial so that the params can fail at set time if they are given inappropriate values, but even more so because coherent error messages are now provided when Py4J cannot cast the python type to the appropriate Java type.
      
      This patch also adds a `TypeConverters` class with factory methods for common type conversions. Most of the changes involve adding these factory type converters to existing params. The previous solution to this issue, `expectedType`, is deprecated and can be removed in 2.1.0 as discussed on the Jira.
      
      ## How was this patch tested?
      
      Unit tests were added in python/pyspark/ml/tests.py to test parameter type conversion. These tests check that values that should be convertible are converted correctly, and that the appropriate errors are thrown when invalid values are provided.
      
      Author: sethah <seth.hendrickson16@gmail.com>
      
      Closes #11663 from sethah/SPARK-13068-tc.
    • [SPARK-14055] writeLocksByTask need to be update when removeBlock · 48ee16d8
      Ernest authored
      ## What changes were proposed in this pull request?
      
      https://issues.apache.org/jira/browse/SPARK-14055
      
      ## How was this patch tested?
      
      Manual tests by running LiveJournalPageRank on a large dataset (the dataset must be large enough to incur RDD partition eviction).
      
      Author: Ernest <earneyzxl@gmail.com>
      
      Closes #11875 from Earne/issue-14055.
    • [SPARK-14075] Refactor MemoryStore to be testable independent of BlockManager · 3de24ae2
      Josh Rosen authored
      This patch refactors the `MemoryStore` so that it can be tested without needing to construct / mock an entire `BlockManager`.
      
      - The block manager's serialization- and compression-related methods have been moved from `BlockManager` to `SerializerManager`.
      - `BlockInfoManager `is now passed directly to classes that need it, rather than being passed via the `BlockManager`.
      - The `MemoryStore` now calls `dropFromMemory` via a new `BlockEvictionHandler` interface rather than directly calling the `BlockManager`. This change helps to enforce a narrow interface between the `MemoryStore` and `BlockManager` functionality and makes this interface easier to mock in tests.
      - Several of the block unrolling tests have been moved from `BlockManagerSuite` into a new `MemoryStoreSuite`.
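
      A hedged sketch of the narrow-interface idea (not Spark's exact trait or signatures):
      ```scala
      type BlockId = String

      trait BlockEvictionHandler {
        // MemoryStore calls back through this instead of depending on all of BlockManager.
        def dropFromMemory(blockId: BlockId): Unit
      }

      // In tests, a stub stands in for the real BlockManager.
      class RecordingHandler extends BlockEvictionHandler {
        val dropped = scala.collection.mutable.ArrayBuffer.empty[BlockId]
        def dropFromMemory(blockId: BlockId): Unit = dropped += blockId
      }
      ```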
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #11899 from JoshRosen/reduce-memorystore-blockmanager-coupling.
    • [SPARK-13549][SQL] Refactor the Optimizer Rule CollapseProject · 6ce008ba
      gatorsmile authored
      #### What changes were proposed in this pull request?
      
      The PR https://github.com/apache/spark/pull/10541 changed the rule `CollapseProject` to enable collapsing a `Project` into an `Aggregate`. It left a to-do item to remove the duplicate code; this PR finishes that to-do item. A test case covering this change was also added.
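
      A hedged illustration of a plan the rule now collapses (the frame is hypothetical; assumes a spark-shell session):
      ```scala
      import org.apache.spark.sql.functions.sum

      val df = sqlContext.range(10).toDF("key")
      val plan = df.groupBy($"key")
        .agg(sum($"key").as("total"))
        .select($"key", $"total" + 1) // Project over Aggregate: now merged into one Aggregate
      plan.explain(true)
      ```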
      
      #### How was this patch tested?
      
      Added a new test case.
      
      liancheng Could you check if the code refactoring is fine? Thanks!
      
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #11427 from gatorsmile/collapseProjectRefactor.
    • [SPARK-13817][SQL][MINOR] Renames Dataset.newDataFrame to Dataset.ofRows · cde086cb
      Cheng Lian authored
      ## What changes were proposed in this pull request?
      
      This PR does the renaming as suggested by marmbrus in [this comment][1].
      
      ## How was this patch tested?
      
      Existing tests.
      
      [1]: https://github.com/apache/spark/commit/6d37e1eb90054cdb6323b75fb202f78ece604b15#commitcomment-16654694
      
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #11889 from liancheng/spark-13817-follow-up.
    • [SPARK-14074][SPARKR] Specify commit sha1 ID when using install_github to install intr package. · 7d117501
      Sun Rui authored
      ## What changes were proposed in this pull request?
      
      In dev/lint-r.R, `install_github` makes our builds depend on an unstable source. This may cause unexpected test failures and then break the build. This PR adds a specific commit sha1 ID to `install_github` to get a stable source.
      
      ## How was this patch tested?
      dev/lint-r
      
      Author: Sun Rui <rui.sun@intel.com>
      
      Closes #11913 from sun-rui/SPARK-14074.
    • [SPARK-14035][MLLIB] Make error message more verbose for mllib NaiveBayesSuite · 4d955cd6
      Joseph K. Bradley authored
      ## What changes were proposed in this pull request?
      
      Print more info about failed NaiveBayesSuite tests which have exhibited flakiness.
      
      ## How was this patch tested?
      
      Ran locally with incorrect check to cause failure.
      
      Author: Joseph K. Bradley <joseph@databricks.com>
      
      Closes #11858 from jkbradley/naive-bayes-bug-log.
    • [HOTFIX][SQL] Don't stop ContinuousQuery in quietly · abacf5f2
      Shixiong Zhu authored
      ## What changes were proposed in this pull request?
      
      Try to fix a flaky hang
      
      ## How was this patch tested?
      
      Existing Jenkins test
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #11909 from zsxwing/hotfix2.
    • [SPARK-14088][SQL] Some Dataset API touch-up · 926a93e5
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      1. Deprecated unionAll. It is pretty confusing to have both "union" and "unionAll" when the two do the same thing in Spark but differ in SQL.
      2. Renamed reduce in KeyValueGroupedDataset to reduceGroups so it is more consistent with the rest of the functions in KeyValueGroupedDataset. It also makes it more obvious what "reduce" and "reduceGroups" mean; previously it was confusing because "reduce" could be reducing a Dataset or just reducing groups.
      3. Added a "name" function, which is more natural for naming columns than "as" for non-SQL users.
      4. Removed the "subtract" function, since it is just an alias for "except".
      
      ## How was this patch tested?
      All changes should be covered by existing tests. Also added a couple of test cases to cover "name".
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #11908 from rxin/SPARK-14088.