  1. May 18, 2016
    • Davies Liu's avatar
      [SPARK-15357] Cooperative spilling should check consumer memory mode · 8fb1d1c7
      Davies Liu authored
      ## What changes were proposed in this pull request?
      
      Since we support forced spilling for Spillable, which only works in OnHeap mode, unlike other SQL operators (which can be OnHeap or OffHeap), we should check the memory mode of a consumer before triggering forced spilling on it.
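
      For illustration, a minimal Scala sketch of the idea under an assumed consumer interface (the actual logic lives in Spark's task memory management code): only consumers whose memory mode matches the requested mode are asked to spill.

      ```scala
      import org.apache.spark.memory.MemoryMode

      // Assumed minimal consumer interface, for illustration only.
      trait Consumer {
        def mode: MemoryMode          // ON_HEAP or OFF_HEAP
        def spill(needed: Long): Long // returns the number of bytes actually freed
      }

      // Only consumers operating in the requested memory mode can help satisfy the request;
      // Spillable collections are on-heap only, so they must be skipped for off-heap requests.
      def forceSpill(consumers: Seq[Consumer], requestedMode: MemoryMode, needed: Long): Long = {
        var freed = 0L
        for (c <- consumers if freed < needed && c.mode == requestedMode) {
          freed += c.spill(needed - freed)
        }
        freed
      }
      ```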
      
      ## How was this patch tested?
      
      Add new test.
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #13151 from davies/fix_mode.
      8fb1d1c7
    • Tejas Patil's avatar
      [SPARK-15263][CORE] Make shuffle service dir cleanup faster by using `rm -rf` · c1fd9cac
      Tejas Patil authored
      ## What changes were proposed in this pull request?
      
      Jira: https://issues.apache.org/jira/browse/SPARK-15263
      
      The current logic for directory cleanup is slow because it does directory listing, recurses over child directories, checks for symbolic links, deletes leaf files and finally deletes the dirs when they are empty. There is back-and-forth switching from kernel space to user space while doing this. Since most of the deployment backends would be Unix systems, we could essentially just do `rm -rf` so that entire deletion logic runs in kernel space.
      
      The current Java-based impl in Spark seems to be similar to what standard libraries like Guava and Commons IO do (e.g. http://svn.apache.org/viewvc/commons/proper/io/trunk/src/main/java/org/apache/commons/io/FileUtils.java?view=markup#l1540). However, Guava removed this method in favour of shelling out to an operating system command (like in this PR). See the `Deprecated` note in older Guava javadocs for details: http://google.github.io/guava/releases/10.0.1/api/docs/com/google/common/io/Files.html#deleteRecursively(java.io.File)
      
      Ideally, Java would provide such APIs so that users don't have to resort to platform-specific code. Also, this is not just about speed: handling race conditions during FS deletions is tricky. There is a related Java bug in a similar context: http://bugs.java.com/bugdatabase/view_bug.do?bug_id=7148952
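
      As an illustration, here is a minimal sketch of the approach: delegating the recursive delete to the OS. The helper name and error handling are illustrative, not the exact code added to the shuffle service.

      ```scala
      import java.io.File
      import scala.sys.process._

      // Illustrative helper: delete a directory tree by shelling out to `rm -rf`,
      // so the whole traversal and deletion happen in kernel space.
      // A real caller would fall back to a JVM-side recursive delete on non-Unix platforms.
      def deleteWithRmRf(dir: File): Unit = {
        val exitCode = Seq("rm", "-rf", dir.getAbsolutePath).!   // -r: recurse, -f: ignore missing files
        if (exitCode != 0) {
          throw new java.io.IOException(s"Failed to delete ${dir.getAbsolutePath}, rm exited with $exitCode")
        }
      }
      ```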
      
      ## How was this patch tested?
      
      I am relying on existing test cases to exercise the method. Suggestions for additional testing are welcome.
      
      ## Performance gains
      
      *Input setup*: Created a nested directory structure of depth 3, with each entry having 50 sub-dirs. The input being cleaned up had ~125k dirs in total.
      
      Ran both approaches (in isolation) 6 times to get average numbers:
      
      Native Java cleanup  | `rm -rf` as a separate process
      ------------ | -------------
      10.04 sec | 4.11 sec
      
      This change made deletion 2.4 times faster for the given test input.
      
      Author: Tejas Patil <tejasp@fb.com>
      
      Closes #13042 from tejasapatil/delete_recursive.
      c1fd9cac
    • DLucky's avatar
      [SPARK-15346][MLLIB] Reduce duplicate computation in picking initial points · 420b7006
      DLucky authored
      mateiz srowen
      
      I state that the contribution is my original work and that I license the work to the project under the project's open source license
      
      There were some formatting problems with my last PR; with HyukjinKwon's help I read the guidance, re-checked my code and PR, ran the tests, and am re-submitting the PR here.
      
      The related JIRA issue is marked as resolved, but I think this change may still relate to it.
      
      ## Proposed Change
      
      After picking each new initial center, it is unnecessary to recompute the distances between all the points and the previously chosen centers.
      Instead, this change keeps, for every point, the distance to its closest center, compares it with the distance to the newly added center, and updates it accordingly.
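
      For illustration, a minimal Scala sketch of the bookkeeping (names are hypothetical, not the MLlib implementation):

      ```scala
      // Compute the squared Euclidean distance between two dense points.
      def squaredDistance(a: Array[Double], b: Array[Double]): Double = {
        var s = 0.0
        var i = 0
        while (i < a.length) { val d = a(i) - b(i); s += d * d; i += 1 }
        s
      }

      // costs(i) holds the distance of point i to its closest center chosen so far.
      // After adding a new center, each point needs only ONE extra distance computation,
      // instead of recomputing distances to every previously chosen center.
      def updateCosts(points: Array[Array[Double]],
                      costs: Array[Double],
                      newCenter: Array[Double]): Unit = {
        var i = 0
        while (i < points.length) {
          costs(i) = math.min(costs(i), squaredDistance(points(i), newCenter))
          i += 1
        }
      }
      ```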
      
      ## Test result
      
      An easy way to test this is described in https://issues.apache.org/jira/browse/SPARK-6706.
      
      I tested the KMeans++ method on a small dataset with 16k points, and the whole KMeans|| pipeline on a larger one with 240k points.
      The data has 4096 features and K was tuned from 100 to 500.
      The tests ran on my 4-machine cluster; I also tested a 3M-point dataset on a larger cluster with 25 machines and got similar results, for which I do not show detailed curves. The results of the first two experiments are shown below.
      
      ### Local KMeans++ test:
      
      Dataset:4m_ini_center
      Data_size:16234
      Dimension:4096
      
      Lloyd's Iteration = 10
      The y-axis is time in seconds; the x-axis is the value of K.
      
      ![image](https://cloud.githubusercontent.com/assets/10915169/15175831/d0c92b82-179a-11e6-8b68-4e165fc2fdff.png)
      
      ![local_total](https://cloud.githubusercontent.com/assets/10915169/15175957/6b21c3b0-179b-11e6-9741-66dfe4e23eb7.jpg)
      
      ### On a larger dataset
      
      An improvement shown in the graph but not committed in this file: in this experiment I also improved the distance calculation for normalized data (the distance is converted to cosine distance). When the data is normalized into (0,1), one optimization in the original version of util.MLUtils.fastSquaredDistance has no effect (the precision bound 2.0 * EPSILON * sumSquaredNorm / (normDiff * normDiff + EPSILON) is never less than the precision in this case). Therefore I designed an early-termination method for comparing two distances (used by findClosest). That improvement is not included in this file, so please refer only to the curves without "normalize" when comparing results.
      
      Dataset:4k24
      Data_size:243960
      Dimension:4096
      
      Normalize | Enlarge | Initialize | Lloyd's Iteration
      ------------ | ------------- | ------------- | -------------
      NO | 1 | 3 | 5
      YES | 10000 | 3 | 5
      
      Note: the normalized data is enlarged to ensure precision.
      
      The cost time: x-axis is the value of K, y-axis is time in seconds.
      ![4k24_total](https://cloud.githubusercontent.com/assets/10915169/15176635/9a54c0bc-179e-11e6-81c5-238e0c54bce2.jpg)
      
      SE for unnormalized data between the two versions, to verify correctness:
      ![4k24_unnorm_se](https://cloud.githubusercontent.com/assets/10915169/15176661/b85dabc8-179e-11e6-9269-fe7d2101dd48.jpg)
      
      Here is the SE for normalized data, just for reference; it is also correct.
      ![4k24_norm_se](https://cloud.githubusercontent.com/assets/10915169/15176742/1fbde940-179f-11e6-8290-d24b0dd4a4f7.jpg)
      
      Author: DLucky <mouendless@gmail.com>
      
      Closes #13133 from mouendless/patch-2.
      420b7006
    • Cheng Lian's avatar
      [SPARK-15334][SQL][HOTFIX] Fixes compilation error for Scala 2.10 · c4a45fd8
      Cheng Lian authored
      ## What changes were proposed in this pull request?
      
      This PR fixes a Scala 2.10 compilation failure introduced in PR #13127.
      
      ## How was this patch tested?
      
      Jenkins build.
      
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #13166 from liancheng/hotfix-for-scala-2.10.
      c4a45fd8
    • Dongjoon Hyun's avatar
      [MINOR][SQL] Remove unused pattern matching variables in Optimizers. · d2f81df1
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      This PR removes unused pattern matching variable in Optimizers in order to improve readability.
      
      ## How was this patch tested?
      
      Pass the existing Jenkins tests.
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #13145 from dongjoon-hyun/remove_unused_pattern_matching_variables.
      d2f81df1
    • WeichenXu's avatar
      [SPARK-15322][MLLIB][CORE][SQL] update deprecate accumulator usage into... · 2f9047b5
      WeichenXu authored
      [SPARK-15322][MLLIB][CORE][SQL] update deprecate accumulator usage into accumulatorV2 in spark project
      
      ## What changes were proposed in this pull request?
      
      I used IntelliJ IDEA to search for usages of the deprecated SparkContext.accumulator across the whole Spark project and updated the code (except test code that exercises the accumulator method itself).
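
      For illustration, a small example of the migration (the accumulator name is hypothetical); `longAccumulator` is the AccumulatorV2-backed replacement for the deprecated `accumulator` call:

      ```scala
      import org.apache.spark.sql.SparkSession

      val spark = SparkSession.builder().master("local[*]").appName("accumulator-v2-demo").getOrCreate()
      val sc = spark.sparkContext

      // Deprecated in Spark 2.0:
      // val acc = sc.accumulator(0L)

      // AccumulatorV2-based replacement:
      val acc = sc.longAccumulator("record count")
      sc.parallelize(1 to 100).foreach(_ => acc.add(1L))
      println(acc.value)  // 100
      ```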
      
      ## How was this patch tested?
      
      Existing unit tests
      
      Author: WeichenXu <WeichenXu123@outlook.com>
      
      Closes #13112 from WeichenXu123/update_accuV2_in_mllib.
      2f9047b5
    • Davies Liu's avatar
      [SPARK-15307][SQL] speed up listing files for data source · 33814f88
      Davies Liu authored
      ## What changes were proposed in this pull request?
      
      Currently, listing files is very slow when there are thousands of files, especially on the local file system, because:
      1) FileStatus.getPermission() is very slow on the local file system, since it launches a subprocess and parses the stdout.
      2) Creating a JobConf is very expensive (ClassUtil.findContainingJar() is slow).
      
      This PR improves on these by:
      1) Using another constructor of LocatedFileStatus to avoid calling FileStatus.getPermission; the permissions are not used by data sources (a rough sketch follows below).
      2) Creating a JobConf only once within one task.
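
      For illustration, a simplified Scala sketch of point 1), assuming the standard Hadoop `LocatedFileStatus` constructor that takes the individual fields:

      ```scala
      import org.apache.hadoop.fs.{FileStatus, FileSystem, LocatedFileStatus}

      // Illustrative sketch: build a LocatedFileStatus without calling getPermission(),
      // which shells out on the local file system and dominates listing time.
      def toLocatedStatus(fs: FileSystem, f: FileStatus): LocatedFileStatus = {
        val locations = fs.getFileBlockLocations(f, 0, f.getLen)
        // Pass nulls for permission/owner/group/symlink; data sources never read them.
        new LocatedFileStatus(f.getLen, f.isDirectory, f.getReplication, f.getBlockSize,
          f.getModificationTime, 0, null, null, null, null, f.getPath, locations)
      }
      ```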
      
      ## How was this patch tested?
      
      Manual tests on a partitioned table with 1828 partitions: the time to load the table decreased from 22 seconds to 1.6 seconds (most of the time is now spent merging schemas).
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #13094 from davies/listing.
      33814f88
    • Sean Zhong's avatar
      [SPARK-15334][SQL] HiveClient facade not compatible with Hive 0.12 · 6e02aec4
      Sean Zhong authored
      ## What changes were proposed in this pull request?
      
      HiveClient facade is not compatible with Hive 0.12.
      
      This PR fixes the following compatibility issues:
      1. `org.apache.spark.sql.hive.client.HiveClientImpl` uses `AddPartitionDesc(db, table, ignoreIfExists)` to create partitions; however, Hive 0.12 doesn't have this constructor for `AddPartitionDesc`.
      2. `HiveClientImpl` uses `PartitionDropOptions` when dropping a partition; however, class `PartitionDropOptions` doesn't exist in Hive 0.12.
      3. Hive 0.12 doesn't support adding permanent functions. It is not valid to call `org.apache.hadoop.hive.ql.metadata.Hive.createFunction` and `org.apache.hadoop.hive.ql.metadata.Hive.alterFunction`.
      4. `org.apache.spark.sql.hive.client.VersionsSuite` doesn't have enough test coverage for the different Hive versions 0.12, 0.13, 0.14, 1.0.0, 1.1.0, 1.2.0.
      
      ## How was this patch tested?
      
      Unit test.
      
      Author: Sean Zhong <seanzhong@databricks.com>
      
      Closes #13127 from clockfly/versionSuite.
      6e02aec4
    • Takuya Kuwahara's avatar
      [SPARK-14978][PYSPARK] PySpark TrainValidationSplitModel should support validationMetrics · 411c04ad
      Takuya Kuwahara authored
      ## What changes were proposed in this pull request?
      
      This pull request adds support for validationMetrics on TrainValidationSplitModel in Python, along with a test for it.
      
      ## How was this patch tested?
      
      test in `python/pyspark/ml/tests.py`
      
      Author: Takuya Kuwahara <taakuu19@gmail.com>
      
      Closes #12767 from taku-k/spark-14978.
      411c04ad
  2. May 17, 2016
    • Yin Huai's avatar
      [SPARK-14346] Fix scala-2.10 build · 2a5db9c1
      Yin Huai authored
      ## What changes were proposed in this pull request?
      The Scala 2.10 build was broken by #13079. I am reverting the change to that line.
      
      Author: Yin Huai <yhuai@databricks.com>
      
      Closes #13157 from yhuai/SPARK-14346-fix-scala2.10.
      2a5db9c1
    • Sean Zhong's avatar
      [SPARK-15171][SQL] Remove the references to deprecated method dataset.registerTempTable · 25b315e6
      Sean Zhong authored
      ## What changes were proposed in this pull request?
      
      Update the unit test code, examples, and documents to remove calls to deprecated method `dataset.registerTempTable`.
      
      ## How was this patch tested?
      
      This PR only changes the unit test code, examples, and comments. It should be safe.
      This is a follow up of PR https://github.com/apache/spark/pull/12945 which was merged.
      
      Author: Sean Zhong <seanzhong@databricks.com>
      
      Closes #13098 from clockfly/spark-15171-remove-deprecation.
      25b315e6
    • Cheng Lian's avatar
      [SPARK-14346][SQL] Native SHOW CREATE TABLE for Hive tables/views · b674e67c
      Cheng Lian authored
      ## What changes were proposed in this pull request?
      
      This is a follow-up of #12781. It adds native `SHOW CREATE TABLE` support for Hive tables and views. A new field `hasUnsupportedFeatures` is added to `CatalogTable` to indicate whether all table metadata retrieved from the concrete underlying external catalog (i.e. the Hive metastore in this case) can be mapped to fields in `CatalogTable`. This flag is useful when the target Hive table contains structures that can't be handled by Spark SQL, e.g., skewed columns, storage handlers, etc.
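
      A hypothetical usage example from the shell (the table name is illustrative; `spark` is the shell's SparkSession):

      ```scala
      // Print the DDL that Spark can now generate natively for a Hive table or view.
      spark.sql("SHOW CREATE TABLE default.page_views").show(truncate = false)
      ```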
      
      ## How was this patch tested?
      
      New test cases are added in `ShowCreateTableSuite` to do round-trip tests.
      
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #13079 from liancheng/spark-14346-show-create-table-for-hive-tables.
      b674e67c
    • Shixiong Zhu's avatar
      [SPARK-11735][CORE][SQL] Add a check in the constructor of... · 8e8bc9f9
      Shixiong Zhu authored
      [SPARK-11735][CORE][SQL] Add a check in the constructor of SQLContext/SparkSession to make sure its SparkContext is not stopped
      
      ## What changes were proposed in this pull request?
      
      Add a check in the constructor of SQLContext/SparkSession to make sure its SparkContext is not stopped.
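
      For illustration, a minimal sketch of the guard, assuming a public `isStopped` accessor on SparkContext (the actual change uses SparkContext's internal assertion):

      ```scala
      import org.apache.spark.SparkContext

      // Fail fast with a clear message instead of letting a stopped context surface
      // as an obscure error later on.
      def requireActive(sc: SparkContext): Unit = {
        require(!sc.isStopped, "Cannot create a SQLContext/SparkSession around a stopped SparkContext.")
      }
      ```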
      
      ## How was this patch tested?
      
      Jenkins unit tests.
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #13154 from zsxwing/check-spark-context-stop.
      8e8bc9f9
    • Dongjoon Hyun's avatar
      [SPARK-15244] [PYTHON] Type of column name created with createDataFrame is not consistent. · 0f576a57
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      **createDataFrame** returns inconsistent types for column names.
      ```python
      >>> from pyspark.sql.types import StructType, StructField, StringType
      >>> schema = StructType([StructField(u"col", StringType())])
      >>> df1 = spark.createDataFrame([("a",)], schema)
      >>> df1.columns # "col" is str
      ['col']
      >>> df2 = spark.createDataFrame([("a",)], [u"col"])
      >>> df2.columns # "col" is unicode
      [u'col']
      ```
      
      The reason is that only **StructField** has the following code.
      ```
      if not isinstance(name, str):
          name = name.encode('utf-8')
      ```
      This PR adds the same logic into **createDataFrame** for consistency.
      ```
      if isinstance(schema, list):
          schema = [x.encode('utf-8') if not isinstance(x, str) else x for x in schema]
      ```
      
      ## How was this patch tested?
      
      Pass the Jenkins test (with new python doctest)
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #13097 from dongjoon-hyun/SPARK-15244.
      0f576a57
    • DB Tsai's avatar
      [SPARK-14615][ML] Use the new ML Vector and Matrix in the ML pipeline based algorithms · e2efe052
      DB Tsai authored
      ## What changes were proposed in this pull request?
      
      Once SPARK-14487 and SPARK-14549 are merged, we will migrate to use the new vector and matrix types in the new ML pipeline based APIs.
      
      ## How was this patch tested?
      
      Unit tests
      
      Author: DB Tsai <dbt@netflix.com>
      Author: Liang-Chi Hsieh <simonh@tw.ibm.com>
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #12627 from dbtsai/SPARK-14615-NewML.
      e2efe052
    • Dongjoon Hyun's avatar
      [MINOR][DOCS] Replace remaining 'sqlContext' in ScalaDoc/JavaDoc. · 9f176dd3
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      According to the recent change, this PR replaces all the remaining `sqlContext` usage with `spark` in ScalaDoc/JavaDoc (.scala/.java files) except `SQLContext.scala`, `SparkPlan.scala', and `DatasetHolder.scala`.
      
      ## How was this patch tested?
      
      Manual.
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #13125 from dongjoon-hyun/minor_doc_sparksession.
      9f176dd3
    • Yuhao Yang's avatar
      [SPARK-15182][ML] Copy MLlib doc to ML: ml.feature.tf, idf · 3308a862
      Yuhao Yang authored
      ## What changes were proposed in this pull request?
      
      We should now begin copying algorithm details from the spark.mllib guide to spark.ml as needed, rather than just linking back to the corresponding algorithms in the spark.mllib user guide.
      
      ## How was this patch tested?
      
      manual review for doc.
      
      Author: Yuhao Yang <hhbyyh@gmail.com>
      Author: Yuhao Yang <yuhao.yang@intel.com>
      
      Closes #12957 from hhbyyh/tfidfdoc.
      3308a862
    • hyukjinkwon's avatar
      [SPARK-10216][SQL] Avoid creating empty files during overwriting with group by query · 8d05a7a9
      hyukjinkwon authored
      ## What changes were proposed in this pull request?
      
      Currently, `INSERT INTO` with a `GROUP BY` query tries to make at least 200 files (the default value of `spark.sql.shuffle.partitions`), which results in lots of empty files.

      This PR makes it avoid creating empty files during overwriting into Hive tables and internal data sources with a group-by query.

      This checks whether the given partition has data in it or not, and creates/writes the file only when it actually has data (see the sketch below).
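
      A minimal illustrative sketch of the per-partition guard (names are hypothetical, not the actual writer code):

      ```scala
      // Create the output file lazily, only once the partition iterator is known to be
      // non-empty; empty shuffle partitions left over from GROUP BY produce no file at all.
      def writePartition[T](rows: Iterator[T], open: () => Unit, write: T => Unit, close: () => Unit): Unit = {
        if (rows.hasNext) {
          open()
          try rows.foreach(write)
          finally close()
        }
      }
      ```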
      
      ## How was this patch tested?
      
      Unittests in `InsertIntoHiveTableSuite` and `HadoopFsRelationTest`.
      
      Closes #8411
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      Author: Keuntae Park <sirpkt@apache.org>
      
      Closes #12855 from HyukjinKwon/pr/8411.
      8d05a7a9
    • Wenchen Fan's avatar
      [SPARK-14346][SQL][FOLLOW-UP] add tests for CREAT TABLE USING with partition and bucket · 20a89478
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      https://github.com/apache/spark/pull/12781 introduced PARTITIONED BY, CLUSTERED BY, and SORTED BY keywords to CREATE TABLE USING. This PR adds tests to make sure those keywords are handled correctly.
      
      This PR also fixes a mistake: we should create a non-Hive-compatible table if partition or bucket info exists.
      
      ## How was this patch tested?
      
      N/A
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #13144 from cloud-fan/add-test.
      20a89478
    • Kousuke Saruta's avatar
      [SPARK-15165] [SQL] Codegen can break because toCommentSafeString is not actually safe · c0c3ec35
      Kousuke Saruta authored
      ## What changes were proposed in this pull request?
      
      The toCommentSafeString method replaces "\u" with "\\\\u" to avoid breaking codegen.
      But if an even number of "\" is put before "u", like "\\\\u", in a string literal in the query, codegen can still break.
      
      The following code causes a compilation error.
      
      ```
      val df = Seq(...).toDF
      df.select("'\\\\\\\\u002A/'").show
      ```
      
      The compilation error occurs because "\\\\\\\\\\\\\\\\u002A/" is translated into "*/" (the end of a comment).
      
      Because of this unsafe behavior, arbitrary code can be injected as follows.
      
      ```
      val df = Seq(...).toDF
      // Inject "System.exit(1)"
      df.select("'\\\\\\\\u002A/{System.exit(1);}/*'").show
      ```
      
      ## How was this patch tested?
      
      Added new test cases.
      
      Author: Kousuke Saruta <sarutak@oss.nttdata.co.jp>
      Author: sarutak <sarutak@oss.nttdata.co.jp>
      
      Closes #12939 from sarutak/SPARK-15165.
      c0c3ec35
    • wm624@hotmail.com's avatar
      [SPARK-15318][ML][EXAMPLE] spark.ml Collaborative Filtering example does not work in spark-shell · bebe5f98
      wm624@hotmail.com authored
      ## What changes were proposed in this pull request?
      
      
      Copying and pasting the example in ml-collaborative-filtering.html into spark-shell produces the following errors:

      ```
      scala> case class Rating(userId: Int, movieId: Int, rating: Float, timestamp: Long)
      defined class Rating

      scala> object Rating {
           |   def parseRating(str: String): Rating = {
           |     val fields = str.split("::")
           |     assert(fields.size == 4)
           |     Rating(fields(0).toInt, fields(1).toInt, fields(2).toFloat, fields(3).toLong)
           |   }
           | }
      <console>:29: error: Rating.type does not take parameters
             Rating(fields(0).toInt, fields(1).toInt, fields(2).toFloat, fields(3).toLong)
             ^
      ```

      The standard Scala REPL shows the same error.

      The Scala/spark-shell REPL has some quirks (e.g. packages are also not well supported).

      The cause of the error is that the scala/spark-shell REPL discards the previous definition when we define an object with the same name as the class. Solution: rename the object Rating, as shown below.
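
      A sketch of the workaround (the object name here is illustrative; the updated example may use a different one):

      ```scala
      // Works when pasted into spark-shell: the parser object no longer shares the case
      // class's name, so the REPL keeps the Rating definition.
      case class Rating(userId: Int, movieId: Int, rating: Float, timestamp: Long)

      object RatingParser {
        def parseRating(str: String): Rating = {
          val fields = str.split("::")
          assert(fields.size == 4)
          Rating(fields(0).toInt, fields(1).toInt, fields(2).toFloat, fields(3).toLong)
        }
      }
      ```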
      
      ## How was this patch tested?
      
      
      Manually tested it: 1) ./bin/run-example ALSExample; 2) copy & paste the example from the generated document into spark-shell. It works fine.
      
      Author: wm624@hotmail.com <wm624@hotmail.com>
      
      Closes #13110 from wangmiao1981/repl.
      bebe5f98
    • Sean Owen's avatar
      [SPARK-15333][DOCS] Reorganize building-spark.md; rationalize vs wiki · 932d8002
      Sean Owen authored
      ## What changes were proposed in this pull request?
      
      See JIRA for the motivation. The changes are almost entirely movement of text and edits to sections. Minor changes to text include:
      
      - Copying in / merging text from the "Useful Developer Tools" wiki, in areas of
        - Docker
        - R
        - Running one test
      - standardizing on ./build/mvn not mvn, and likewise for ./build/sbt
      - correcting some typos
      - standardizing code block formatting
      
      No text has been removed from this doc; text has been imported from the https://cwiki.apache.org/confluence/display/SPARK/Useful+Developer+Tools wiki
      
      ## How was this patch tested?
      
      Jekyll doc build and inspection of resulting HTML in browser.
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #13124 from srowen/SPARK-15333.
      932d8002
    • wm624@hotmail.com's avatar
      [SPARK-14434][ML] User guide doc and examples for GaussianMixture in spark.ml · 4134ff0c
      wm624@hotmail.com authored
      ## What changes were proposed in this pull request?
      
      
      Add guide doc and examples for GaussianMixture in Spark.ml in Java, Scala and Python.
      
      ## How was this patch tested?
      
      
      Manually compiled and tested all examples.
      
      Author: wm624@hotmail.com <wm624@hotmail.com>
      
      Closes #12788 from wangmiao1981/example.
      4134ff0c
    • Wenchen Fan's avatar
      [SPARK-15351][SQL] RowEncoder should support array as the external type for ArrayType · c36ca651
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      This PR improves `RowEncoder` and `MapObjects`, to support array as the external type for `ArrayType`. The idea is straightforward, we use `Object` as the external input type for `ArrayType`, and determine its type at runtime in `MapObjects`.
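
      For illustration, a hedged usage sketch close in spirit to the new `RowEncoderSuite` test (exact helper calls may differ slightly between Spark versions):

      ```scala
      import org.apache.spark.sql.Row
      import org.apache.spark.sql.catalyst.encoders.RowEncoder
      import org.apache.spark.sql.types._

      // An ArrayType column in the external Row can now be backed by a plain Array,
      // not only by a Seq; MapObjects determines the concrete type at runtime.
      val schema = new StructType().add("arr", ArrayType(IntegerType))
      val encoder = RowEncoder(schema)
      val internalRow = encoder.toRow(Row(Array(1, 2, 3)))  // previously only Seq(1, 2, 3) was accepted
      ```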
      
      ## How was this patch tested?
      
      new test in `RowEncoderSuite`
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #13138 from cloud-fan/map-object.
      c36ca651
    • Sean Owen's avatar
      [SPARK-15290][BUILD] Move annotations, like @Since / @DeveloperApi, into spark-tags · 122302cb
      Sean Owen authored
      ## What changes were proposed in this pull request?
      
      (See https://github.com/apache/spark/pull/12416 where most of this was already reviewed and committed; this is just the module structure and move part. This change does not move the annotations into test scope, which was apparently the problem last time.)
      
      Rename `spark-test-tags` -> `spark-tags`; move common annotations like `Since` to `spark-tags`
      
      ## How was this patch tested?
      
      Jenkins tests.
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #13074 from srowen/SPARK-15290.
      122302cb
    • Xiangrui Meng's avatar
      [SPARK-14906][ML] Copy linalg in PySpark to new ML package · 8ad9f08c
      Xiangrui Meng authored
      ## What changes were proposed in this pull request?
      
      Copy the linalg (Vector/Matrix and VectorUDT/MatrixUDT) in PySpark to new ML package.
      
      ## How was this patch tested?
      Existing tests.
      
      Author: Xiangrui Meng <meng@databricks.com>
      Author: Liang-Chi Hsieh <simonh@tw.ibm.com>
      Author: Liang-Chi Hsieh <viirya@gmail.com>
      
      Closes #13099 from viirya/move-pyspark-vector-matrix-udt4.
      8ad9f08c
  3. May 16, 2016
  4. May 15, 2016
    • Sean Zhong's avatar
      [SPARK-15253][SQL] Support old table schema config key... · 4a5ee195
      Sean Zhong authored
      [SPARK-15253][SQL] Support old table schema config key "spark.sql.sources.schema" for DESCRIBE TABLE
      
      ## What changes were proposed in this pull request?
      
      "DESCRIBE table" is broken when table schema is stored at key "spark.sql.sources.schema".
      
      Originally, we used spark.sql.sources.schema to store the schema of a data source table.
      After SPARK-6024, we removed this flag. Although we are not using spark.sql.sources.schema any more, we still need to support it.
      
      ## How was this patch tested?
      
      Unit test.
      
      When using Spark 2.0 to load a table generated by Spark 1.2:
      Before the change:
      `DESCRIBE table` => "Schema of this table is inferred at runtime"

      After the change:
      `DESCRIBE table` => correct output.
      
      Author: Sean Zhong <seanzhong@databricks.com>
      
      Closes #13073 from clockfly/spark-15253.
      4a5ee195
    • Zheng RuiFeng's avatar
      [MINOR] Fix Typos · c7efc56c
      Zheng RuiFeng authored
      ## What changes were proposed in this pull request?
      1. Rename matrix args in BreezeUtil to upper case to match the doc
      2. Fix several typos in ML and SQL
      
      ## How was this patch tested?
      manual tests
      
      Author: Zheng RuiFeng <ruifengz@foxmail.com>
      
      Closes #13078 from zhengruifeng/fix_ann.
      c7efc56c
    • Sean Owen's avatar
      [SPARK-12972][CORE] Update org.apache.httpcomponents.httpclient · f5576a05
      Sean Owen authored
      ## What changes were proposed in this pull request?
      
      (Retry of https://github.com/apache/spark/pull/13049)
      
      - update to httpclient 4.5 / httpcore 4.4
      - remove some defunct exclusions
      - manage httpmime version to match
      - update selenium / httpunit to support 4.5 (possible now that Jetty 9 is used)
      
      ## How was this patch tested?
      
      Jenkins tests. Also, locally running the same test command of one Jenkins profile that failed: `mvn -Phadoop-2.6 -Pyarn -Phive -Phive-thriftserver -Pkinesis-asl ...`
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #13117 from srowen/SPARK-12972.2.
      f5576a05
  5. May 14, 2016
    • wm624@hotmail.com's avatar
      [SPARK-15096][ML] LogisticRegression MultiClassSummarizer numClasses can fail... · 354f8f11
      wm624@hotmail.com authored
      [SPARK-15096][ML] LogisticRegression MultiClassSummarizer numClasses can fail if no valid labels are found
      
      ## What changes were proposed in this pull request?
      
      Throw a better exception when no valid labels are found and `empty.max` would otherwise be thrown.
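
      For illustration, a minimal sketch of the guard (names are hypothetical, not the actual MultiClassSummarizer code):

      ```scala
      import org.apache.spark.SparkException

      // Fail with a descriptive message instead of letting `empty.max` surface as an
      // opaque UnsupportedOperationException.
      def numClasses(labelCounts: Map[Double, Long]): Int = {
        if (labelCounts.isEmpty) {
          throw new SparkException("No valid labels found; cannot determine the number of classes.")
        }
        labelCounts.keys.max.toInt + 1
      }
      ```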
      
      ## How was this patch tested?
      
      Add a new unit test, which calls histogram with empty numClasses.
      
      Author: wm624@hotmail.com <wm624@hotmail.com>
      
      Closes #12969 from wangmiao1981/logisticR.
      354f8f11
    • Nicholas Tietz's avatar
      [SPARK-15197][DOCS] Added Scaladoc for countApprox and countByValueApprox parameters · 0f1f31d3
      Nicholas Tietz authored
      This pull request simply adds Scaladoc documentation of the parameters for countApprox and countByValueApprox.
      
      This is an important documentation change, as it clarifies what should be passed in for the timeout. Without units, this was previously unclear.
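
      As a hedged illustration of the unit being documented (the timeout is in milliseconds):

      ```scala
      import org.apache.spark.sql.SparkSession

      val spark = SparkSession.builder().master("local[*]").appName("countApprox-units").getOrCreate()
      val rdd = spark.sparkContext.parallelize(1 to 1000000)

      // Wait at most 10 seconds (10000 ms) for the count job, accepting a 95%-confidence estimate.
      val partial = rdd.countApprox(timeout = 10000L, confidence = 0.95)
      println(partial.initialValue)
      ```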
      
      I did not open a JIRA ticket per my understanding of the project contribution guidelines; as they state, the description in the ticket would be essentially just what is in the PR. If I should open one, let me know and I will do so.
      
      Author: Nicholas Tietz <nicholas.tietz@crosschx.com>
      
      Closes #12955 from ntietz/rdd-countapprox-docs.
      0f1f31d3
  6. May 13, 2016
    • Tejas Patil's avatar
      [TRIVIAL] Add () to SparkSession's builder function · 4210e2a6
      Tejas Patil authored
      ## What changes were proposed in this pull request?
      
      Was trying out `SparkSession` for the first time and the given class doc (when copied as is) did not work over Spark shell:
      
      ```
      scala> SparkSession.builder().master("local").appName("Word Count").getOrCreate()
      <console>:27: error: org.apache.spark.sql.SparkSession.Builder does not take parameters
             SparkSession.builder().master("local").appName("Word Count").getOrCreate()
      ```
      
      Adding () to the builder method in SparkSession.
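
      For illustration, a simplified sketch of why the empty parameter list matters (the object name here is a hypothetical stand-in, not the actual SparkSession source):

      ```scala
      // Declaring the method with an empty parameter list lets callers write either
      // builder or builder(); without the parens in the definition, builder() fails to compile.
      object SparkSessionLike {
        class Builder { def master(m: String): Builder = this }
        def builder(): Builder = new Builder
      }

      SparkSessionLike.builder().master("local")  // compiles
      SparkSessionLike.builder.master("local")    // also compiles
      ```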
      
      ## How was this patch tested?
      
      ```
      scala> SparkSession.builder().master("local").appName("Word Count").getOrCreate()
      res0: org.apache.spark.sql.SparkSession = org.apache.spark.sql.SparkSession@65c17e38
      
      scala> SparkSession.builder.master("local").appName("Word Count").getOrCreate()
      res1: org.apache.spark.sql.SparkSession = org.apache.spark.sql.SparkSession@65c17e38
      ```
      
      Author: Tejas Patil <tejasp@fb.com>
      
      Closes #13086 from tejasapatil/doc_correction.
      4210e2a6
    • hyukjinkwon's avatar
      [SPARK-15267][SQL] Refactor options for JDBC and ORC data sources and change... · 3ded5bc4
      hyukjinkwon authored
      [SPARK-15267][SQL] Refactor options for JDBC and ORC data sources and change default compression for ORC
      
      ## What changes were proposed in this pull request?
      
      Currently, the Parquet, JSON and CSV data sources have a class for their options (`ParquetOptions`, `JSONOptions` and `CSVOptions`).
      
      It is convenient to gather a source's options into a class for easier management. Currently, the `JDBC`, `Text`, `libsvm` and `ORC` data sources do not have such a class. It would be nicer if these options were in a unified format so that options can be added and handled consistently.
      
      This PR refactors the options in Spark internal data sources adding new classes, `OrcOptions`, `TextOptions`, `JDBCOptions` and `LibSVMOptions`.
      
      Also, this PR changes the default compression codec for ORC from `NONE` to `SNAPPY`.
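
      A hedged usage example (output paths are illustrative, and the `none` option value is an assumption; see `OrcOptions` for the accepted names): with this change ORC output is snappy-compressed by default, and the codec can still be overridden per write.

      ```scala
      import org.apache.spark.sql.SparkSession

      val spark = SparkSession.builder().master("local[*]").appName("orc-compression-demo").getOrCreate()
      val df = spark.range(1000).toDF("id")

      df.write.orc("/tmp/orc_snappy_by_default")                            // SNAPPY is now the default
      df.write.option("compression", "none").orc("/tmp/orc_uncompressed")   // explicit override
      ```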
      
      ## How was this patch tested?
      
      Existing tests should cover the refactoring, and unit tests in `OrcHadoopFsRelationSuite` cover the change of the default compression codec for ORC.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #13048 from HyukjinKwon/SPARK-15267.
      3ded5bc4
    • Sean Owen's avatar
      10a83896
    • Sean Owen's avatar
      [SPARK-12972][CORE] Update org.apache.httpcomponents.httpclient · c74a6c3f
      Sean Owen authored
      ## What changes were proposed in this pull request?
      
      - update httpcore/httpclient to latest
      - centralize version management
      - remove excludes that are no longer relevant according to SBT/Maven dep graphs
      - also manage httpmime to match httpclient
      
      ## How was this patch tested?
      
      Jenkins tests, plus review of dependency graphs from SBT/Maven, and review of test-dependencies.sh  output
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #13049 from srowen/SPARK-12972.
      c74a6c3f
    • Holden Karau's avatar
      [SPARK-15061][PYSPARK] Upgrade to Py4J 0.10.1 · 382dbc12
      Holden Karau authored
      ## What changes were proposed in this pull request?
      
      This upgrades to Py4J 0.10.1, which reduces syscall overhead in the Java gateway (see https://github.com/bartdag/py4j/issues/201). Related: https://issues.apache.org/jira/browse/SPARK-6728.
      
      ## How was this patch tested?
      
      Existing doctests & unit tests pass
      
      Author: Holden Karau <holden@us.ibm.com>
      
      Closes #13064 from holdenk/SPARK-15061-upgrade-to-py4j-0.10.1.
      382dbc12