  1. Nov 25, 2016
  2. Nov 24, 2016
  3. Nov 23, 2016
    • [SPARK-18510][SQL] Follow up to address comments in #15951 · 27d81d00
      Shixiong Zhu authored
      
      ## What changes were proposed in this pull request?
      
      This PR addressed the rest comments in #15951.
      
      ## How was this patch tested?
      
      Jenkins
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #15997 from zsxwing/SPARK-18510-follow-up.
      
      (cherry picked from commit 223fa218)
      Signed-off-by: Tathagata Das <tathagata.das1565@gmail.com>
      27d81d00
    • [SPARK-18510] Fix data corruption from inferred partition column dataTypes · 15d2cf26
      Burak Yavuz authored
      ## What changes were proposed in this pull request?
      
      ### The Issue
      
      If I specify my schema when doing
      ```scala
      spark.read
        .schema(someSchemaWherePartitionColumnsAreStrings)
      ```
      but the partition inference infers the column as IntegerType (or presumably LongType or DoubleType, i.e. fixed-size types), then once UnsafeRows are generated, your data will be corrupted.
      
      ### Proposed solution
      
      The partition handling code path is kind of a mess. In my fix I'm probably adding to the mess, but at least trying to standardize the code path.
      
      The real issue is that a user who uses the `spark.read` code path can never clearly specify what the partition columns are. Even if the user specifies the fields in `schema`, we practically ignore what was provided and fall back to our inferred data types. What happens in the end is data corruption.
      
      My solution tries to fix this by always trying to infer partition columns the first time you specify the table. Once we find what the partition columns are, we try to find them in the user-specified schema and use the dataType provided there, or fall back to the smallest common data type.
      
      We will ALWAYS append partition columns to the user's schema, even if they didn't ask for it. We will only use the data type they provided if they specified it. While this is confusing, this has been the behavior since Spark 1.6, and I didn't want to change this behavior in the QA period of Spark 2.1. We may revisit this decision later.
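      
      A hedged sketch of the scenario being fixed (the path, column names and data layout below are hypothetical, not taken from the PR):
      
      ```scala
      import org.apache.spark.sql.types._
      
      // Hypothetical partitioned layout: /data/events/part=1/..., /data/events/part=2/...
      // The user wants the partition column `part` kept as a string, even though partition
      // inference would guess an integer type for it.
      val userSchema = new StructType()
        .add("value", StringType)
        .add("part", StringType)   // user-provided type for the partition column
      
      val df = spark.read
        .schema(userSchema)
        .parquet("/data/events")   // hypothetical location
      
      // With this fix, `part` should come back with the user-provided StringType instead of
      // being read through the inferred fixed-size type and corrupting the generated rows.
      ```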
      
      A side effect of this PR is that we won't need https://github.com/apache/spark/pull/15942 if this PR goes in.
      
      ## How was this patch tested?
      
      Regression tests
      
      Author: Burak Yavuz <brkyvz@gmail.com>
      
      Closes #15951 from brkyvz/partition-corruption.
      
      (cherry picked from commit 0d1bf2b6)
      Signed-off-by: Tathagata Das <tathagata.das1565@gmail.com>
      15d2cf26
    • [SPARK-18050][SQL] do not create default database if it already exists · 835f03f3
      Wenchen Fan authored
      
      ## What changes were proposed in this pull request?
      
      When we try to create the default database, we ask Hive to do nothing if it already exists. However, Hive will log an error message instead of doing nothing, and the error message is quite annoying and confusing.
      
      In this PR, we only create the default database if it doesn't exist.
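      
      A minimal sketch of the check-before-create idea (the trait and method names below are illustrative stand-ins, not Spark's actual catalog classes):
      
      ```scala
      // Illustrative interface standing in for the metastore-backed catalog.
      trait MetastoreLike {
        def databaseExists(db: String): Boolean
        def createDatabase(db: String, ignoreIfExists: Boolean): Unit
      }
      
      // Only ask the metastore to create the default database when it is actually missing,
      // so Hive never gets a chance to log the confusing "already exists" error.
      def ensureDefaultDatabase(catalog: MetastoreLike): Unit = {
        if (!catalog.databaseExists("default")) {
          catalog.createDatabase("default", ignoreIfExists = true)
        }
      }
      ```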
      
      ## How was this patch tested?
      
      N/A
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #15993 from cloud-fan/default-db.
      
      (cherry picked from commit f129ebcd)
      Signed-off-by: Andrew Or <andrewor14@gmail.com>
      835f03f3
    • [SPARK-18522][SQL] Explicit contract for column stats serialization · 599dac15
      Reynold Xin authored
      
      ## What changes were proposed in this pull request?
      The current implementation of column stats uses the base64 encoding of the internal UnsafeRow format to persist statistics (in table properties in the Hive metastore). This is an internal format that is not stable across different versions of Spark and should NOT be used for persistence. In addition, it would be better if statistics stored in the catalog were human readable.
      
      This pull request introduces the following changes:
      
      1. Created a single ColumnStat class for all data types. All data types track the same set of statistics.
      2. Updated the implementation for stats collection to get rid of the dependency on internal data structures (e.g. InternalRow, or storing DateType as an int32). For example, previously dates were stored as a single integer, but are now stored as java.sql.Date. When we implement the next steps of CBO, we can add code to convert those back into internal types again.
      3. Documented clearly what JVM data types are being used to store what data.
      4. Defined a simple Map[String, String] interface for serializing and deserializing column stats into/from the catalog (a sketch of the idea follows this list).
      5. Rearranged the method/function structure so it is more clear what the supported data types are, and also moved how stats are generated into ColumnStat class so they are easy to find.
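      
      A hedged sketch of the idea behind item 4 above (the case class, helper names and property keys are hypothetical, not Spark's actual ones):
      
      ```scala
      // Persist per-column statistics as plain string key/value pairs so they stay human
      // readable and stable across Spark versions, instead of base64-encoded UnsafeRows.
      case class SimpleColumnStat(distinctCount: Long, nullCount: Long, maxLen: Long, avgLen: Double)
      
      def toCatalogProps(col: String, s: SimpleColumnStat): Map[String, String] = Map(
        s"colStats.$col.distinctCount" -> s.distinctCount.toString,
        s"colStats.$col.nullCount"     -> s.nullCount.toString,
        s"colStats.$col.maxLen"        -> s.maxLen.toString,
        s"colStats.$col.avgLen"        -> s.avgLen.toString
      )
      
      def fromCatalogProps(col: String, props: Map[String, String]): SimpleColumnStat =
        SimpleColumnStat(
          distinctCount = props(s"colStats.$col.distinctCount").toLong,
          nullCount     = props(s"colStats.$col.nullCount").toLong,
          maxLen        = props(s"colStats.$col.maxLen").toLong,
          avgLen        = props(s"colStats.$col.avgLen").toDouble)
      ```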
      
      ## How was this patch tested?
      Removed most of the original test cases created for column statistics, and added three very simple ones to cover all the cases. The three test cases validate:
      1. Roundtrip serialization works.
      2. Behavior when analyzing non-existent column or unsupported data type column.
      3. Result for stats collection for all valid data types.
      
      Also moved parser related tests into a parser test suite and added an explicit serialization test for the Hive external catalog.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #15959 from rxin/SPARK-18522.
      
      (cherry picked from commit 70ad07a9)
      Signed-off-by: Wenchen Fan <wenchen@databricks.com>
      599dac15
    • [SPARK-18557] Downgrade confusing memory leak warning message · e11d7c68
      Reynold Xin authored
      
      ## What changes were proposed in this pull request?
      TaskMemoryManager has a memory leak detector that gets called at task completion callback and checks whether any memory has not been released. If they are not released by the time the callback is invoked, TaskMemoryManager releases them.
      
      The current error message says something like the following:
      ```
      WARN  [Executor task launch worker-0]
      org.apache.spark.memory.TaskMemoryManager - leak 16.3 MB memory from
      org.apache.spark.unsafe.map.BytesToBytesMap33fb6a15
      ```
      In practice, there are multiple reasons why these can be triggered in the normal code path (e.g. limit, or task failures), and the fact that these messages are logged means the "leak" is fixed by TaskMemoryManager.
      
      To not confuse users, this patch downgrades the message from warning to debug level, and avoids using the word "leak" since it is not actually a leak.
      
      ## How was this patch tested?
      N/A - this is a simple logging improvement.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #15989 from rxin/SPARK-18557.
      
      (cherry picked from commit 9785ed40)
      Signed-off-by: Herman van Hovell <hvanhovell@databricks.com>
      e11d7c68
    • [SPARK-18545][SQL] Verify number of hive client RPCs in PartitionedTablePerfStatsSuite · 539c193a
      Eric Liang authored
      ## What changes were proposed in this pull request?
      
      This would help catch accidental O(n) calls to the hive client as in https://issues.apache.org/jira/browse/SPARK-18507
      
      ## How was this patch tested?
      
      Checked that the test fails before https://issues.apache.org/jira/browse/SPARK-18507 was patched. cc cloud-fan
      
      Author: Eric Liang <ekl@databricks.com>
      
      Closes #15985 from ericl/spark-18545.
      
      (cherry picked from commit 85235ed6)
      Signed-off-by: Wenchen Fan <wenchen@databricks.com>
      539c193a
    • [SPARK-18053][SQL] compare unsafe and safe complex-type values correctly · ebeb0514
      Wenchen Fan authored
      
      ## What changes were proposed in this pull request?
      
      In Spark SQL, some expressions may output safe-format values, e.g. `CreateArray`, `CreateStruct`, `Cast`, etc. When we compare two values, we should be able to compare the safe and unsafe formats.
      
      `GreaterThan`, `LessThan`, etc. in Spark SQL already handle this, but `EqualTo` doesn't. This PR fixes it.
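      
      A hedged illustration of the kind of comparison this concerns (the data and column names are made up, and a SparkSession named `spark` is assumed):
      
      ```scala
      import spark.implicits._
      import org.apache.spark.sql.functions.array
      
      val df = Seq((Seq(1, 2), 1)).toDF("arr", "v")
      
      // `array(...)` produces its result in the safe format, while `arr` typically arrives
      // in the unsafe format; equality between the two should still evaluate correctly.
      df.filter($"arr" === array($"v", $"v" + 1)).show()
      ```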
      
      ## How was this patch tested?
      
      new unit test and regression test
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #15929 from cloud-fan/type-aware.
      
      (cherry picked from commit 84284e8c)
      Signed-off-by: Herman van Hovell <hvanhovell@databricks.com>
      ebeb0514
    • [SPARK-18073][DOCS][WIP] Migrate wiki to spark.apache.org web site · 5f198d20
      Sean Owen authored
      
      ## What changes were proposed in this pull request?
      
      Updates links to the wiki to links to the new location of content on spark.apache.org.
      
      ## How was this patch tested?
      
      Doc builds
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #15967 from srowen/SPARK-18073.1.
      
      (cherry picked from commit 7e0cd1d9)
      Signed-off-by: Sean Owen <sowen@cloudera.com>
      5f198d20
    • [SPARK-18179][SQL] Throws analysis exception with a proper message for... · fabb5aea
      hyukjinkwon authored
      [SPARK-18179][SQL] Throws analysis exception with a proper message for unsupported argument types in reflect/java_method function
      
      ## What changes were proposed in this pull request?
      
      This PR proposes throwing an `AnalysisException` with a proper message rather than `NoSuchElementException` with the message ` key not found: TimestampType` when unsupported types are given to `reflect` and `java_method` functions.
      
      ```scala
      spark.range(1).selectExpr("reflect('java.lang.String', 'valueOf', cast('1990-01-01' as timestamp))")
      ```
      
      produces
      
      **Before**
      
      ```
      java.util.NoSuchElementException: key not found: TimestampType
        at scala.collection.MapLike$class.default(MapLike.scala:228)
        at scala.collection.AbstractMap.default(Map.scala:59)
        at scala.collection.MapLike$class.apply(MapLike.scala:141)
        at scala.collection.AbstractMap.apply(Map.scala:59)
        at org.apache.spark.sql.catalyst.expressions.CallMethodViaReflection$$anonfun$findMethod$1$$anonfun$apply$1.apply(CallMethodViaReflection.scala:159)
      ...
      ```
      
      **After**
      
      ```
      cannot resolve 'reflect('java.lang.String', 'valueOf', CAST('1990-01-01' AS TIMESTAMP))' due to data type mismatch: arguments from the third require boolean, byte, short, integer, long, float, double or string expressions; line 1 pos 0;
      'Project [unresolvedalias(reflect(java.lang.String, valueOf, cast(1990-01-01 as timestamp)), Some(<function1>))]
      +- Range (0, 1, step=1, splits=Some(2))
      ...
      ```
      
      Added message is,
      
      ```
      arguments from the third require boolean, byte, short, integer, long, float, double or string expressions
      ```
      
      ## How was this patch tested?
      
      Tests added in `CallMethodViaReflection`.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #15694 from HyukjinKwon/SPARK-18179.
      
      (cherry picked from commit 2559fb4b)
      Signed-off-by: Reynold Xin <rxin@databricks.com>
      fabb5aea
  4. Nov 22, 2016
    • [SPARK-18501][ML][SPARKR] Fix spark.glm errors when fitting on collinear data · fc5fee83
      Yanbo Liang authored
      
      ## What changes were proposed in this pull request?
      * Fix SparkR ```spark.glm``` errors when fitting on collinear data, since ```standard error of coefficients, t value and p value``` are not available in this condition.
      * Scala/Python GLM summary should throw an exception if users request the ```standard error of coefficients, t value and p value``` but the underlying WLS was solved by local "l-bfgs".
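      
      A hedged Scala sketch of the second point (the training DataFrame `collinearDF` is hypothetical; the summary accessors are the existing ML API ones):
      
      ```scala
      import org.apache.spark.ml.regression.GeneralizedLinearRegression
      
      val model = new GeneralizedLinearRegression()
        .setFamily("gaussian")
        .fit(collinearDF)   // hypothetical data with linearly dependent feature columns
      
      val summary = model.summary
      // When the underlying WLS had to fall back to the local "l-bfgs" solver because of the
      // collinearity, requesting these should now fail with a clear exception instead of
      // returning meaningless numbers:
      //   summary.coefficientStandardErrors
      //   summary.tValues
      //   summary.pValues
      ```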
      
      ## How was this patch tested?
      Add unit tests.
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #15930 from yanboliang/spark-18501.
      
      (cherry picked from commit 982b82e3)
      Signed-off-by: Yanbo Liang <ybliang8@gmail.com>
      fc5fee83
    • [SPARK-18530][SS][KAFKA] Change Kafka timestamp column type to TimestampType · 3be2d1e0
      Shixiong Zhu authored
      
      ## What changes were proposed in this pull request?
      
      Changed Kafka timestamp column type to TimestampType.
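      
      A hedged sketch of what this enables (broker address and topic are placeholders; a SparkSession named `spark` is assumed): the Kafka source's `timestamp` column is now a real TimestampType column, so it can feed time-based operations directly.
      
      ```scala
      import spark.implicits._
      import org.apache.spark.sql.functions.window
      
      val kafka = spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "host:9092")   // placeholder broker
        .option("subscribe", "events")                     // placeholder topic
        .load()
      
      // Group records into one-minute windows using the Kafka timestamp column.
      val counts = kafka.groupBy(window($"timestamp", "1 minute")).count()
      ```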
      
      ## How was this patch tested?
      
      `test("Kafka column types")`.
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #15969 from zsxwing/SPARK-18530.
      
      (cherry picked from commit d0212eb0)
      Signed-off-by: Shixiong Zhu <shixiong@databricks.com>
      3be2d1e0
    • [SPARK-18533] Raise correct error upon specification of schema for datasource... · 4b96ffb1
      Dilip Biswal authored
      [SPARK-18533] Raise correct error upon specification of schema for datasource tables created using CTAS
      
      ## What changes were proposed in this pull request?
      Fixes the inconsistency in the error raised between data source and Hive serde tables when a schema is specified in a CTAS scenario. In the process, the grammar for CREATE TABLE (data source) is simplified.
      
      **before:**
      ``` SQL
      spark-sql> create table t2 (c1 int, c2 int) using parquet as select * from t1;
      Error in query:
      mismatched input 'as' expecting {<EOF>, '.', 'OPTIONS', 'CLUSTERED', 'PARTITIONED'}(line 1, pos 64)
      
      == SQL ==
      create table t2 (c1 int, c2 int) using parquet as select * from t1
      ----------------------------------------------------------------^^^
      ```
      
      **After:**
      ```SQL
      spark-sql> create table t2 (c1 int, c2 int) using parquet as select * from t1
               > ;
      Error in query:
      Operation not allowed: Schema may not be specified in a Create Table As Select (CTAS) statement(line 1, pos 0)
      
      == SQL ==
      create table t2 (c1 int, c2 int) using parquet as select * from t1
      ^^^
      ```
      ## How was this patch tested?
      Added a new test in CreateTableAsSelectSuite
      
      Author: Dilip Biswal <dbiswal@us.ibm.com>
      
      Closes #15968 from dilipbiswal/ctas.
      
      (cherry picked from commit 39a1d306)
      Signed-off-by: gatorsmile <gatorsmile@gmail.com>
      4b96ffb1
    • [SPARK-16803][SQL] SaveAsTable does not work when target table is a Hive serde table · 64b9de9c
      gatorsmile authored
      
      ### What changes were proposed in this pull request?
      
      In Spark 2.0, `SaveAsTable` does not work when the target table is a Hive serde table, but Spark 1.6 works.
      
      **Spark 1.6**
      
      ``` Scala
      scala> sql("create table sample.sample stored as SEQUENCEFILE as select 1 as key, 'abc' as value")
      res2: org.apache.spark.sql.DataFrame = []
      
      scala> val df = sql("select key, value as value from sample.sample")
      df: org.apache.spark.sql.DataFrame = [key: int, value: string]
      
      scala> df.write.mode("append").saveAsTable("sample.sample")
      
      scala> sql("select * from sample.sample").show()
      +---+-----+
      |key|value|
      +---+-----+
      |  1|  abc|
      |  1|  abc|
      +---+-----+
      ```
      
      **Spark 2.0**
      
      ``` Scala
      scala> df.write.mode("append").saveAsTable("sample.sample")
      org.apache.spark.sql.AnalysisException: Saving data in MetastoreRelation sample, sample
       is not supported.;
      ```
      
      So far, we do not plan to support it in Spark 2.1 due to the risk. Spark 1.6 works because it internally uses insertInto. But, if we change it back, it will break the semantics of saveAsTable (this method uses by-name resolution instead of the by-position resolution used by insertInto). More changes are needed to support `hive` as a `format` in DataFrameWriter.
      
      Instead, users should use the insertInto API. This PR corrects the error messages so users can understand how to bypass it before we support it in a separate PR.
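      
      A hedged sketch of the workaround that the corrected message points users to, reusing the `df` and table from the example above:
      
      ```scala
      // insertInto resolves columns by position against the existing Hive serde table,
      // so appending works where saveAsTable currently raises the AnalysisException shown above.
      df.write.insertInto("sample.sample")
      ```
      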
      ### How was this patch tested?
      
      Test cases are added
      
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #15926 from gatorsmile/saveAsTableFix5.
      
      (cherry picked from commit 9c42d4a7)
      Signed-off-by: gatorsmile <gatorsmile@gmail.com>
      64b9de9c
    • [SPARK-18373][SPARK-18529][SS][KAFKA] Make failOnDataLoss=false work with Spark jobs · bd338f60
      Shixiong Zhu authored
      ## What changes were proposed in this pull request?
      
      This PR adds `CachedKafkaConsumer.getAndIgnoreLostData` to handle corner cases of `failOnDataLoss=false`.
      
      It also resolves [SPARK-18529](https://issues.apache.org/jira/browse/SPARK-18529) after refactoring the code: a timeout will now throw a TimeoutException.
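      
      A hedged sketch of how the option is set on the Kafka source (broker and topic are placeholders):
      
      ```scala
      val stream = spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "host:9092")   // placeholder broker
        .option("subscribe", "events")                     // placeholder topic
        .option("failOnDataLoss", "false")   // keep the job running if data was aged out or deleted on the broker
        .load()
      ```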
      
      ## How was this patch tested?
      
      Because I cannot find any way to manually control the Kafka server to clean up logs, it's impossible to write unit tests for each corner case. Therefore, I just created `test("stress test for failOnDataLoss=false")`, which should cover most of the corner cases.
      
      I also modified some existing tests to test for both `failOnDataLoss=false` and `failOnDataLoss=true` to make sure it doesn't break existing logic.
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #15820 from zsxwing/failOnDataLoss.
      
      (cherry picked from commit 2fd101b2)
      Signed-off-by: Tathagata Das <tathagata.das1565@gmail.com>
      bd338f60
    • [SPARK-18465] Add 'IF EXISTS' clause to 'UNCACHE' to not throw exceptions when table doesn't exist · fb2ea54a
      Burak Yavuz authored
      
      ## What changes were proposed in this pull request?
      
      While this behavior is debatable, consider the following use case:
      ```sql
      UNCACHE TABLE foo;
      CACHE TABLE foo AS
      SELECT * FROM bar
      ```
      The command above fails the first time you run it, but I want to be able to run it over and over again without changing my code just for the first run.
      The issue is that subsequent `CACHE TABLE` commands do not overwrite the existing table.
      
      Now we can do:
      ```sql
      UNCACHE TABLE IF EXISTS foo;
      CACHE TABLE foo AS
      SELECT * FROM bar
      ```
      
      ## How was this patch tested?
      
      Unit tests
      
      Author: Burak Yavuz <brkyvz@gmail.com>
      
      Closes #15896 from brkyvz/uncache.
      
      (cherry picked from commit bdc8153e)
      Signed-off-by: Herman van Hovell <hvanhovell@databricks.com>
      fb2ea54a
    • [SPARK-18507][SQL] HiveExternalCatalog.listPartitions should only call getTable once · fa360134
      Wenchen Fan authored
      
      ## What changes were proposed in this pull request?
      
      HiveExternalCatalog.listPartitions should only call `getTable` once, instead of calling it for every partition.
      
      ## How was this patch tested?
      
      N/A
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #15978 from cloud-fan/perf.
      
      (cherry picked from commit 702cd403)
      Signed-off-by: Andrew Or <andrewor14@gmail.com>
      fa360134
    • [SPARK-18504][SQL] Scalar subquery with extra group by columns returning incorrect result · 0e624e99
      Nattavut Sutyanyong authored
      
      ## What changes were proposed in this pull request?
      
      This PR blocks an incorrect result scenario in scalar subquery where there are GROUP BY column(s)
      that are not part of the correlated predicate(s).
      
      Example:
      ```scala
      // Incorrect result
      Seq(1).toDF("c1").createOrReplaceTempView("t1")
      Seq((1,1),(1,2)).toDF("c1","c2").createOrReplaceTempView("t2")
      sql("select (select sum(-1) from t2 where t1.c1=t2.c1 group by t2.c2) from t1").show

      // How can selecting a scalar subquery from a 1-row table return 2 rows?
      ```
      
      ## How was this patch tested?
      sql/test, catalyst/test
      A new test case covering the reported problem is added to SubquerySuite.scala.
      
      Author: Nattavut Sutyanyong <nsy.can@gmail.com>
      
      Closes #15936 from nsyca/scalarSubqueryIncorrect-1.
      
      (cherry picked from commit 45ea46b7)
      Signed-off-by: Herman van Hovell <hvanhovell@databricks.com>
      0e624e99
    • [SPARK-18519][SQL] map type can not be used in EqualTo · 0e60e4b8
      Wenchen Fan authored
      
      ## What changes were proposed in this pull request?
      
      Technically, map type is not orderable, but it can be used in equality comparison. However, due to a limitation of the current implementation, map type can't be used in equality comparison, so it can't be a join key or grouping key.
      
      This PR makes this limitation explicit, to avoid wrong results.
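      
      A hedged illustration of queries that now fail analysis with an explicit error instead of risking wrong results (the data is made up; a SparkSession named `spark` is assumed):
      
      ```scala
      import spark.implicits._
      
      val df = Seq((Map(1 -> "a"), 1), (Map(2 -> "b"), 2)).toDF("m", "v")
      
      // Both of the following rely on equality of map values, so they are expected to be rejected:
      // df.groupBy("m").count()                          // map column as grouping key
      // df.as("l").join(df.as("r"), $"l.m" === $"r.m")   // map column as join key
      ```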
      
      ## How was this patch tested?
      
      updated tests.
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #15956 from cloud-fan/map-type.
      
      (cherry picked from commit bb152cdf)
      Signed-off-by: Herman van Hovell <hvanhovell@databricks.com>
      0e60e4b8
    • [SPARK-18447][DOCS] Fix the markdown for `Note:`/`NOTE:`/`Note that` across... · 36cd10d1
      hyukjinkwon authored
      [SPARK-18447][DOCS] Fix the markdown for `Note:`/`NOTE:`/`Note that` across Python API documentation
      
      ## What changes were proposed in this pull request?
      
      It seems in Python, there are
      
      - `Note:`
      - `NOTE:`
      - `Note that`
      - `.. note::`
      
      This PR proposes to fix those to `.. note::` to be consistent.
      
      **Before**
      
      <img width="567" alt="2016-11-21 1 18 49" src="https://cloud.githubusercontent.com/assets/6477701/20464305/85144c86-af88-11e6-8ee9-90f584dd856c.png">
      
      <img width="617" alt="2016-11-21 12 42 43" src="https://cloud.githubusercontent.com/assets/6477701/20464263/27be5022-af88-11e6-8577-4bbca7cdf36c.png">
      
      **After**
      
      <img width="554" alt="2016-11-21 1 18 42" src="https://cloud.githubusercontent.com/assets/6477701/20464306/8fe48932-af88-11e6-83e1-fc3cbf74407d.png">
      
      <img width="628" alt="2016-11-21 12 42 51" src="https://cloud.githubusercontent.com/assets/6477701/20464264/2d3e156e-af88-11e6-93f3-cab8d8d02983.png
      
      ">
      
      ## How was this patch tested?
      
      The notes were found via
      
      ```bash
      grep -r "Note: " .
      grep -r "NOTE: " .
      grep -r "Note that " .
      ```
      
      And then fixed one by one comparing with API documentation.
      
      After that, manually tested via `make html` under `./python/docs`.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #15947 from HyukjinKwon/SPARK-18447.
      
      (cherry picked from commit 933a6548)
      Signed-off-by: Sean Owen <sowen@cloudera.com>
      36cd10d1
    • [SPARK-18514][DOCS] Fix the markdown for `Note:`/`NOTE:`/`Note that` across R API documentation · 63aa01ff
      hyukjinkwon authored
      ## What changes were proposed in this pull request?
      
      It seems in R, there are
      
      - `Note:`
      - `NOTE:`
      - `Note that`
      
      This PR proposes to fix those to `Note:` to be consistent.
      
      **Before**
      
      ![2016-11-21 11 30 07](https://cloud.githubusercontent.com/assets/6477701/20468848/2f27b0fa-afde-11e6-89e3-993701269dbe.png)
      
      **After**
      
      ![2016-11-21 11 29 44](https://cloud.githubusercontent.com/assets/6477701/20468851/39469664-afde-11e6-9929-ad80be7fc405.png)
      
      ## How was this patch tested?
      
      The notes were found via
      
      ```bash
      grep -r "NOTE: " .
      grep -r "Note that " .
      ```
      
      And then fixed one by one comparing with API documentation.
      
      After that, manually tested via `sh create-docs.sh` under `./R`.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #15952 from HyukjinKwon/SPARK-18514.
      
      (cherry picked from commit 4922f9cd)
      Signed-off-by: Sean Owen <sowen@cloudera.com>
      63aa01ff
    • [SPARK-18444][SPARKR] SparkR running in yarn-cluster mode should not download Spark package. · c7021407
      Yanbo Liang authored
      
      ## What changes were proposed in this pull request?
      When running a SparkR job in yarn-cluster mode, it will download the Spark package from the Apache website, which is not necessary.
      ```
      ./bin/spark-submit --master yarn-cluster ./examples/src/main/r/dataframe.R
      ```
      The following is output:
      ```
      Attaching package: ‘SparkR’
      
      The following objects are masked from ‘package:stats’:
      
          cov, filter, lag, na.omit, predict, sd, var, window
      
      The following objects are masked from ‘package:base’:
      
          as.data.frame, colnames, colnames<-, drop, endsWith, intersect,
          rank, rbind, sample, startsWith, subset, summary, transform, union
      
      Spark not found in SPARK_HOME:
      Spark not found in the cache directory. Installation will start.
      MirrorUrl not provided.
      Looking for preferred site from apache website...
      ......
      ```
      There's no ```SPARK_HOME``` in yarn-cluster mode since the R process runs on a remote host of the YARN cluster rather than on the client host. The JVM comes up first and the R process then connects to it. So in such cases we should never have to download Spark, as Spark is already running.
      
      ## How was this patch tested?
      Offline test.
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #15888 from yanboliang/spark-18444.
      
      (cherry picked from commit acb97157)
      Signed-off-by: Yanbo Liang <ybliang8@gmail.com>
      c7021407
  5. Nov 21, 2016
  6. Nov 20, 2016
    • [SPARK-18467][SQL] Extracts method for preparing arguments from StaticInvoke,... · fb4e6359
      Takuya UESHIN authored
      [SPARK-18467][SQL] Extracts method for preparing arguments from StaticInvoke, Invoke and NewInstance and modify to short circuit if arguments have null when `needNullCheck == true`.
      
      ## What changes were proposed in this pull request?
      
      This PR extracts a method for preparing arguments from `StaticInvoke`, `Invoke` and `NewInstance`, and modifies them to short circuit if arguments contain `null` when `needNullCheck == true`.
      
      The steps are as follows:
      
      1. Introduce `InvokeLike` to extract common logic from `StaticInvoke`, `Invoke` and `NewInstance` to prepare arguments.
      `StaticInvoke` and `Invoke` risked exceeding the 64KB JVM method size limit when preparing arguments, but after this patch they can handle it because they share the argument-preparation code of `NewInstance`, which handles the limit well.
      
      2. Remove unneeded null checking and fix the nullability of `NewInstance`.
      Some null checks are avoided because the corresponding expressions are not nullable.
      
      3. Modify to short circuit if arguments have `null` when `needNullCheck == true`.
      If `needNullCheck == true`, preparing arguments can be skipped as soon as one of them is found to be `null`, so the code is modified to short circuit in that case.
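      
      A hedged, simplified illustration of the short-circuit idea in step 3 (plain Scala, not the actual generated code):
      
      ```scala
      // When a null check is required, return null as soon as any argument is null,
      // without invoking the target method at all.
      def invokeWithNullCheck(args: Seq[Any], needNullCheck: Boolean)(call: Seq[Any] => Any): Any = {
        if (needNullCheck && args.contains(null)) {
          null
        } else {
          call(args)
        }
      }
      ```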
      
      ## How was this patch tested?
      
      Existing tests.
      
      Author: Takuya UESHIN <ueshin@happy-camper.st>
      
      Closes #15901 from ueshin/issues/SPARK-18467.
      
      (cherry picked from commit 65854797)
      Signed-off-by: Wenchen Fan <wenchen@databricks.com>
      fb4e6359
    • [HOTFIX][SQL] Fix DDLSuite failure. · f8662db7
      Reynold Xin authored
      
      (cherry picked from commit b625a36e)
      Signed-off-by: Reynold Xin <rxin@databricks.com>
      f8662db7
    • [SPARK-17732][SQL] Revert ALTER TABLE DROP PARTITION should support comparators · cffaf503
      Herman van Hovell authored
      This reverts commit 1126c319.
      
      Author: Herman van Hovell <hvanhovell@databricks.com>
      
      Closes #15948 from hvanhovell/SPARK-17732.
      cffaf503
    • [SPARK-3359][BUILD][DOCS] Print examples and disable group and tparam tags in javadoc · bc3e7b3b
      hyukjinkwon authored
      ## What changes were proposed in this pull request?
      
      This PR proposes/fixes two things.
      
      - Remove many errors when generating javadoc with Java 8, caused by the unrecognisable tags `tparam` and `group`.
      
        ```
        [error] .../spark/mllib/target/java/org/apache/spark/ml/classification/Classifier.java:18: error: unknown tag: group
        [error]   /** group setParam */
        [error]       ^
        [error] .../spark/mllib/target/java/org/apache/spark/ml/classification/Classifier.java:8: error: unknown tag: tparam
        [error]  * tparam FeaturesType  Type of input features.  E.g., <code>Vector</code>
        [error]    ^
        ...
        ```
      
        It does not fully resolve the problem but removes many errors. It seems both `group` and `tparam` are unrecognisable in javadoc, and we can't print them nicely in javadoc the way we do for `example` here because they appear differently (both examples can be found in http://spark.apache.org/docs/2.0.2/api/scala/index.html#org.apache.spark.ml.classification.Classifier).
      
      - Print `example` in javadoc.
        Currently, there are a few `example` tags in several places.
      
        ```
        ./graphx/src/main/scala/org/apache/spark/graphx/Graph.scala:   * example This operation might be used to evaluate a graph
        ./graphx/src/main/scala/org/apache/spark/graphx/Graph.scala:   * example We might use this operation to change the vertex values
        ./graphx/src/main/scala/org/apache/spark/graphx/Graph.scala:   * example This function might be used to initialize edge
        ./graphx/src/main/scala/org/apache/spark/graphx/Graph.scala:   * example This function might be used to initialize edge
        ./graphx/src/main/scala/org/apache/spark/graphx/Graph.scala:   * example This function might be used to initialize edge
        ./graphx/src/main/scala/org/apache/spark/graphx/Graph.scala:   * example We can use this function to compute the in-degree of each
        ./graphx/src/main/scala/org/apache/spark/graphx/Graph.scala:   * example This function is used to update the vertices with new values based on external data.
        ./graphx/src/main/scala/org/apache/spark/graphx/GraphLoader.scala:   * example Loads a file in the following format:
        ./graphx/src/main/scala/org/apache/spark/graphx/GraphOps.scala:   * example This function is used to update the vertices with new
        ./graphx/src/main/scala/org/apache/spark/graphx/GraphOps.scala:   * example This function can be used to filter the graph based on some property, without
        ./graphx/src/main/scala/org/apache/spark/graphx/Pregel.scala: * example We can use the Pregel abstraction to implement PageRank:
        ./graphx/src/main/scala/org/apache/spark/graphx/VertexRDD.scala: * example Construct a `VertexRDD` from a plain RDD:
        ./repl/scala-2.10/src/main/scala/org/apache/spark/repl/SparkCommandLine.scala: * example new SparkCommandLine(Nil).settings
        ./repl/scala-2.10/src/main/scala/org/apache/spark/repl/SparkIMain.scala:   * example addImports("org.apache.spark.SparkContext")
        ./sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/LiteralGenerator.scala: * example {{{
        ```
      
      **Before**
      
        <img width="505" alt="2016-11-20 2 43 23" src="https://cloud.githubusercontent.com/assets/6477701/20457285/26f07e1c-aecb-11e6-9ae9-d9dee66845f4.png">
      
      **After**
        <img width="499" alt="2016-11-20 1 27 17" src="https://cloud.githubusercontent.com/assets/6477701/20457240/409124e4-aeca-11e6-9a91-0ba514148b52.png
      
      ">
      
      ## How was this patch tested?
      
      Manually tested by `jekyll build` with Java 7 and 8
      
      ```
      java version "1.7.0_80"
      Java(TM) SE Runtime Environment (build 1.7.0_80-b15)
      Java HotSpot(TM) 64-Bit Server VM (build 24.80-b11, mixed mode)
      ```
      
      ```
      java version "1.8.0_45"
      Java(TM) SE Runtime Environment (build 1.8.0_45-b14)
      Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)
      ```
      
      Note: this does not make sbt unidoc succeed with Java 8 yet, but it reduces the number of errors with Java 8.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #15939 from HyukjinKwon/SPARK-3359-javadoc.
      
      (cherry picked from commit c528812c)
      Signed-off-by: Sean Owen <sowen@cloudera.com>
      bc3e7b3b
  7. Nov 19, 2016