  1. Jun 06, 2015
  2. Jun 05, 2015
    • Dong Wang's avatar
      [SPARK-6964] [SQL] Support Cancellation in the Thrift Server · eb19d3f7
      Dong Wang authored
      Support runInBackground in SparkExecuteStatementOperation, and add cancellation
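
      A minimal sketch of the run-in-background-plus-cancel pattern described here (toy classes, not Spark's actual `SparkExecuteStatementOperation`): the statement runs on a background thread, and `cancel()` interrupts it through the returned future.

      ```scala
      import java.util.concurrent.{Callable, Executors}

      // Toy sketch: run a statement on a background thread and allow cancellation.
      class BackgroundStatement[T](work: () => T) {
        private val pool = Executors.newSingleThreadExecutor()
        private val future = pool.submit(new Callable[T] { def call(): T = work() })

        // Interrupts the statement if it is still running, then shuts the pool down.
        def cancel(): Boolean =
          try future.cancel(true) finally pool.shutdownNow()

        // Blocks until the statement finishes and returns its result.
        def result(): T =
          try future.get() finally pool.shutdown()
      }
      ```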
      
      Author: Dong Wang <dong@databricks.com>
      
      Closes #6207 from dongwang218/SPARK-6964-jdbc-cancel and squashes the following commits:
      
      687c113 [Dong Wang] fix 100 characters
      7bfa2a7 [Dong Wang] fix merge
      380480f [Dong Wang] fix for liancheng's comments
      eb3e385 [Dong Wang] small nit
      341885b [Dong Wang] small fix
      3d8ebf8 [Dong Wang] add spark.sql.hive.thriftServer.async flag
      04142c3 [Dong Wang] set SQLSession for async execution
      184ec35 [Dong Wang] keep hive conf
      819ae03 [Dong Wang] [SPARK-6964][SQL][WIP] Support Cancellation in the Thrift Server
      eb19d3f7
    • Reynold Xin's avatar
      [SPARK-8114][SQL] Remove some wildcard import on TestSQLContext._ cont'd. · 6ebe419f
      Reynold Xin authored
      Fixed the following packages:
      sql.columnar
      sql.jdbc
      sql.json
      sql.parquet
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #6667 from rxin/testsqlcontext_wildcard and squashes the following commits:
      
      134a776 [Reynold Xin] Fixed compilation break.
      6da7b69 [Reynold Xin] [SPARK-8114][SQL] Remove some wildcard import on TestSQLContext._ cont'd.
      6ebe419f
    • Shivaram Venkataraman's avatar
      [SPARK-8085] [SPARKR] Support user-specified schema in read.df · 12f5eaee
      Shivaram Venkataraman authored
      cc davies sun-rui
      
      Author: Shivaram Venkataraman <shivaram@cs.berkeley.edu>
      
      Closes #6620 from shivaram/sparkr-read-schema and squashes the following commits:
      
      16a6726 [Shivaram Venkataraman] Fix loadDF to pass schema Also add a unit test
      a229877 [Shivaram Venkataraman] Use wrapper function to DataFrameReader
      ee70ba8 [Shivaram Venkataraman] Support user-specified schema in read.df
      12f5eaee
    • Cheng Lian's avatar
      [SQL] Simplifies binary node pattern matching · bc0d76a2
      Cheng Lian authored
      This PR is a simpler version of #2764, and adds `unapply` methods to the following binary nodes for simpler pattern matching:
      
      - `BinaryExpression`
      - `BinaryComparison`
      - `BinaryArithmetics`
      
      This enables nested pattern matching for binary nodes. For example, the following pattern matching
      
      ```scala
      case p: BinaryComparison if p.left.dataType == StringType &&
                                  p.right.dataType == DateType =>
        p.makeCopy(Array(p.left, Cast(p.right, StringType)))
      ```
      
      can be simplified to
      
      ```scala
      case p @ BinaryComparison(l @ StringType(), r @ DateType()) =>
        p.makeCopy(Array(l, Cast(r, StringType)))
      ```
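
      A self-contained sketch of how such extractors can work, using toy stand-ins rather than Spark's actual classes: a companion `unapply` on the binary node plus boolean `unapply` methods on the type objects enable the nested match.

      ```scala
      sealed trait DataType
      sealed trait Expression { def dataType: DataType }

      // Type objects double as extractors: `StringType()` matches any
      // expression whose dataType is StringType.
      case object BooleanType extends DataType
      case object StringType extends DataType {
        def unapply(e: Expression): Boolean = e.dataType == StringType
      }
      case object DateType extends DataType {
        def unapply(e: Expression): Boolean = e.dataType == DateType
      }

      case class Attribute(name: String, dataType: DataType) extends Expression
      case class Cast(child: Expression, dataType: DataType) extends Expression

      sealed trait BinaryComparison extends Expression {
        def left: Expression
        def right: Expression
        def dataType: DataType = BooleanType
      }
      // Companion `unapply` enables `BinaryComparison(l, r)` patterns.
      object BinaryComparison {
        def unapply(e: Expression): Option[(Expression, Expression)] = e match {
          case b: BinaryComparison => Some((b.left, b.right))
          case _                   => None
        }
      }
      case class GreaterThan(left: Expression, right: Expression) extends BinaryComparison

      // The nested pattern from the message above, applied to the toy AST.
      def coerce(e: Expression): Expression = e match {
        case BinaryComparison(l @ StringType(), r @ DateType()) =>
          GreaterThan(l, Cast(r, StringType)) // toy stand-in for p.makeCopy(...)
        case other => other
      }
      ```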
      
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #6537 from liancheng/binary-node-patmat and squashes the following commits:
      
      a3bf5fe [Cheng Lian] Fixes compilation error introduced while rebasing
      b738986 [Cheng Lian] Renames `l`/`r` to `left`/`right` or `lhs`/`rhs`
      14900ae [Cheng Lian] Simplifies binary node pattern matching
      bc0d76a2
    • Reynold Xin's avatar
      [SPARK-8114][SQL] Remove some wildcard import on TestSQLContext._ · 8f16b94a
      Reynold Xin authored
      I kept some of the sql imports there to avoid changing too many lines.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #6661 from rxin/remove-wildcard-import-sqlcontext and squashes the following commits:
      
      c265347 [Reynold Xin] Fixed ListTablesSuite failure.
      de9d491 [Reynold Xin] Fixed tests.
      73b5365 [Reynold Xin] Mima.
      8f6b642 [Reynold Xin] Fixed style violation.
      443f6e8 [Reynold Xin] [SPARK-8113][SQL] Remove some wildcard import on TestSQLContext._
      8f16b94a
  3. Jun 04, 2015
    • Reynold Xin's avatar
      [SPARK-7440][SQL] Remove physical Distinct operator in favor of Aggregate · 2bcdf8c2
      Reynold Xin authored
      This patch replaces Distinct with Aggregate in the optimizer, so Distinct will become
      more efficient over time as we optimize Aggregate (via Tungsten).
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #6637 from rxin/replace-distinct and squashes the following commits:
      
      b3cc50e [Reynold Xin] Mima excludes.
      93d6117 [Reynold Xin] Code review feedback.
      87e4741 [Reynold Xin] [SPARK-7440][SQL] Remove physical Distinct operator in favor of Aggregate.
      2bcdf8c2
    • Cheolsoo Park's avatar
      [SPARK-6909][SQL] Remove Hive Shim code · 0526fea4
      Cheolsoo Park authored
      This is a follow-up on #6393. I am removing the following files in this PR.
      ```
      ./sql/hive/v0.13.1/src/main/scala/org/apache/spark/sql/hive/Shim13.scala
      ./sql/hive-thriftserver/v0.13.1/src/main/scala/org/apache/spark/sql/hive/thriftserver/Shim13.scala
      ```
      Basically, I refactored the shim code as follows:
      * Rewrote code directly with Hive 0.13 methods, or
      * Converted code into private methods, or
      * Extracted code into separate classes
      
      But for leftover code that didn't fit any of these cases, I created a HiveShim object. For example, helper functions that wrap Hive 0.13 methods to work around Hive bugs are placed there.
      
      Author: Cheolsoo Park <cheolsoop@netflix.com>
      
      Closes #6604 from piaozhexiu/SPARK-6909 and squashes the following commits:
      
      5dccc20 [Cheolsoo Park] Remove hive shim code
      0526fea4
    • Thomas Omans's avatar
      [SPARK-7743] [SQL] Parquet 1.7 · cd3176bd
      Thomas Omans authored
      Resolves [SPARK-7743](https://issues.apache.org/jira/browse/SPARK-7743).
      
      Trivial changes to versions and package names, plus a small fix in `ParquetTableOperations.scala`:
      
      ```diff
      -    val readContext = getReadSupport(configuration).init(
      +    val readContext = ParquetInputFormat.getReadSupportInstance(configuration).init(
      ```
      
      This is needed since `ParquetInputFormat.getReadSupport` was made package-private in the latest release.
      
      Thanks
      -- Thomas Omans
      
      Author: Thomas Omans <tomans@cj.com>
      
      Closes #6597 from eggsby/SPARK-7743 and squashes the following commits:
      
      2df0d1b [Thomas Omans] [SPARK-7743] [SQL] Upgrading parquet version to 1.7.0
      cd3176bd
    • Mike Dusenberry's avatar
      [SPARK-7969] [SQL] Added a DataFrame.drop function that accepts a Column reference. · df7da07a
      Mike Dusenberry authored
      Added a `DataFrame.drop` function that accepts a `Column` reference rather than a `String`, and added associated unit tests. It basically iterates through the `DataFrame`'s columns to find one whose expression is equivalent to that of the `Column` argument supplied to the function.
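
      The search-by-equivalent-column idea can be sketched on a toy model (illustrative names, not Spark's API): matching on the attribute's identity rather than its name keeps duplicate column names after a join unambiguous.

      ```scala
      // Toy stand-ins: an output attribute with a unique id, and a frame that
      // drops a column by comparing ids instead of names.
      case class Attr(exprId: Long, name: String)
      case class Frame(output: Seq[Attr]) {
        def drop(col: Attr): Frame = {
          val remaining = output.filterNot(_.exprId == col.exprId)
          // If no matching column is found, the frame is returned unchanged.
          if (remaining.size == output.size) this else Frame(remaining)
        }
      }
      ```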
      
      Author: Mike Dusenberry <dusenberrymw@gmail.com>
      
      Closes #6585 from dusenberrymw/SPARK-7969_Drop_method_on_Dataframes_should_handle_Column and squashes the following commits:
      
      514727a [Mike Dusenberry] Updating the @since tag of the drop(Column) function doc to reflect version 1.4.1 instead of 1.4.0.
      2f1bb4e [Mike Dusenberry] Adding an additional assert statement to the 'drop column after join' unit test in order to make sure the correct column was indeed left over.
      6bf7c0e [Mike Dusenberry] Minor code formatting change.
      e583888 [Mike Dusenberry] Adding more Python doctests for the df.drop with column reference function to test joined datasets that have columns with the same name.
      5f74401 [Mike Dusenberry] Updating DataFrame.drop with column reference function to use logicalPlan.output to prevent ambiguities resulting from columns with the same name. Also added associated unit tests for joined datasets with duplicate column names.
      4b8bbe8 [Mike Dusenberry] Adding Python support for Dataframe.drop with a Column reference.
      986129c [Mike Dusenberry] Added a DataFrame.drop function that accepts a Column reference rather than a String, and added associated unit tests.  Basically iterates through the DataFrame to find a column with an expression that is equivalent to one supplied to the function.
      df7da07a
    • Davies Liu's avatar
      [SPARK-7956] [SQL] Use Janino to compile SQL expressions into bytecode · c8709dcf
      Davies Liu authored
      In order to reduce the overhead of codegen, this PR switches to Janino to compile SQL expressions into bytecode.
      
      After this change, the time needed to compile a SQL expression drops from roughly 100ms to 5ms, which is necessary to turn on codegen for general workloads as well as tests.
      
      cc rxin
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #6479 from davies/janino and squashes the following commits:
      
      cc689f5 [Davies Liu] remove globalLock
      262d848 [Davies Liu] Merge branch 'master' of github.com:apache/spark into janino
      eec3a33 [Davies Liu] address comments from Josh
      f37c8c3 [Davies Liu] fix DecimalType and cast to String
      202298b [Davies Liu] Merge branch 'master' of github.com:apache/spark into janino
      a21e968 [Davies Liu] fix style
      0ed3dc6 [Davies Liu] Merge branch 'master' of github.com:apache/spark into janino
      551a851 [Davies Liu] fix tests
      c3bdffa [Davies Liu] remove print
      6089ce5 [Davies Liu] change logging level
      7e46ac3 [Davies Liu] fix style
      d8f0f6c [Davies Liu] Merge branch 'master' of github.com:apache/spark into janino
      da4926a [Davies Liu] fix tests
      03660f3 [Davies Liu] WIP: use Janino to compile Java source
      f2629cd [Davies Liu] Merge branch 'master' of github.com:apache/spark into janino
      f7d66cf [Davies Liu] use template based string for codegen
      c8709dcf
  4. Jun 03, 2015
    • Reynold Xin's avatar
      [SPARK-8074] Parquet should throw AnalysisException during setup for data... · 939e4f3d
      Reynold Xin authored
      [SPARK-8074] Parquet should throw AnalysisException during setup for data type/name related failures.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #6608 from rxin/parquet-analysis and squashes the following commits:
      
      b5dc8e2 [Reynold Xin] Code review feedback.
      5617cf6 [Reynold Xin] [SPARK-8074] Parquet should throw AnalysisException during setup for data type/name related failures.
      939e4f3d
    • animesh's avatar
      [SPARK-7980] [SQL] Support SQLContext.range(end) · d053a31b
      animesh authored
      1. range() overloaded in SQLContext.scala
      2. range() modified in python sql context.py
      3. Tests added accordingly in DataFrameSuite.scala and python sql tests.py
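
      The overload itself can be sketched in a couple of lines (a toy standalone version, not `SQLContext`): the single-argument form delegates to the two-argument form with a start of 0.

      ```scala
      // Toy sketch: range(end) delegates to range(start, end).
      def range(start: Long, end: Long): Seq[Long] = start until end
      def range(end: Long): Seq[Long] = range(0L, end)
      ```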
      
      Author: animesh <animesh@apache.spark>
      
      Closes #6609 from animeshbaranawal/SPARK-7980 and squashes the following commits:
      
      935899c [animesh] SPARK-7980:python+scala changes
      d053a31b
    • Patrick Wendell's avatar
      [SPARK-7801] [BUILD] Updating versions to SPARK 1.5.0 · 2c4d550e
      Patrick Wendell authored
      Author: Patrick Wendell <patrick@databricks.com>
      
      Closes #6328 from pwendell/spark-1.5-update and squashes the following commits:
      
      2f42d02 [Patrick Wendell] A few more excludes
      4bebcf0 [Patrick Wendell] Update to RC4
      61aaf46 [Patrick Wendell] Using new release candidate
      55f1610 [Patrick Wendell] Another exclude
      04b4f04 [Patrick Wendell] More issues with transient 1.4 changes
      36f549b [Patrick Wendell] [SPARK-7801] [BUILD] Updating versions to SPARK 1.5.0
      2c4d550e
    • Yin Huai's avatar
      [SPARK-7973] [SQL] Increase the timeout of two CliSuite tests. · f1646e10
      Yin Huai authored
      https://issues.apache.org/jira/browse/SPARK-7973
      
      Author: Yin Huai <yhuai@databricks.com>
      
      Closes #6525 from yhuai/SPARK-7973 and squashes the following commits:
      
      763b821 [Yin Huai] Also change the timeout of "Single command with -e" to 2 minutes.
      e598a08 [Yin Huai] Increase the timeout to 3 minutes.
      f1646e10
    • Wenchen Fan's avatar
      [SPARK-7562][SPARK-6444][SQL] Improve error reporting for expression data type mismatch · d38cf217
      Wenchen Fan authored
      It seems hard to find a common pattern of checking types in `Expression`. Sometimes we know what input types we need(like `And`, we know we need two booleans), sometimes we just have some rules(like `Add`, we need 2 numeric types which are equal). So I defined a general interface `checkInputDataTypes` in `Expression` which returns a `TypeCheckResult`. `TypeCheckResult` can tell whether this expression passes the type checking or what the type mismatch is.
      
      This PR mainly applies input type checking to arithmetic and predicate expressions.
      
      TODO: apply type checking interface to more expressions.
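
      A hedged sketch of the interface described above, with simplified names and string-based types standing in for Spark's `DataType`s:

      ```scala
      sealed trait TypeCheckResult { def isSuccess: Boolean }
      case object TypeCheckSuccess extends TypeCheckResult { val isSuccess = true }
      case class TypeCheckFailure(message: String) extends TypeCheckResult { val isSuccess = false }

      trait Expr {
        def childTypes: Seq[String]
        // Each expression reports whether its inputs type-check, and if not, why.
        def checkInputDataTypes(): TypeCheckResult
      }

      // An `Add`-style rule: exactly two inputs of the same numeric type.
      case class Add(childTypes: Seq[String]) extends Expr {
        private val numeric = Set("int", "long", "double")
        def checkInputDataTypes(): TypeCheckResult = childTypes match {
          case Seq(l, r) if l == r && numeric(l) => TypeCheckSuccess
          case Seq(l, r) => TypeCheckFailure(s"differing or non-numeric types: $l, $r")
          case _         => TypeCheckFailure("Add takes exactly two children")
        }
      }
      ```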
      
      Author: Wenchen Fan <cloud0fan@outlook.com>
      
      Closes #6405 from cloud-fan/6444 and squashes the following commits:
      
      b5ff31b [Wenchen Fan] address comments
      b917275 [Wenchen Fan] rebase
      39929d9 [Wenchen Fan] add todo
      0808fd2 [Wenchen Fan] make constrcutor of TypeCheckResult private
      3bee157 [Wenchen Fan] and decimal type coercion rule for binary comparison
      8883025 [Wenchen Fan] apply type check interface to CaseWhen
      cffb67c [Wenchen Fan] to have resolved call the data type check function
      6eaadff [Wenchen Fan] add equal type constraint to EqualTo
      3affbd8 [Wenchen Fan] more fixes
      654d46a [Wenchen Fan] improve tests
      e0a3628 [Wenchen Fan] improve error message
      1524ff6 [Wenchen Fan] fix style
      69ca3fe [Wenchen Fan] add error message and tests
      c71d02c [Wenchen Fan] fix hive tests
      6491721 [Wenchen Fan] use value class TypeCheckResult
      7ae76b9 [Wenchen Fan] address comments
      cb77e4f [Wenchen Fan] Improve error reporting for expression data type mismatch
      d38cf217
    • Josh Rosen's avatar
      [SPARK-7691] [SQL] Refactor CatalystTypeConverter to use type-specific row accessors · cafd5056
      Josh Rosen authored
      This patch significantly refactors CatalystTypeConverters to both clean up the code and enable these conversions to work with future Project Tungsten features.
      
      At a high level, I've reorganized the code so that all functions dealing with the same type are grouped together into type-specific subclasses of `CatalystTypeConverter`.  In addition, I've added new methods that allow the Catalyst Row -> Scala Row conversions to access the Catalyst row's fields through type-specific `getTYPE()` methods rather than the generic `get()` / `Row.apply` methods.  This refactoring is a blocker to being able to unit test new operators that I'm developing as part of Project Tungsten, since those operators may output `UnsafeRow` instances which don't support the generic `get()`.
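
      The type-specific layout can be sketched as follows (illustrative names, with rows modeled as plain `Seq[Any]`): each converter exposes a typed accessor instead of the generic `get()`, and a mismatched value fails fast with a `ClassCastException`.

      ```scala
      // Toy sketch: one converter per type, each with a typed accessor.
      abstract class TypeConverter[T] {
        def toScala(row: Seq[Any], ordinal: Int): T
      }
      object IntConverter extends TypeConverter[Int] {
        // asInstanceOf makes type mismatches fail fast with ClassCastException.
        def toScala(row: Seq[Any], ordinal: Int): Int = row(ordinal).asInstanceOf[Int]
      }
      object StringConverter extends TypeConverter[String] {
        def toScala(row: Seq[Any], ordinal: Int): String = row(ordinal).asInstanceOf[String]
      }
      ```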
      
      The stricter use of types here has uncovered some bugs in other parts of Spark SQL:
      
      - #6217: DescribeCommand is assigned wrong output attributes in SparkStrategies
      - #6218: DataFrame.describe() should cast all aggregates to String
      - #6400: Use output schema, not relation schema, for data source input conversion
      
      Spark SQL currently has undefined behavior for what happens when you try to create a DataFrame from user-specified rows whose values don't match the declared schema.  According to the `createDataFrame()` Scaladoc:
      
      >  It is important to make sure that the structure of every [[Row]] of the provided RDD matches the provided schema. Otherwise, there will be runtime exception.
      
      Given this, it sounds like it's technically not a breach of our API contract to fail fast when the data types don't match. However, there appear to be many cases where we don't fail even though the types don't match. For example, `JavaHashingTFSuite.hasingTF` passes a column of integer values for a "label" column which is supposed to contain floats.  This column isn't actually read or modified as part of query processing, so its concrete type doesn't seem to matter. In other cases, generic numeric aggregates may tolerate being called with different numeric types than the schema specified, which can be okay due to numeric conversions.
      
      In the long run, we will probably want to come up with precise semantics for implicit type conversions / widening when converting Java / Scala rows to Catalyst rows.  Until then, though, I think that failing fast with a ClassCastException is a reasonable behavior; this is the approach taken in this patch.  Note that certain optimizations in the inbound conversion functions for primitive types mean that we'll probably preserve the old undefined behavior in a majority of cases.
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #6222 from JoshRosen/catalyst-converters-refactoring and squashes the following commits:
      
      740341b [Josh Rosen] Optimize method dispatch for primitive type conversions
      befc613 [Josh Rosen] Add tests to document Option-handling behavior.
      5989593 [Josh Rosen] Use new SparkFunSuite base in CatalystTypeConvertersSuite
      6edf7f8 [Josh Rosen] Re-add convertToScala(), since a Hive test still needs it
      3f7b2d8 [Josh Rosen] Initialize converters lazily so that the attributes are resolved first
      6ad0ebb [Josh Rosen] Fix JavaHashingTFSuite ClassCastException
      677ff27 [Josh Rosen] Fix null handling bug; add tests.
      8033d4c [Josh Rosen] Fix serialization error in UserDefinedGenerator.
      85bba9d [Josh Rosen] Fix wrong input data in InMemoryColumnarQuerySuite
      9c0e4e1 [Josh Rosen] Remove last use of convertToScala().
      ae3278d [Josh Rosen] Throw ClassCastException errors during inbound conversions.
      7ca7fcb [Josh Rosen] Comments and cleanup
      1e87a45 [Josh Rosen] WIP refactoring of CatalystTypeConverters
      cafd5056
  5. Jun 02, 2015
    • Cheng Lian's avatar
      [SQL] [TEST] [MINOR] Follow-up of PR #6493, use Guava API to ensure Java 6 friendliness · 5cd6a63d
      Cheng Lian authored
      This is a follow-up of PR #6493, which has been reverted in branch-1.4 because it uses Java 7 specific APIs and breaks Java 6 build. This PR replaces those APIs with equivalent Guava ones to ensure Java 6 friendliness.
      
      cc andrewor14 pwendell, this should also be backported to branch-1.4.
      
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #6547 from liancheng/override-log4j and squashes the following commits:
      
      c900cfd [Cheng Lian] Addresses Shixiong's comment
      72da795 [Cheng Lian] Uses Guava API to ensure Java 6 friendliness
      5cd6a63d
    • Cheng Lian's avatar
      [SPARK-8014] [SQL] Avoid premature metadata discovery when writing a... · 686a45f0
      Cheng Lian authored
      [SPARK-8014] [SQL] Avoid premature metadata discovery when writing a HadoopFsRelation with a save mode other than Append
      
      The current code references the schema of the DataFrame to be written before checking the save mode. This triggers expensive metadata discovery prematurely. For save modes other than `Append`, this metadata discovery is useless, since we either ignore the result (for `Ignore` and `ErrorIfExists`) or delete existing files (for `Overwrite`) later.
      
      This PR fixes the issue by deferring metadata discovery until after the save mode check.
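
      The ordering can be sketched with a toy writer (not `HadoopFsRelation`'s real API): schema discovery is wrapped in a `lazy val` and the save mode is checked first, so `Ignore`-style modes never pay the discovery cost.

      ```scala
      sealed trait SaveMode
      case object Append extends SaveMode
      case object Ignore extends SaveMode

      // Toy writer: `discoverSchema` stands in for expensive metadata discovery.
      class Writer(discoverSchema: () => Seq[String]) {
        var discoveries = 0
        private lazy val schema: Seq[String] = { discoveries += 1; discoverSchema() }

        def save(mode: SaveMode): Option[Seq[String]] = mode match {
          case Ignore => None          // mode checked first: schema never computed
          case Append => Some(schema)  // only now is discovery triggered
        }
      }
      ```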
      
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #6583 from liancheng/spark-8014 and squashes the following commits:
      
      1aafabd [Cheng Lian] Updates comments
      088abaa [Cheng Lian] Avoids schema merging and partition discovery when data schema and partition schema are defined
      8fbd93f [Cheng Lian] Fixes SPARK-8014
      686a45f0
    • Cheng Lian's avatar
      [SPARK-8037] [SQL] Ignores files whose name starts with dot in HadoopFsRelation · 1bb5d716
      Cheng Lian authored
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #6581 from liancheng/spark-8037 and squashes the following commits:
      
      d08e97b [Cheng Lian] Ignores files whose name starts with dot in HadoopFsRelation
      1bb5d716
    • Yin Huai's avatar
      [SPARK-8023][SQL] Add "deterministic" attribute to Expression to avoid... · 0f80990b
      Yin Huai authored
      [SPARK-8023][SQL] Add "deterministic" attribute to Expression to avoid collapsing nondeterministic projects.
      
      This closes #6570.
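
      The attribute's use can be sketched with toy expressions (illustrative, not Spark's optimizer): adjacent projects may be collapsed only when every expression involved is deterministic.

      ```scala
      // Toy expressions carrying a `deterministic` flag.
      sealed trait Expr { def deterministic: Boolean }
      case class Col(name: String) extends Expr { val deterministic = true }
      case object Rand extends Expr { val deterministic = false }

      // Collapsing two adjacent projects is safe only if nothing is nondeterministic.
      def canCollapse(upper: Seq[Expr], lower: Seq[Expr]): Boolean =
        (upper ++ lower).forall(_.deterministic)
      ```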
      
      Author: Yin Huai <yhuai@databricks.com>
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #6573 from rxin/deterministic and squashes the following commits:
      
      356cd22 [Reynold Xin] Added unit test for the optimizer.
      da3fde1 [Reynold Xin] Merge pull request #6570 from yhuai/SPARK-8023
      da56200 [Yin Huai] Comments.
      e38f264 [Yin Huai] Comment.
      f9d6a73 [Yin Huai] Add a deterministic method to Expression.
      0f80990b
    • Yin Huai's avatar
      [SPARK-8020] [SQL] Spark SQL conf in spark-defaults.conf make metadataHive get... · 7b7f7b6c
      Yin Huai authored
      [SPARK-8020] [SQL] Spark SQL conf in spark-defaults.conf make metadataHive get constructed too early
      
      https://issues.apache.org/jira/browse/SPARK-8020
      
      Author: Yin Huai <yhuai@databricks.com>
      
      Closes #6571 from yhuai/SPARK-8020-1 and squashes the following commits:
      
      0398f5b [Yin Huai] First populate the SQLConf and then construct executionHive and metadataHive.
      7b7f7b6c
    • Davies Liu's avatar
      [SPARK-6917] [SQL] DecimalType is not read back when non-native type exists · bcb47ad7
      Davies Liu authored
      cc yhuai
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #6558 from davies/decimalType and squashes the following commits:
      
      c877ca8 [Davies Liu] Update ParquetConverter.scala
      48cc57c [Davies Liu] Update ParquetConverter.scala
      b43845c [Davies Liu] add test
      3b4a94f [Davies Liu] DecimalType is not read back when non-native type exists
      bcb47ad7
  6. Jun 01, 2015
  7. May 31, 2015
    • Wenchen Fan's avatar
      [SPARK-7952][SPARK-7984][SQL] equality check between boolean type and numeric type is broken. · a0e46a0d
      Wenchen Fan authored
      The original code has several problems:
      * `true <=> 1` will return false as we didn't set a rule to handle it.
      * `true = a`, where `a` is not a `Literal` but evaluates to 1, will return false as we only handle literal values.
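
      The direction of the fix can be sketched with a toy rule (illustrative names): instead of only rewriting literals, cast the boolean side whenever one operand is boolean and the other numeric, so non-literal operands are covered too.

      ```scala
      sealed trait Expr
      case class Lit(value: Any, tpe: String) extends Expr
      case class Attr(name: String, tpe: String) extends Expr
      case class Cast(child: Expr, tpe: String) extends Expr
      case class EqualTo(left: Expr, right: Expr) extends Expr

      def typeOf(e: Expr): String = e match {
        case Lit(_, t)  => t
        case Attr(_, t) => t
        case Cast(_, t) => t
        case _: EqualTo => "boolean"
      }

      // Cast the boolean side to the numeric type, whether or not it is a literal.
      def coerceBooleanEquality(e: Expr): Expr = e match {
        case EqualTo(l, r) if typeOf(l) == "boolean" && typeOf(r) == "int" =>
          EqualTo(Cast(l, "int"), r)
        case EqualTo(l, r) if typeOf(l) == "int" && typeOf(r) == "boolean" =>
          EqualTo(l, Cast(r, "int"))
        case other => other
      }
      ```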
      
      Author: Wenchen Fan <cloud0fan@outlook.com>
      
      Closes #6505 from cloud-fan/tmp1 and squashes the following commits:
      
      77f0f39 [Wenchen Fan] minor fix
      b6401ba [Wenchen Fan] add type coercion for CaseKeyWhen and address comments
      ebc8c61 [Wenchen Fan] use SQLTestUtils and If
      625973c [Wenchen Fan] improve
      9ba2130 [Wenchen Fan] address comments
      fc0d741 [Wenchen Fan] fix style
      2846a04 [Wenchen Fan] fix 7952
      a0e46a0d
    • Reynold Xin's avatar
      [SPARK-3850] Turn style checker on for trailing whitespaces. · 866652c9
      Reynold Xin authored
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #6541 from rxin/trailing-whitespace-on and squashes the following commits:
      
      f72ebe4 [Reynold Xin] [SPARK-3850] Turn style checker on for trailing whitespaces.
      866652c9
    • Reynold Xin's avatar
      [SPARK-3850] Trim trailing spaces for SQL. · 63a50be1
      Reynold Xin authored
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #6535 from rxin/whitespace-sql and squashes the following commits:
      
      de50316 [Reynold Xin] [SPARK-3850] Trim trailing spaces for SQL.
      63a50be1
    • Reynold Xin's avatar
      [SPARK-7975] Add style checker to disallow overriding equals covariantly. · 7896e99b
      Reynold Xin authored
      Author: Reynold Xin <rxin@databricks.com>
      
      This patch had conflicts when merged, resolved by
      Committer: Reynold Xin <rxin@databricks.com>
      
      Closes #6527 from rxin/covariant-equals and squashes the following commits:
      
      e7d7784 [Reynold Xin] [SPARK-7975] Enforce CovariantEqualsChecker
      7896e99b
    • Cheng Lian's avatar
      [SQL] [MINOR] Adds @deprecated Scaladoc entry for SchemaRDD · 8764dcce
      Cheng Lian authored
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #6529 from liancheng/schemardd-deprecation-fix and squashes the following commits:
      
      49765c2 [Cheng Lian] Adds @deprecated Scaladoc entry for SchemaRDD
      8764dcce
  8. May 30, 2015
    • Cheng Lian's avatar
      [SQL] [MINOR] Fixes a minor comment mistake in IsolatedClientLoader · f7fe9e47
      Cheng Lian authored
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #6521 from liancheng/classloader-comment-fix and squashes the following commits:
      
      fc09606 [Cheng Lian] Addresses @srowen's comment
      59945c5 [Cheng Lian] Fixes a minor comment mistake in IsolatedClientLoader
      f7fe9e47
    • Reynold Xin's avatar
      [SPARK-7971] Add JavaDoc style deprecation for deprecated DataFrame methods · c63e1a74
      Reynold Xin authored
      The Scala `@deprecated` annotation doesn't actually show up in JavaDoc.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #6523 from rxin/df-deprecated-javadoc and squashes the following commits:
      
      26da2b2 [Reynold Xin] [SPARK-7971] Add JavaDoc style deprecation for deprecated DataFrame methods.
      c63e1a74
    • Reynold Xin's avatar
      [SQL] Tighten up visibility for JavaDoc. · 14b314dc
      Reynold Xin authored
      I went through all the JavaDocs and tightened up visibility.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #6526 from rxin/sql-1.4-visibility-for-docs and squashes the following commits:
      
      bc37d1e [Reynold Xin] Tighten up visibility for JavaDoc.
      14b314dc