  1. Apr 21, 2016
    • [HOTFIX] Remove wrong DDL tests · 4ac6e75c
      Liang-Chi Hsieh authored
      ## What changes were proposed in this pull request?
      
      As we moved most parsing rules to `SparkSqlParser`, some tests that expected an exception to be thrown are no longer correct.
      
      ## How was this patch tested?
      `DDLCommandSuite`
      
      Author: Liang-Chi Hsieh <simonh@tw.ibm.com>
      
      Closes #12572 from viirya/hotfix-ddl.
    • [SPARK-14779][CORE] Corrected log message in Worker case KillExecutor · d53a51c1
      Bryan Cutler authored
      In o.a.s.deploy.worker.Worker.scala, when receiving a KillExecutor message from an invalid Master, fixed a typo by changing the log message to read "..attempted to kill executor.."
      
      Author: Bryan Cutler <cutlerb@gmail.com>
      
      Closes #12546 from BryanCutler/worker-killexecutor-log-message.
    • [SPARK-14787][SQL] Upgrade Joda-Time library from 2.9 to 2.9.3 · ec2a2760
      hyukjinkwon authored
      ## What changes were proposed in this pull request?
      https://issues.apache.org/jira/browse/SPARK-14787
      
      The possible problems are described in the JIRA above; please refer to it for the motivation behind this PR.
      
      This PR upgrades Joda-Time library from 2.9 to 2.9.3.
      
      ## How was this patch tested?
      
      `sbt scalastyle` and Jenkins tests in this PR.
      
      closes #11847
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #12552 from HyukjinKwon/SPARK-14787.
    • [SPARK-14739][PYSPARK] Fix Vectors parser bugs · 2b8906c4
      Arash Parsa authored
      ## What changes were proposed in this pull request?
      
      The PySpark deserialization has a bug that shows up while deserializing all-zero sparse vectors. This fix filters out empty string tokens before casting, so properly stringified SparseVectors get parsed successfully.
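      A minimal sketch of the parsing fix, using a hypothetical `parse_sparse` helper (the actual change lives in PySpark's vector parser):
      
      ```python
      def parse_sparse(s):
          # Parse a stringified SparseVector such as "(3,[],[])", an all-zero
          # vector. Splitting the empty "[]" contents on "," yields [""], so
          # empty tokens must be filtered out before casting to int/float.
          size_str, idx_str, val_str = s.strip("()").split("[")
          size = int(size_str.rstrip(","))
          indices = [int(t) for t in idx_str.rstrip("],").split(",") if t]
          values = [float(t) for t in val_str.rstrip("]").split(",") if t]
          return size, indices, values
      ```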
      
      ## How was this patch tested?
      
      Standard unit-tests similar to other methods.
      
      Author: Arash Parsa <arash@ip-192-168-50-106.ec2.internal>
      Author: Arash Parsa <arashpa@gmail.com>
      Author: Vishnu Prasad <vishnu667@gmail.com>
      Author: Vishnu Prasad S <vishnu667@gmail.com>
      
      Closes #12516 from arashpa/SPARK-14739.
    • [SPARK-8393][STREAMING] JavaStreamingContext#awaitTermination() throws non-declared InterruptedException · 8bd05c9d
      Sean Owen authored
      
      ## What changes were proposed in this pull request?
      
      `JavaStreamingContext.awaitTermination` methods should be declared as `throws[InterruptedException]` so that this exception can be handled in Java code. Note this is not just a doc change, but an API change, since now (in Java) the method has a checked exception to handle. All await-like methods in Java APIs behave this way, so seems worthwhile for 2.0.
      
      ## How was this patch tested?
      
      Jenkins tests
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #12418 from srowen/SPARK-8393.
    • [SPARK-14753][CORE] remove internal flag in Accumulable · cb51680d
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      the `Accumulable.internal` flag is only used to avoid registering internal accumulators in two cases:
      
      1. `TaskMetrics.createTempShuffleReadMetrics`: the accumulators in the temp shuffle read metrics should not be registered.
      2. `TaskMetrics.fromAccumulatorUpdates`: the created task metrics are only used to post events, so the accumulators inside them should not be registered.
      
      For 1, we can create a `TempShuffleReadMetrics` that doesn't create accumulators; it just keeps the data and merges it at the end.
      For 2, we can un-register these accumulators immediately.
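      An illustrative sketch of the first idea, as a Python stand-in for the Scala internals (the `merge_into` step is an assumption):
      
      ```python
      class TempShuffleReadMetrics(object):
          # A plain holder that records values locally, so no accumulators
          # ever need to be registered for it.
          def __init__(self):
              self.records_read = 0
              self.bytes_read = 0
      
          def merge_into(self, task_metrics):
              # Merge the temporary values into the task's real shuffle read
              # metrics once at the end of the task.
              task_metrics.records_read += self.records_read
              task_metrics.bytes_read += self.bytes_read
      ```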
      
      TODO: remove `internal` flag in `AccumulableInfo` with followup PR
      
      ## How was this patch tested?
      
      existing tests.
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #12525 from cloud-fan/acc.
    • [SPARK-14794][SQL] Don't pass analyze command into Hive · 228128ce
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      We shouldn't pass the analyze command to Hive because some variants would require running MapReduce jobs. For now, let's just always run the no-scan analyze.
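      For reference, a hedged sketch of issuing the no-scan variant (table name hypothetical):
      
      ```python
      # `noscan` gathers basic statistics (e.g. number of files, total size)
      # without launching a job to scan the data.
      sqlContext.sql("ANALYZE TABLE my_table COMPUTE STATISTICS noscan")
      ```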
      
      ## How was this patch tested?
      Updated test case to reflect this change.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #12558 from rxin/parser-analyze.
    • [HOTFIX] Disable flaky tests · 3b9fd517
      Reynold Xin authored
    • [SPARK-14792][SQL] Move as many parsing rules as possible into SQL parser · 77d847dd
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      This patch moves as many parsing rules as possible into SQL parser. There are only three more left after this patch: (1) run native command, (2) analyze, and (3) script IO. These 3 will be dealt with in a follow-up PR.
      
      ## How was this patch tested?
      No test change. This simply moves code around.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #12556 from rxin/SPARK-14792.
    • [SPARK-14786] Remove hive-cli dependency from hive subproject · cfe472a3
      Josh Rosen authored
      The `hive` subproject currently depends on `hive-cli` in order to perform a check to see whether a `SessionState` is an instance of `org.apache.hadoop.hive.cli.CliSessionState` (see #9589). The introduction of this `hive-cli` dependency has caused problems for users whose Hive metastore JAR classpaths don't include the `hive-cli` classes (such as in #11495).
      
      This patch removes this dependency on `hive-cli` and replaces the `isInstanceOf` check with a reflection-based one. I added a Maven Enforcer rule to ban `hive-cli` from the `hive` subproject in order to make sure that this dependency is not accidentally reintroduced.
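      A rough Python analogue of the reflection idea (the actual change is Scala-side): compare fully-qualified class names instead of using an instance-of check, so the hive-cli class never needs to be loadable:
      
      ```python
      def is_cli_session_state(state):
          # Walk the class hierarchy and match on the fully-qualified name,
          # rather than isinstance, which would require importing the class.
          target = "org.apache.hadoop.hive.cli.CliSessionState"
          return any("%s.%s" % (c.__module__, c.__name__) == target
                     for c in type(state).__mro__)
      ```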
      
      /cc rxin yhuai adrian-wang preecet
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #12551 from JoshRosen/remove-hive-cli-dep-from-hive-subproject.
  2. Apr 20, 2016
    • [SPARK-14782][SPARK-14778][SQL] Remove HiveConf dependency from HiveSqlAstBuilder · 80458141
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      The patch removes the HiveConf dependency from HiveSqlAstBuilder. This is required in order to merge HiveSqlParser and SparkSqlAstBuilder, which requires getting rid of the Hive-specific dependencies in HiveSqlParser.
      
      This patch also accomplishes [SPARK-14778] Remove HiveSessionState.substitutor.
      
      ## How was this patch tested?
      This should be covered by existing tests.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #12550 from rxin/SPARK-14782.
    • [HOTFIX] Ignore all Docker integration tests · 90933e2a
      Josh Rosen authored
      The Docker integration tests are failing very often (https://spark-tests.appspot.com/failed-tests) so I think we should disable these suites for now until we have time to improve them.
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #12549 from JoshRosen/ignore-all-docker-tests.
    • [SPARK-14775][SQL] Remove TestHiveSparkSession.rewritePaths · 24f338ba
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      The path rewrite in TestHiveSparkSession is pretty hacky. I think we can remove that complexity and just do a string replacement when we read the query files in. This would remove the overloading of runNativeSql in TestHive, which will simplify the removal of Hive-specific variable substitution.
      
      ## How was this patch tested?
      This is a small test refactoring to simplify test infrastructure.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #12543 from rxin/SPARK-14775.
    • [SPARK-14602][YARN] Use SparkConf to propagate the list of cached files. · f47dbf27
      Marcelo Vanzin authored
      This change avoids using the environment to pass this information, since
      with many jars it's easy to hit limits on certain OSes. Instead, it encodes
      the information into the Spark configuration propagated to the AM.
      
      The first problem that needed to be solved is a chicken & egg issue: the
      config file is distributed using the cache, and it needs to contain information
      about the files that are being distributed. To solve that, the code now treats
      the config archive specially, and uses slightly different code to distribute
      it, so that only its cache path needs to be saved to the config file.
      
      The second problem is that the extra information would show up in the Web UI,
      which made the environment tab even more noisy than it already is when lots
      of jars are listed. This is solved by two changes: the list of cached files
      is now read only once in the AM, and propagated down to the ExecutorRunnable
      code (which actually sends the list to the NMs when starting containers). The
      second change is to unset those config entries after the list is read, so that
      the SparkContext never sees them.
      
      Tested with both client and cluster mode by running "run-example SparkPi". This
      uploads a whole lot of files when run from a build dir (instead of a distribution,
      where the list is cleaned up), and I verified that the configs do not show
      up in the UI.
      
      Author: Marcelo Vanzin <vanzin@cloudera.com>
      
      Closes #12487 from vanzin/SPARK-14602.
    • [SPARK-14769][SQL] Create built-in functionality for variable substitution · 334c293e
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      In order to fully merge the Hive parser and the SQL parser, we'd need to support variable substitution in Spark. The implementation of the substitute algorithm is mostly copied from Hive, but I simplified the overall structure quite a bit and added more comprehensive test coverage.
      
      Note that this pull request does not yet use this functionality anywhere.
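      A minimal sketch of the `${var}` substitution algorithm, assuming a plain dict of variables (the names here are illustrative, not the PR's API; the real implementation also consults configuration):
      
      ```python
      import re
      
      VAR_PATTERN = re.compile(r"\$\{([^}]+)\}")
      
      def substitute(text, variables):
          # Substituted values may themselves contain ${...}, so iterate,
          # with a bound to catch self-referencing definitions.
          for _ in range(40):
              new_text = VAR_PATTERN.sub(
                  lambda m: str(variables.get(m.group(1), m.group(0))), text)
              if new_text == text:
                  return text
              text = new_text
          raise ValueError("too many levels of variable substitution")
      
      # substitute("SELECT * FROM ${db}.logs", {"db": "prod"})
      # -> "SELECT * FROM prod.logs"
      ```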
      
      ## How was this patch tested?
      Added VariableSubstitutionSuite for unit tests.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #12538 from rxin/SPARK-14769.
    • [SPARK-14770][SQL] Remove unused queries in hive module test resources · b28fe448
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      We currently have five folders in queries: clientcompare, clientnegative, clientpositive, negative, and positive. Only clientpositive is used. We can remove the rest.
      
      ## How was this patch tested?
      N/A - removing unused test resources.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #12540 from rxin/SPARK-14770.
    • [SPARK-14749][SQL, TESTS] PlannerSuite failed when it run individually · fd826819
      Subhobrata Dey authored
      ## What changes were proposed in this pull request?
      
      Three test cases, namely
      
      ```
      "count is partially aggregated"
      "count distinct is partially aggregated"
      "mixed aggregates are partially aggregated"
      ```
      
      were failing when running PlannerSuite individually.
      The PR provides a fix for this.
      
      ## How was this patch tested?
      
      unit tests
      
      Author: Subhobrata Dey <sbcd90@gmail.com>
      
      Closes #12532 from sbcd90/plannersuitetestsfix.
    • [SPARK-13842] [PYSPARK] pyspark.sql.types.StructType accessor enhancements · e7791c4f
      Sheamus K. Parkes authored
      ## What changes were proposed in this pull request?
      
      Expand the possible ways to interact with the contents of a `pyspark.sql.types.StructType` instance.
        - Iterating a `StructType` will iterate its fields
          - `[field.name for field in my_structtype]`
        - Indexing with a string will return a field by name
          - `my_structtype['my_field_name']`
        - Indexing with an integer will return a field by position
          - `my_structtype[0]`
        - Indexing with a slice will return a new `StructType` with just the chosen fields:
          - `my_structtype[1:3]`
        - The length is the number of fields (should also provide "truthiness" for free)
          - `len(my_structtype) == 2`
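      Taken together, the new accessors can be exercised like this (field names are illustrative):
      
      ```python
      from pyspark.sql.types import StructType, StructField, StringType, LongType
      
      schema = StructType([StructField("name", StringType()),
                           StructField("age", LongType())])
      
      [field.name for field in schema]  # ['name', 'age']      (iteration)
      schema["name"]                    # StructField by name
      schema[0]                         # StructField by position
      schema[0:1]                       # new StructType with just 'name'
      len(schema) == 2                  # True
      ```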
      
      ## How was this patch tested?
      
      Extended the unit test coverage in the accompanying `tests.py`.
      
      Author: Sheamus K. Parkes <shea.parkes@milliman.com>
      
      Closes #12251 from skparkes/pyspark-structtype-enhance.
    • [SPARK-14678][SQL] Add a file sink log to support versioning and compaction · 7bc94855
      Shixiong Zhu authored
      ## What changes were proposed in this pull request?
      
      This PR adds a special log for FileStreamSink for two purposes:
      
      - Versioning. A future Spark version should be able to read the metadata of an old FileStreamSink.
      - Compaction. As reading from many small files is usually pretty slow, we should compact small metadata files into big files.
      
      FileStreamSinkLog uses a new log format instead of Java serialization. It writes one log file for each batch: the first line of the log file is the version number, followed by multiple JSON lines, each of which is the JSON representation of a FileLog entry.
      
      FileStreamSinkLog compacts log files into a big file every "spark.sql.sink.file.log.compactLen" batches. When compacting, it reads all history logs and merges them with the new batch, and it also drops the entries of files that have been deleted (as marked by FileLog.action). When the reader uses allLogs to list all files, only the visible files are returned (deleted files are dropped).
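      A hedged sketch of the layout and compaction just described (field names such as `path` and `action` are assumptions modeled on FileLog):
      
      ```python
      import json
      
      VERSION = 1  # assumed value; the PR only says the first line is a version number
      
      def write_batch_log(path, entries):
          # One log file per batch: the first line is the version number,
          # followed by one JSON object per line (the FileLog entries).
          with open(path, "w") as f:
              f.write("%d\n" % VERSION)
              for entry in entries:  # e.g. {"path": "...", "action": "add"}
                  f.write(json.dumps(entry) + "\n")
      
      def compact(history, new_batch):
          # Merge all history entries with the new batch, keeping the latest
          # entry per file and dropping files marked as deleted.
          latest = {}
          for entry in history + new_batch:
              latest[entry["path"]] = entry
          return [e for e in latest.values() if e.get("action") != "delete"]
      ```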
      
      ## How was this patch tested?
      
      FileStreamSinkLogSuite
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #12435 from zsxwing/sink-log.
    • [MINOR][ML][PYSPARK] Fix omissive params which should use TypeConverter · 296c384a
      Yanbo Liang authored
      ## What changes were proposed in this pull request?
      #11663 adds type conversion functionality for parameters in Pyspark. This PR finds the omitted ```Param```s that did not pass the corresponding ```TypeConverter``` argument and fixes them. After this PR, all params in pyspark/ml/ use ```TypeConverter```.
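      The pattern this PR enforces looks like the following sketch (the params class and its param are hypothetical):
      
      ```python
      from pyspark.ml.param import Param, Params, TypeConverters
      
      class MyParams(Params):
          # Every Param passes a typeConverter so that values supplied for
          # the param are coerced and validated consistently.
          maxIter = Param(Params._dummy(), "maxIter", "max number of iterations (>= 0)",
                          typeConverter=TypeConverters.toInt)
      ```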
      
      ## How was this patch tested?
      Existing tests.
      
      cc jkbradley sethah
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #12529 from yanboliang/typeConverter.
    • [SPARK-14720][SPARK-13643] Move Hive-specific methods into HiveSessionState and Create a SparkSession class · 8fc267ab
      Andrew Or authored
      
      ## What changes were proposed in this pull request?
      This PR has two main changes.
      1. Move Hive-specific methods from HiveContext to HiveSessionState, which helps the work of removing HiveContext.
      2. Create a SparkSession class, which will later be the entry point for Spark SQL users.
      
      ## How was this patch tested?
      Existing tests
      
      This PR is trying to fix test failures of https://github.com/apache/spark/pull/12485.
      
      Author: Andrew Or <andrew@databricks.com>
      Author: Yin Huai <yhuai@databricks.com>
      
      Closes #12522 from yhuai/spark-session.
    • [SPARK-14741][SQL] Fixed error in reading json file stream inside a partitioned directory · cb8ea9e1
      Tathagata Das authored
      ## What changes were proposed in this pull request?
      
      Consider the following directory structure
      dir/col=X/some-files
      If we create a text-format streaming dataframe on `dir/col=X/`, it should not treat `col` as a partitioning column. Even though the streaming dataframe does not do so, the generated batch dataframes pick up `col` as a partitioning column, causing a mismatch between the streaming source schema and the generated df schema. This leads to a runtime failure:
      ```
      18:55:11.262 ERROR org.apache.spark.sql.execution.streaming.StreamExecution: Query query-0 terminated with error
      java.lang.AssertionError: assertion failed: Invalid batch: c#2 != c#7,type#8
      ```
      The reason is that the partition-inferring code has no idea of a base path above which it should not search for partitions. This PR makes sure that the batch DF is generated with basePath set to the original path on which the file stream source is defined.
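      For intuition, the same `basePath` mechanism is visible in the public batch reader; a hedged sketch:
      
      ```python
      # With basePath set to the stream's root, files under dir/col=X/ are read
      # without inferring a partition column from the col=X directory name.
      df = sqlContext.read.option("basePath", "dir/col=X/").text("dir/col=X/")
      ```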
      
      ## How was this patch tested?
      
      New unit test
      
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      
      Closes #12517 from tdas/SPARK-14741.
    • [SPARK-14478][ML][MLLIB][DOC] Doc that StandardScaler uses the corrected sample std · acc7e592
      Joseph K. Bradley authored
      ## What changes were proposed in this pull request?
      
      Currently, MLlib's StandardScaler scales columns using the corrected standard deviation (sqrt of unbiased variance). This matches what R's `scale` function does.
      
      This PR documents this fact.
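      Concretely, for a column of values x_1, ..., x_n, the scaling divides by the corrected sample standard deviation, i.e. the (n - 1) denominator rather than n (in LaTeX):
      
      ```
      s = \sqrt{ \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})^2 },
      \qquad \bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i
      ```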
      
      ## How was this patch tested?
      
      doc only
      
      Author: Joseph K. Bradley <joseph@databricks.com>
      
      Closes #12519 from jkbradley/scaler-variance-doc.
    • [MINOR][ML][PYSPARK] Fix omissive param setters which should use _set method · 08f84d7a
      Yanbo Liang authored
      ## What changes were proposed in this pull request?
      #11939 makes Python param setters use the `_set` method. This PR fixes the omitted ones.
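      The pattern being enforced, sketched on a hypothetical `maxIter` param:
      
      ```python
      def setMaxIter(self, value):
          # Route through _set so the param's TypeConverter and validation
          # run in one place, instead of writing to the param map directly.
          return self._set(maxIter=value)
      ```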
      
      ## How was this patch tested?
      Existing tests.
      
      cc jkbradley sethah
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #12531 from yanboliang/setters-omissive.
    • [SPARK-14725][CORE] Remove HttpServer class · 90cbc82f
      jerryshao authored
      ## What changes were proposed in this pull request?
      
      Since internal file/jar/class transmission has moved to the RPC layer, no code uses `HttpServer` anymore, so this PR proposes to remove the class.
      
      ## How was this patch tested?
      
      Unit test is verified locally.
      
      Author: jerryshao <sshao@hortonworks.com>
      
      Closes #12526 from jerryshao/SPARK-14725.
    • [SPARK-14742][DOCS] Redirect spark-ec2 doc to new location · b4e76a9a
      Sean Owen authored
      ## What changes were proposed in this pull request?
      
      Restore `ec2-scripts.md` as a redirect to amplab/spark-ec2 docs
      
      ## How was this patch tested?
      
      `jekyll build` and checked with the browser
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #12534 from srowen/SPARK-14742.
    • [SPARK-14555] First cut of Python API for Structured Streaming · 80bf48f4
      Burak Yavuz authored
      ## What changes were proposed in this pull request?
      
      This patch provides a first cut of python APIs for structured streaming. This PR provides the new classes:
       - ContinuousQuery
       - Trigger
       - ProcessingTime
      in pyspark under `pyspark.sql.streaming`.
      
      In addition, it contains the new methods added under:
       -  `DataFrameWriter`
           a) `startStream`
           b) `trigger`
           c) `queryName`
      
       -  `DataFrameReader`
           a) `stream`
      
       - `DataFrame`
          a) `isStreaming`
      
      This PR doesn't contain all methods exposed for `ContinuousQuery`, for example:
       - `exception`
       - `sourceStatuses`
       - `sinkStatus`
      
      They may be added in a follow up.
      
      This PR also contains some very minor doc fixes on the Scala side.
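      A rough end-to-end sketch of how these pieces might compose (paths are placeholders, and the exact signatures are assumptions inferred from the method names above):
      
      ```python
      df = sqlContext.read.format("text").stream("/input/dir")  # DataFrameReader.stream
      df.isStreaming                                            # True for streaming DataFrames
      
      query = (df.write
                 .format("parquet")
                 .queryName("my_stream")                 # DataFrameWriter.queryName
                 .trigger(processingTime="10 seconds")   # DataFrameWriter.trigger
                 .startStream("/output/dir"))            # returns a ContinuousQuery
      ```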
      
      ## How was this patch tested?
      
      Python doc tests
      
      TODO:
       - [ ] verify Python docs look good
      
      Author: Burak Yavuz <brkyvz@gmail.com>
      Author: Burak Yavuz <burak@databricks.com>
      
      Closes #12320 from brkyvz/stream-python.
    • [SPARK-8171][WEB UI] Javascript based infinite scrolling for the log page · 83427788
      Alex Bozarth authored
      Updated the log page by replacing the current pagination with a JavaScript-based infinite scroll solution.
      
      Author: Alex Bozarth <ajbozart@us.ibm.com>
      
      Closes #10910 from ajbozarth/spark8171.
    • [SPARK-14635][ML] Documentation and Examples for TF-IDF only refer to HashingTF · ed9d8038
      Yuhao Yang authored
      ## What changes were proposed in this pull request?
      
      Currently, the docs for TF-IDF only refer to using HashingTF with IDF. However, CountVectorizer can also be used. We should probably amend the user guide and examples to show this.
      
      ## How was this patch tested?
      
      unit tests and doc generation
      
      Author: Yuhao Yang <hhbyyh@gmail.com>
      
      Closes #12454 from hhbyyh/tfdoc.
    • [SPARK-14687][CORE][SQL][MLLIB] Call path.getFileSystem(conf) instead of call FileSystem.get(conf) · 17db4bfe
      Liwei Lin authored
      ## What changes were proposed in this pull request?
      
      - replaced `FileSystem.get(conf)` calls with `path.getFileSystem(conf)`, since `FileSystem.get(conf)` always returns the default filesystem and resolves the wrong one when a path lives elsewhere (e.g. an S3 path when the default is HDFS)
      
      ## How was this patch tested?
      
      N/A
      
      Author: Liwei Lin <lwlin7@gmail.com>
      
      Closes #12450 from lw-lin/fix-fs-get.
    • [SPARK-14679][UI] Fix UI DAG visualization OOM. · a3451119
      Ryan Blue authored
      ## What changes were proposed in this pull request?
      
      The DAG visualization can cause an OOM when generating the DOT file.
      This happens because clusters are not correctly deduped by a `contains`
      check, since they use the default equals implementation. This patch adds
      a working equals implementation.
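      An illustrative Python analogue of the fix (the actual change is in Spark's Scala UI code; this sketch only shows how value-based equality makes the dedup check work):
      
      ```python
      class Cluster(object):
          # With the default identity equality, two logically equal clusters
          # never matched a `contains` check, so duplicates accumulated until
          # DOT-file generation ran out of memory.
          def __init__(self, cluster_id):
              self.id = cluster_id
      
          def __eq__(self, other):
              return isinstance(other, Cluster) and self.id == other.id
      
          def __hash__(self):
              return hash(self.id)
      ```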
      
      ## How was this patch tested?
      
      This adds a test suite that checks the new equals implementation.
      
      Author: Ryan Blue <blue@apache.org>
      
      Closes #12437 from rdblue/SPARK-14679-fix-ui-oom.
    • [SPARK-9013][SQL] generate MutableProjection directly instead of return a function · 7abe9a65
      Wenchen Fan authored
      `MutableProjection` is not thread-safe and we won't use it in multiple threads. I think the reason that we return `() => MutableProjection` is not about thread safety, but to save the cost of generating code when we need identical but separate mutable projections.
      
      However, I only found one place that uses this [feature](https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/Window.scala#L122-L123), and compared to the trouble it brings, I think we should generate `MutableProjection` directly instead of returning a function.
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #7373 from cloud-fan/project.
    • [SPARK-14639] [PYTHON] [R] Add `bround` function in Python/R. · 14869ae6
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      This issue aims to expose the Scala `bround` function in the Python/R APIs.
      The `bround` function is implemented in SPARK-14614 by extending the current `round` function.
      We used the following semantics from Hive.
      ```java
      public static double bround(double input, int scale) {
          if (Double.isNaN(input) || Double.isInfinite(input)) {
            return input;
          }
          return BigDecimal.valueOf(input).setScale(scale, RoundingMode.HALF_EVEN).doubleValue();
      }
      ```
      
      After this PR, `pyspark` and `sparkR` also support `bround` function.
      
      **PySpark**
      ```python
      >>> from pyspark.sql.functions import bround
      >>> sqlContext.createDataFrame([(2.5,)], ['a']).select(bround('a', 0).alias('r')).collect()
      [Row(r=2.0)]
      ```
      
      **SparkR**
      ```r
      > df = createDataFrame(sqlContext, data.frame(x = c(2.5, 3.5)))
      > head(collect(select(df, bround(df$x, 0))))
        bround(x, 0)
      1            2
      2            4
      ```
      
      ## How was this patch tested?
      
      Pass the Jenkins tests (including new testcases).
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #12509 from dongjoon-hyun/SPARK-14639.
  3. Apr 19, 2016
    • [MINOR] [SQL] Re-enable `explode()` and `json_tuple()` testcases in ExpressionToSQLSuite · 6f1ec1f2
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      Since [SPARK-12719: SQL Generation supports for generators](https://issues.apache.org/jira/browse/SPARK-12719) was resolved, this PR enables the related testcases: `explode()` and `json_tuple()`.
      
      ## How was this patch tested?
      
      Pass the Jenkins tests (with re-enabled test cases).
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #12329 from dongjoon-hyun/minor_enable_testcases.
    • [SPARK-14600] [SQL] Push predicates through Expand · 856bc465
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      https://issues.apache.org/jira/browse/SPARK-14600
      
      This PR makes `Expand.output` have different attributes from the grouping attributes produced by the underlying `Project`, as they have different meanings, so that we can safely push filters down through `Expand`.
      
      ## How was this patch tested?
      
      existing tests.
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #12496 from cloud-fan/expand.
    • [SPARK-14704][CORE] create accumulators in TaskMetrics · 85d759ca
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      Before this PR, we create accumulators on the driver side (and register them) and send them to the executor side, then we create `TaskMetrics` with these accumulators on the executor side.
      After this PR, we create `TaskMetrics` on the driver side and send it to the executor side, so that we can create the accumulators inside `TaskMetrics` directly, which is cleaner.
      
      ## How was this patch tested?
      
      existing tests.
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #12472 from cloud-fan/acc.
    • [SPARK-13419] [SQL] Update SubquerySuite to use checkAnswer for validation · 78b38109
      Luciano Resende authored
      ## What changes were proposed in this pull request?
      
      Change SubquerySuite to validate test results using the checkAnswer helper method.
      
      ## How was this patch tested?
      
      Existing tests
      
      Author: Luciano Resende <lresende@apache.org>
      
      Closes #12269 from lresende/SPARK-13419.
    • [SPARK-13905][SPARKR] Change signature of as.data.frame() to be consistent with the R base package. · 8eedf0b5
      Sun Rui authored
      ## What changes were proposed in this pull request?
      
      Change the signature of as.data.frame() to be consistent with that in the R base package, to match R users' expectations.
      
      ## How was this patch tested?
      dev/lint-r
      SparkR unit tests
      
      Author: Sun Rui <rui.sun@intel.com>
      
      Closes #11811 from sun-rui/SPARK-13905.
    • [SPARK-14705][YARN] support Multiple FileSystem for YARN STAGING DIR · 4514aebd
      Lianhui Wang authored
      ## What changes were proposed in this pull request?
      SPARK-13063 made the Spark YARN staging dir configurable, but it only supports the default FileSystem. When there are many clusters, each cluster can use a different FileSystem.
      
      ## How was this patch tested?
      I have tested it successfully with the following commands:
      MASTER=yarn-client ./bin/spark-shell --conf spark.yarn.stagingDir=hdfs:namenode2/temp
      $SPARK_HOME/bin/spark-submit --conf spark.yarn.stagingDir=hdfs:namenode2/temp
      
      cc tgravescs vanzin andrewor14
      
      Author: Lianhui Wang <lianhuiwang09@gmail.com>
      
      Closes #12473 from lianhuiwang/SPARK-14705.
    • [SPARK-13929] Use Scala reflection for UDTs · 3ae25f24
      Joan authored
      ## What changes were proposed in this pull request?
      
      Enable ScalaReflection and User Defined Types for plain Scala classes.
      
      This involves the move of `schemaFor` from the `ScalaReflection` trait (which covers both runtime and compile-time (macro) reflection) to the `ScalaReflection` object (runtime reflection only), as I believe this code wouldn't work at compile time anyway, since it manipulates `Class`es that are not compiled yet.
      
      ## How was this patch tested?
      
      Unit test
      
      Author: Joan <joan@goyeau.com>
      
      Closes #12149 from joan38/SPARK-13929-Scala-reflection.