  1. Sep 12, 2017
    • [SPARK-21368][SQL] TPCDSQueryBenchmark can't refer query files. · b9b54b1c
      sarutak authored
      ## What changes were proposed in this pull request?
      
      TPCDSQueryBenchmark doesn't work with spark-submit when it is packaged into a jar.
      This is because it fails to locate the query files inside the jar file.
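
      A conceptual sketch of the underlying issue: files bundled inside a jar cannot be opened by a filesystem path and have to be loaded as classpath resources instead (the resource name below is illustrative, not the benchmark's actual layout):

      ```scala
      // Load a bundled query file as a classpath resource rather than from a file path.
      val in = Thread.currentThread().getContextClassLoader.getResourceAsStream("tpcds/q1.sql") // illustrative resource name
      val queryText = scala.io.Source.fromInputStream(in).mkString
      ```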
      
      ## How was this patch tested?
      
      Ran the benchmark.
      
      Author: sarutak <sarutak@oss.nttdata.co.jp>
      Author: Kousuke Saruta <sarutak@oss.nttdata.co.jp>
      
      Closes #18592 from sarutak/fix-tpcds-benchmark.
    • [SPARK-21027][ML][PYTHON] Added tunable parallelism to one vs. rest in both Scala mllib and Pyspark · 720c94fe
      Ajay Saini authored
      ## What changes were proposed in this pull request?
      
      Added tunable parallelism to the pyspark implementation of one vs. rest classification. Added a parallelism parameter to the Scala implementation of one vs. rest along with functionality for using the parameter to tune the level of parallelism.
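
      A minimal sketch of the new Scala knob, assuming the parameter is exposed through a `setParallelism` setter (the classifier and value below are illustrative):

      ```scala
      import org.apache.spark.ml.classification.{LogisticRegression, OneVsRest}

      // Train the per-class binary models with up to 4 concurrent Spark jobs.
      val ovr = new OneVsRest()
        .setClassifier(new LogisticRegression())
        .setParallelism(4)
      ```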
      
      I am taking over PR #18281 because the original author is busy, but we need to merge this PR soon.
      Once this is merged, we can close #18281.
      
      ## How was this patch tested?
      
      Test suite added.
      
      Author: Ajay Saini <ajays725@gmail.com>
      Author: WeichenXu <weichen.xu@databricks.com>
      
      Closes #19110 from WeichenXu123/spark-21027.
    • [SPARK-17642][SQL] support DESC EXTENDED/FORMATTED table column commands · 515910e9
      Zhenhua Wang authored
      ## What changes were proposed in this pull request?
      
      Support the DESC [EXTENDED | FORMATTED] TABLE COLUMN command to show column-level statistics.
      Describing nested columns is NOT supported.
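
      A rough usage sketch (the table and column names are made up; column statistics have to be collected first):

      ```scala
      // Collect column-level statistics, then describe a single column.
      spark.sql("ANALYZE TABLE customers COMPUTE STATISTICS FOR COLUMNS customer_id")
      spark.sql("DESC EXTENDED customers customer_id").show(truncate = false)
      // The output is expected to include entries such as min, max, num_nulls,
      // distinct_count, avg_col_len and max_col_len for the described column.
      ```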
      
      ## How was this patch tested?
      
      Added test cases.
      
      Author: Zhenhua Wang <wzh_zju@163.com>
      Author: Zhenhua Wang <wangzhenhua@huawei.com>
      Author: wangzhenhua <wangzhenhua@huawei.com>
      
      Closes #16422 from wzhfy/descColumn.
    • [DOCS] Fix unreachable links in the document · 95755823
      Kousuke Saruta authored
      ## What changes were proposed in this pull request?
      
      Recently, I found two unreachable links in the documentation and fixed them.
      Because these are small documentation-only changes, I haven't filed a JIRA issue, but please let me know if you think I should.
      
      ## How was this patch tested?
      
      Tested manually.
      
      Author: Kousuke Saruta <sarutak@oss.nttdata.co.jp>
      
      Closes #19195 from sarutak/fix-unreachable-link.
    • [SPARK-21610][SQL][FOLLOWUP] Corrupt records are not handled properly when creating a dataframe from a file · 7d0a3ef4
      Jen-Ming Chung authored
      
      ## What changes were proposed in this pull request?
      
      When the `requiredSchema` only contains `_corrupt_record`, the derived `actualSchema` is empty and `_corrupt_record` is null for all rows. This PR detects this situation and raises an exception with a reasonable workaround message so that users know what happened and how to fix the query.
      
      ## How was this patch tested?
      
      Added unit test in `CSVSuite`.
      
      Author: Jen-Ming Chung <jenmingisme@gmail.com>
      
      Closes #19199 from jmchung/SPARK-21610-FOLLOWUP.
    • [SPARK-14516][ML] Adding ClusteringEvaluator with the implementation of Cosine silhouette and squared Euclidean silhouette. · dd781675
      Marco Gaido authored
      
      ## What changes were proposed in this pull request?
      
      This PR adds the ClusteringEvaluator Evaluator which contains two metrics:
       - **cosineSilhouette**: the Silhouette measure using the cosine distance;
       - **squaredSilhouette**: the Silhouette measure using the squared Euclidean distance.
      
      The implementation of the two metrics follows the algorithm proposed and explained [here](https://drive.google.com/file/d/0B0Hyo%5f%5fbG%5f3fdkNvSVNYX2E3ZU0/view). These algorithms were designed for a distributed and parallel environment, so they have reasonable performance, unlike a naive Silhouette implementation that follows the definition directly.
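
      For reference, the standard Silhouette value of a point i is s(i) = (b(i) − a(i)) / max(a(i), b(i)), where a(i) is the mean distance from i to the other points of its own cluster and b(i) is the mean distance from i to the points of the nearest other cluster; the two metrics above compute this using the cosine and the squared Euclidean distance, respectively.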
      
      ## How was this patch tested?
      
      The patch has been tested with the additional unit tests added (comparing the results with the ones provided by [Python sklearn library](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.silhouette_score.html)).
      
      Author: Marco Gaido <mgaido@hortonworks.com>
      
      Closes #18538 from mgaido91/SPARK-14516.
    • [SPARK-21976][DOC] Fix wrong documentation for Mean Absolute Error. · e2ac2f1c
      FavioVazquez authored
      ## What changes were proposed in this pull request?
      
      Fixed wrong documentation for Mean Absolute Error.
      
      Even though the code is correct for the MAE:
      
      ```scala
      @Since("1.2.0")
      def meanAbsoluteError: Double = {
        summary.normL1(1) / summary.count
      }
      ```
      In the documentation, the division by N is missing (the correct definition is MAE = (1/N) * Σ |y_i − ŷ_i|).
      
      ## How was this patch tested?
      
      All of the Spark tests were run.
      
      Please review http://spark.apache.org/contributing.html before opening a pull request.
      
      Author: FavioVazquez <favio.vazquezp@gmail.com>
      Author: faviovazquez <favio.vazquezp@gmail.com>
      Author: Favio André Vázquez <favio.vazquezp@gmail.com>
      
      Closes #19190 from FavioVazquez/mae-fix.
  2. Sep 11, 2017
    • [MINOR][SQL] Remove unused import classes · dc74c0e6
      caoxuewen authored
      ## What changes were proposed in this pull request?
      
      This PR removes imported classes that are unused.
      
      ## How was this patch tested?
      
      N/A
      
      Author: caoxuewen <cao.xuewen@zte.com.cn>
      
      Closes #19131 from heary-cao/unuse_import.
    • [SPARK-21856] Add probability and rawPrediction to MLPC for Python · 4bab8f59
      Chunsheng Ji authored
      Probability and rawPrediction have been added to MultilayerPerceptronClassifier for Python.

      Added a unit test.
      
      Author: Chunsheng Ji <chunsheng.ji@gmail.com>
      
      Closes #19172 from chunshengji/SPARK-21856.
  3. Sep 10, 2017
    • [BUILD][TEST][SPARKR] add sparksubmitsuite to appveyor tests · 828fab03
      Felix Cheung authored
      ## What changes were proposed in this pull request?
      
      more file regex
      
      ## How was this patch tested?
      
      Jenkins, AppVeyor
      
      Author: Felix Cheung <felixcheung_m@hotmail.com>
      
      Closes #19177 from felixcheung/rmoduletotest.
    • [SPARK-21610][SQL] Corrupt records are not handled properly when creating a dataframe from a file · 6273a711
      Jen-Ming Chung authored
      ## What changes were proposed in this pull request?
      ```
      echo '{"field": 1}
      {"field": 2}
      {"field": "3"}' >/tmp/sample.json
      ```
      
      ```scala
      import org.apache.spark.sql.types._
      
      val schema = new StructType()
        .add("field", ByteType)
        .add("_corrupt_record", StringType)
      
      val file = "/tmp/sample.json"
      
      val dfFromFile = spark.read.schema(schema).json(file)
      
      scala> dfFromFile.show(false)
      +-----+---------------+
      |field|_corrupt_record|
      +-----+---------------+
      |1    |null           |
      |2    |null           |
      |null |{"field": "3"} |
      +-----+---------------+
      
      scala> dfFromFile.filter($"_corrupt_record".isNotNull).count()
      res1: Long = 0
      
      scala> dfFromFile.filter($"_corrupt_record".isNull).count()
      res2: Long = 3
      ```
      When the `requiredSchema` only contains `_corrupt_record`, the derived `actualSchema` is empty and `_corrupt_record` is null for all rows. This PR detects this situation and raises an exception with a reasonable workaround message so that users know what happened and how to fix the query.
      
      ## How was this patch tested?
      
      Added test case.
      
      Author: Jen-Ming Chung <jenmingisme@gmail.com>
      
      Closes #18865 from jmchung/SPARK-21610.
    • [SPARK-20098][PYSPARK] dataType's typeName fix · 520d92a1
      Peter Szalai authored
      ## What changes were proposed in this pull request?
      The `typeName` classmethod has been fixed by using a type -> typeName map.
      
      ## How was this patch tested?
      local build
      
      Author: Peter Szalai <szalaipeti.vagyok@gmail.com>
      
      Closes #17435 from szalai1/datatype-gettype-fix.
  4. Sep 09, 2017
    • [SPARK-4131] Support "Writing data into the filesystem from queries" · f7679055
      Jane Wang authored
      ## What changes were proposed in this pull request?
      
      This PR implements the sql feature:
      INSERT OVERWRITE [LOCAL] DIRECTORY directory1
        [ROW FORMAT row_format] [STORED AS file_format]
        SELECT ... FROM ...
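
      A rough usage sketch of the new syntax (the directory, format and query below are made up; the Hive-serde variant assumes Hive support is enabled):

      ```scala
      // Write the result of a query directly to a local filesystem directory.
      spark.sql("""
        INSERT OVERWRITE LOCAL DIRECTORY '/tmp/sales_export'
        STORED AS parquet
        SELECT id, amount FROM sales WHERE year = 2017
      """)
      ```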
      
      ## How was this patch tested?
      Added new unit tests and also pulled the code into fb-spark so that we could test writing to an HDFS directory.
      
      Author: Jane Wang <janewang@fb.com>
      
      Closes #18975 from janewangfb/port_local_directory.
    • [MINOR][SQL] Correct DataFrame doc. · e4d8f9a3
      Yanbo Liang authored
      ## What changes were proposed in this pull request?
      Correct DataFrame doc.
      
      ## How was this patch tested?
      Only doc change, no tests.
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #19173 from yanboliang/df-doc.
    • [SPARK-21954][SQL] JacksonUtils should verify MapType's value type instead of key type · 6b45d7e9
      Liang-Chi Hsieh authored
      ## What changes were proposed in this pull request?
      
      `JacksonUtils.verifySchema` verifies if a data type can be converted to JSON. For `MapType`, it now verifies the key type. However, in `JacksonGenerator`, when converting a map to JSON, we only care about its values and create a writer for the values. The keys in a map are treated as strings by calling `toString` on the keys.
      
      Thus, we should change `JacksonUtils.verifySchema` to verify the value type of `MapType`.
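
      A small sketch of why only the value type matters, assuming a Spark version in which `to_json` accepts map columns (run for example in spark-shell, where `spark` is the active SparkSession):

      ```scala
      import org.apache.spark.sql.functions.{col, lit, map, to_json}

      // The integer keys are rendered via toString, so only the value type has to be
      // representable in JSON; this is expected to produce {"1":"a","2":"b"}.
      val df = spark.range(1).select(map(lit(1), lit("a"), lit(2), lit("b")).as("m"))
      df.select(to_json(col("m"))).show(false)
      ```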
      
      ## How was this patch tested?
      
      Added tests.
      
      Author: Liang-Chi Hsieh <viirya@gmail.com>
      
      Closes #19167 from viirya/test-jacksonutils.
    • [SPARK-21941] Stop storing unused attemptId in SQLTaskMetrics · 8a5eb506
      Andrew Ash authored
      ## What changes were proposed in this pull request?
      
      In a driver heap dump containing 390,105 instances of SQLTaskMetrics this
      would have saved me approximately 3.2MB of memory.
      
      Since we're not getting any benefit from storing this unused value, let's
      eliminate it until a future PR makes use of it.
      
      ## How was this patch tested?
      
      Existing unit tests
      
      Author: Andrew Ash <andrew@andrewash.com>
      
      Closes #19153 from ash211/aash/trim-sql-listener.
  5. Sep 08, 2017
    • [SPARK-19866][ML][PYSPARK] Add local version of Word2Vec findSynonyms for spark.ml: Python API · 31c74fec
      Xin Ren authored
      https://issues.apache.org/jira/browse/SPARK-19866
      
      ## What changes were proposed in this pull request?
      
      Add a Python API for findSynonymsArray matching the Scala API.
      
      ## How was this patch tested?
      
      Manual test
      `./python/run-tests --python-executables=python2.7 --modules=pyspark-ml`
      
      Author: Xin Ren <iamshrek@126.com>
      Author: Xin Ren <renxin.ubc@gmail.com>
      Author: Xin Ren <keypointt@users.noreply.github.com>
      
      Closes #17451 from keypointt/SPARK-19866.
    • [SPARK-15243][ML][SQL][PYTHON] Add missing support for unicode in Param methods & functions in dataframe · 8598d03a
      hyukjinkwon authored
      
      ## What changes were proposed in this pull request?
      
      This PR proposes to support unicode strings in Param methods in ML and in the other DataFrame functions that were missing it.
      
      For example, this causes a `ValueError` in Python 2.x when param is a unicode string:
      
      ```python
      >>> from pyspark.ml.classification import LogisticRegression
      >>> lr = LogisticRegression()
      >>> lr.hasParam("threshold")
      True
      >>> lr.hasParam(u"threshold")
      Traceback (most recent call last):
       ...
          raise TypeError("hasParam(): paramName must be a string")
      TypeError: hasParam(): paramName must be a string
      ```
      
      This PR is based on https://github.com/apache/spark/pull/13036
      
      ## How was this patch tested?
      
      Unit tests in `python/pyspark/ml/tests.py` and `python/pyspark/sql/tests.py`.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      Author: sethah <seth.hendrickson16@gmail.com>
      
      Closes #17096 from HyukjinKwon/SPARK-15243.
    • [SPARK-21946][TEST] fix flaky test: "alter table: rename cached table" in InMemoryCatalogedDDLSuite · 8a4f228d
      Kazuaki Ishizaki authored
      ## What changes were proposed in this pull request?
      
      This PR fixes the flaky test `InMemoryCatalogedDDLSuite "alter table: rename cached table"`.
      Since this test validates a distributed DataFrame, the result should be checked with `checkAnswer`. The original version compared the result of `df.collect()` as a `Seq`, which does not guarantee the order of the elements in the result.
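
      For illustration, inside a Spark `QueryTest` suite an order-insensitive check looks roughly like this (table name and expected rows are hypothetical; `Row` comes from `org.apache.spark.sql.Row`):

      ```scala
      // checkAnswer compares the result set regardless of row order, unlike a direct
      // comparison of df.collect() against an expected sequence.
      checkAnswer(
        spark.table("renamed_cached_table"),
        Seq(Row(1, "a"), Row(2, "b")))
      ```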
      
      ## How was this patch tested?
      
      Use existing test case
      
      Author: Kazuaki Ishizaki <ishizaki@jp.ibm.com>
      
      Closes #19159 from kiszk/SPARK-21946.
    • [SPARK-21726][SQL][FOLLOW-UP] Check for structural integrity of the plan in Optimizer in test mode · 0dfc1ec5
      Liang-Chi Hsieh authored
      ## What changes were proposed in this pull request?
      
      The condition in `Optimizer.isPlanIntegral` is wrong. We should always return `true` if not in test mode.
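
      A conceptual sketch of the corrected condition (not necessarily the exact code in the PR):

      ```scala
      import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
      import org.apache.spark.util.Utils

      // Outside of test mode the structural-integrity check is a no-op (always true);
      // in test mode the plan must still be resolved after each optimizer rule.
      def isPlanIntegral(plan: LogicalPlan): Boolean =
        !Utils.isTesting || plan.resolved
      ```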
      
      ## How was this patch tested?
      
      Tested manually.
      
      Author: Liang-Chi Hsieh <viirya@gmail.com>
      
      Closes #19161 from viirya/SPARK-21726-followup.
    • [SPARK-21936][SQL] backward compatibility test framework for HiveExternalCatalog · dbb82412
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      `HiveExternalCatalog` is a semi-public interface. When creating tables, `HiveExternalCatalog` converts the table metadata to the Hive table format and saves it into the Hive metastore. It's very important to guarantee backward compatibility here, i.e., tables created by previous Spark versions should still be readable in newer Spark versions.
      
      Previously we found backward compatibility issues manually, which makes it easy to miss bugs. This PR introduces a test framework that automatically tests `HiveExternalCatalog` backward compatibility, by downloading Spark binaries of different versions, creating tables with those Spark versions, and reading these tables with the current Spark version.
      
      ## How was this patch tested?
      
      test-only change
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #19148 from cloud-fan/test.
    • [SPARK-21726][SQL] Check for structural integrity of the plan in Optimizer in test mode. · 6e37524a
      Liang-Chi Hsieh authored
      ## What changes were proposed in this pull request?
      
      We now have many optimization rules in the `Optimizer`, but we don't have any checks in the optimizer for the structural integrity of the plan (e.g., that it stays resolved). When debugging, it is difficult to identify which rules return invalid plans.
      
      It would be great if in test mode, we can check whether a plan is still resolved after the execution of each rule, so we can catch rules that return invalid plans.
      
      ## How was this patch tested?
      
      Added tests.
      
      Author: Liang-Chi Hsieh <viirya@gmail.com>
      
      Closes #18956 from viirya/SPARK-21726.
    • [SPARK-21949][TEST] Tables created in unit tests should be dropped after use · f62b20f3
      liuxian authored
      ## What changes were proposed in this pull request?
       Tables should be dropped after use in unit tests.
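
      A sketch of the usual pattern with Spark's SQL test helpers (the table name is made up):

      ```scala
      // withTable drops the table at the end of the block, even if an assertion fails,
      // so the test doesn't leak tables into later test cases.
      withTable("t_drop_after_use") {
        sql("CREATE TABLE t_drop_after_use(i INT) USING parquet")
        // ... assertions against the table ...
      }
      ```
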
      ## How was this patch tested?
      N/A
      
      Author: liuxian <liu.xian3@zte.com.cn>
      
      Closes #19155 from 10110346/droptable.
    • [SPARK-21950][SQL][PYTHON][TEST] pyspark.sql.tests.SQLTests2 should stop SparkContext. · 57bc1e9e
      Takuya UESHIN authored
      ## What changes were proposed in this pull request?
      
      `pyspark.sql.tests.SQLTests2` doesn't stop the newly created SparkContext in the test, which might affect the following tests.
      This pr makes `pyspark.sql.tests.SQLTests2` stop `SparkContext`.
      
      ## How was this patch tested?
      
      Existing tests.
      
      Author: Takuya UESHIN <ueshin@databricks.com>
      
      Closes #19158 from ueshin/issues/SPARK-21950.
  6. Sep 07, 2017
    • [SPARK-21939][TEST] Use TimeLimits instead of Timeouts · c26976fe
      Dongjoon Hyun authored
      Since ScalaTest 3.0.0, `org.scalatest.concurrent.Timeouts` is deprecated.
      This PR replaces the deprecated one with `org.scalatest.concurrent.TimeLimits`.
      
      ```scala
      -import org.scalatest.concurrent.Timeouts._
      +import org.scalatest.concurrent.TimeLimits._
      ```
      
      Pass the existing test suites.
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #19150 from dongjoon-hyun/SPARK-21939.
      
      Change-Id: I1a1b07f1b97e51e2263dfb34b7eaaa099b2ded5e
    • [SPARK-13656][SQL] Delete spark.sql.parquet.cacheMetadata from SQLConf and docs · e00f1a1d
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      Since [SPARK-15639](https://github.com/apache/spark/pull/13701), `spark.sql.parquet.cacheMetadata` and `PARQUET_CACHE_METADATA` are not used. This PR removes them from SQLConf and the docs.
      
      ## How was this patch tested?
      
      Pass the existing Jenkins.
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #19129 from dongjoon-hyun/SPARK-13656.
    • [SPARK-21890] Credentials not being passed to add the tokens · b9ab791a
      Sanket Chintapalli authored
      I observed this while running an Oozie job trying to connect to HBase via Spark.
      It looks like the credentials are not being passed in https://github.com/apache/spark/blob/branch-2.2/resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/security/HadoopFSCredentialProvider.scala#L53 for the 2.2 release.
      More info as to why it fails on a secure grid:
      The Oozie client gets the necessary tokens the application needs before launching. It passes those tokens along to the Oozie launcher job (an MR job), which then actually calls the Spark client to launch the Spark app and passes the tokens along.
      The Oozie launcher job cannot get any more tokens because all it has is tokens (you can't get tokens with tokens; you need a TGT or a keytab).
      The error here occurs because the launcher job runs the Spark client to submit the Spark job, but the Spark client doesn't see that it already has the HDFS tokens, so it tries to get more, which ends with the exception.
      There was a change with SPARK-19021 to generalize the HDFS credentials provider; it changed things so that we don't pass the existing credentials into the call to get tokens, so it doesn't realize it already has the necessary tokens.
      
      https://issues.apache.org/jira/browse/SPARK-21890
      Modified to pass credentials when getting delegation tokens.
      
      Author: Sanket Chintapalli <schintap@yahoo-inc.com>
      
      Closes #19140 from redsanket/SPARK-21890-master.
    • [SPARK-21912][SQL] ORC/Parquet table should not create invalid column names · eea2b877
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      Currently, users hit job abortions while creating or altering ORC/Parquet tables with invalid column names. We had better prevent this by raising an **AnalysisException** with a guide to use aliases instead, as Parquet data source tables do.
      
      **BEFORE**
      ```scala
      scala> sql("CREATE TABLE orc1 USING ORC AS SELECT 1 `a b`")
      17/09/04 13:28:21 ERROR Utils: Aborting task
      java.lang.IllegalArgumentException: Error: : expected at the position 8 of 'struct<a b:int>' but ' ' is found.
      17/09/04 13:28:21 ERROR FileFormatWriter: Job job_20170904132821_0001 aborted.
      17/09/04 13:28:21 ERROR Executor: Exception in task 0.0 in stage 1.0 (TID 1)
      org.apache.spark.SparkException: Task failed while writing rows.
      ```
      
      **AFTER**
      ```scala
      scala> sql("CREATE TABLE orc1 USING ORC AS SELECT 1 `a b`")
      17/09/04 13:27:40 ERROR CreateDataSourceTableAsSelectCommand: Failed to write to table orc1
      org.apache.spark.sql.AnalysisException: Attribute name "a b" contains invalid character(s) among " ,;{}()\n\t=". Please use alias to rename it.;
      ```
      
      ## How was this patch tested?
      
      Pass the Jenkins with a new test case.
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #19124 from dongjoon-hyun/SPARK-21912.
    • [SPARK-21835][SQL][FOLLOW-UP] RewritePredicateSubquery should not produce unresolved query plans · ce7293c1
      Liang-Chi Hsieh authored
      ## What changes were proposed in this pull request?
      
      This is a follow-up of #19050 to deal with `ExistenceJoin` case.
      
      ## How was this patch tested?
      
      Added test.
      
      Author: Liang-Chi Hsieh <viirya@gmail.com>
      
      Closes #19151 from viirya/SPARK-21835-followup.
  7. Sep 06, 2017
    • Fixed pandoc dependency issue in python/setup.py · aad21254
      Tucker Beck authored
      ## Problem Description
      
      When pyspark is listed as a dependency of another package, installing
      the other package will cause an install failure in pyspark. When the
      other package is being installed, pyspark's setup_requires requirements
      are installed including pypandoc. Thus, the exception handling on
      setup.py:152 does not work because the pypandoc module is indeed
      available. However, the pypandoc.convert() function fails if pandoc
      itself is not installed (in our use cases it is not). This raises an
      OSError that is not handled, and setup fails.
      
      The following is a sample failure:
      ```
      $ which pandoc
      $ pip freeze | grep pypandoc
      pypandoc==1.4
      $ pip install pyspark
      Collecting pyspark
        Downloading pyspark-2.2.0.post0.tar.gz (188.3MB)
          100% |████████████████████████████████| 188.3MB 16.8MB/s
          Complete output from command python setup.py egg_info:
          Maybe try:
      
              sudo apt-get install pandoc
          See http://johnmacfarlane.net/pandoc/installing.html
          for installation options
          ---------------------------------------------------------------
      
          Traceback (most recent call last):
            File "<string>", line 1, in <module>
            File "/tmp/pip-build-mfnizcwa/pyspark/setup.py", line 151, in <module>
              long_description = pypandoc.convert('README.md', 'rst')
            File "/home/tbeck/.virtualenvs/cem/lib/python3.5/site-packages/pypandoc/__init__.py", line 69, in convert
              outputfile=outputfile, filters=filters)
            File "/home/tbeck/.virtualenvs/cem/lib/python3.5/site-packages/pypandoc/__init__.py", line 260, in _convert_input
              _ensure_pandoc_path()
            File "/home/tbeck/.virtualenvs/cem/lib/python3.5/site-packages/pypandoc/__init__.py", line 544, in _ensure_pandoc_path
              raise OSError("No pandoc was found: either install pandoc and add it\n"
          OSError: No pandoc was found: either install pandoc and add it
          to your PATH or or call pypandoc.download_pandoc(...) or
          install pypandoc wheels with included pandoc.
      
          ----------------------------------------
      Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-mfnizcwa/pyspark/
      ```
      
      ## What changes were proposed in this pull request?
      
      This change simply adds an additional exception handler for the OSError
      that is raised. This allows pyspark to be installed client-side without requiring pandoc to be installed.
      
      ## How was this patch tested?
      
      I tested this by building a wheel package of pyspark with the change applied. Then, in a clean virtual environment with pypandoc installed but pandoc not available on the system, I installed pyspark from the wheel.
      
      Here is the output
      
      ```
      $ pip freeze | grep pypandoc
      pypandoc==1.4
      $ which pandoc
      $ pip install --no-cache-dir ../spark/python/dist/pyspark-2.3.0.dev0-py2.py3-none-any.whl
      Processing /home/tbeck/work/spark/python/dist/pyspark-2.3.0.dev0-py2.py3-none-any.whl
      Requirement already satisfied: py4j==0.10.6 in /home/tbeck/.virtualenvs/cem/lib/python3.5/site-packages (from pyspark==2.3.0.dev0)
      Installing collected packages: pyspark
      Successfully installed pyspark-2.3.0.dev0
      ```
      
      Author: Tucker Beck <tucker.beck@rentrakmail.com>
      
      Closes #18981 from dusktreader/dusktreader/fix-pandoc-dependency-issue-in-setup_py.
    • [SPARK-21901][SS] Define toString for StateOperatorProgress · fa0092bd
      Jacek Laskowski authored
      ## What changes were proposed in this pull request?
      
      Just `StateOperatorProgress.toString` plus a few formatting fixes.
      
      ## How was this patch tested?
      
      Local build. Waiting for OK from Jenkins.
      
      Author: Jacek Laskowski <jacek@japila.pl>
      
      Closes #19112 from jaceklaskowski/SPARK-21901-StateOperatorProgress-toString.
    • [SPARK-21765] Check that optimization doesn't affect isStreaming bit. · acdf45fb
      Jose Torres authored
      ## What changes were proposed in this pull request?
      
      Add an assert in logical plan optimization that the isStreaming bit stays the same, and fix empty relation rules where that wasn't happening.
      
      ## How was this patch tested?
      
      new and existing unit tests
      
      Author: Jose Torres <joseph.torres@databricks.com>
      Author: Jose Torres <joseph-torres@databricks.com>
      
      Closes #19056 from joseph-torres/SPARK-21765-followup.
    • [SPARK-21801][SPARKR][TEST] set random seed for predictable test · 36b48ee6
      Felix Cheung authored
      ## What changes were proposed in this pull request?
      
      set.seed() before running tests
      
      ## How was this patch tested?
      
      jenkins, appveyor
      
      Author: Felix Cheung <felixcheung_m@hotmail.com>
      
      Closes #19111 from felixcheung/rranseed.
    • [SPARK-21835][SQL] RewritePredicateSubquery should not produce unresolved query plans · f2e22aeb
      Liang-Chi Hsieh authored
      ## What changes were proposed in this pull request?
      
      Correlated predicate subqueries are rewritten into `Join` by the rule `RewritePredicateSubquery`  during optimization.
      
      It is possible that the two sides of the `Join` have conflicting attributes. The query plans produced by `RewritePredicateSubquery` then become unresolved and break structural integrity.
      
      We should check if there are conflicting attributes in the `Join` and de-duplicate them by adding a `Project`.
      
      ## How was this patch tested?
      
      Added tests.
      
      Author: Liang-Chi Hsieh <viirya@gmail.com>
      
      Closes #19050 from viirya/SPARK-21835.
    • [SPARK-21903][BUILD][FOLLOWUP] Upgrade scalastyle-maven-plugin and scalastyle as well in POM and SparkBuild.scala · 64936c14
      hyukjinkwon authored
      
      ## What changes were proposed in this pull request?
      
      This PR proposes to match the scalastyle version in the POM and in SparkBuild.scala.
      
      ## How was this patch tested?
      
      Manual builds.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #19146 from HyukjinKwon/SPARK-21903-follow-up.
    • [SPARK-19357][ML] Adding parallel model evaluation in ML tuning · 16c4c03c
      Bryan Cutler authored
      ## What changes were proposed in this pull request?
      Modified `CrossValidator` and `TrainValidationSplit` to be able to evaluate models in parallel for a given parameter grid.  The level of parallelism is controlled by a parameter `numParallelEval` used to schedule a number of models to be trained/evaluated so that the jobs can be run concurrently.  This is a naive approach that does not check the cluster for needed resources, so care must be taken by the user to tune the parameter appropriately.  The default value is `1` which will train/evaluate in serial.
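
      A rough sketch, assuming the parameter is exposed through a `setNumParallelEval` setter as described above (the name may differ in the final API):

      ```scala
      import org.apache.spark.ml.classification.LogisticRegression
      import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
      import org.apache.spark.ml.tuning.{CrossValidator, ParamGridBuilder}

      val lr = new LogisticRegression()
      val grid = new ParamGridBuilder()
        .addGrid(lr.regParam, Array(0.01, 0.1, 1.0))
        .build()

      // Schedule up to 4 models to be trained/evaluated concurrently.
      val cv = new CrossValidator()
        .setEstimator(lr)
        .setEvaluator(new BinaryClassificationEvaluator())
        .setEstimatorParamMaps(grid)
        .setNumFolds(3)
        .setNumParallelEval(4)
      ```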
      
      ## How was this patch tested?
      Added unit tests for CrossValidator and TrainValidationSplit to verify that model selection is the same when run in serial vs parallel.  Manual testing to verify tasks run in parallel when param is > 1. Added parameter usage to relevant examples.
      
      Author: Bryan Cutler <cutlerb@gmail.com>
      
      Closes #16774 from BryanCutler/parallel-model-eval-SPARK-19357.
    • [SPARK-21924][DOCS] Update structured streaming programming guide doc · 4ee7dfe4
      Riccardo Corbella authored
      ## What changes were proposed in this pull request?
      
      Update the line "For example, the data (12:09, cat) is out of order and late, and it falls in windows 12:05 - 12:15 and 12:10 - 12:20." to read "For example, the data (12:09, cat) is out of order and late, and it falls in windows 12:00 - 12:10 and 12:05 - 12:15." in the structured streaming programming guide. With 10-minute windows sliding every 5 minutes, an event at 12:09 belongs to the 12:00 - 12:10 and 12:05 - 12:15 windows.
      
      Author: Riccardo Corbella <r.corbella@reply.it>
      
      Closes #19137 from riccardocorbella/bugfix.
  8. Sep 05, 2017
    • [SPARK-9104][CORE] Expose Netty memory metrics in Spark · 445f1790
      jerryshao authored
      ## What changes were proposed in this pull request?
      
      This PR exposes Netty memory usage for Spark's `TransportClientFactory` and `TransportServer`, including the details of each direct arena and heap arena as well as aggregated metrics. The purpose of adding the Netty metrics is to better understand the memory usage of Netty in Spark shuffle, RPC and other network communication, and to guide us in configuring executor memory sizes.

      This PR doesn't expose these metrics to any sink; to leverage this feature, you still need to connect them to the MetricsSystem or collect them back to the driver for display.
      
      ## How was this patch tested?
      
      Add Unit test to verify it, also manually verified in real cluster.
      
      Author: jerryshao <sshao@hortonworks.com>
      
      Closes #18935 from jerryshao/SPARK-9104.
    • [SPARK-18061][THRIFTSERVER] Add spnego auth support for ThriftServer thrift/http protocol · 6a232544
      jerryshao authored
      Spark ThriftServer doesn't support SPNEGO auth for the thrift/http protocol, which is mainly used in the Knox + ThriftServer scenario. HiveServer2's CLIService already has code to support it, so this PR copies it to the Spark ThriftServer.
      
      Related Hive JIRA HIVE-6697.
      
      Manual verification.
      
      Author: jerryshao <sshao@hortonworks.com>
      
      Closes #18628 from jerryshao/SPARK-21407.
      
      Change-Id: I61ef0c09f6972bba982475084a6b0ae3a74e385e
    • [MINOR][DOC] Update `Partition Discovery` section to enumerate all available file sources · 9e451bcf
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      All built-in data sources support `Partition Discovery`. We should update the document to state this clearly so users can benefit from it.
      
      **AFTER**
      
      <img width="906" alt="1" src="https://user-images.githubusercontent.com/9700541/30083628-14278908-9244-11e7-98dc-9ad45fe233a9.png">
      
      ## How was this patch tested?
      
      ```
      SKIP_API=1 jekyll serve --watch
      ```
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #19139 from dongjoon-hyun/partitiondiscovery.