  1. May 23, 2015
    • Fix install jira-python · a4df0f2d
      Davies Liu authored
      The jira-python package should be installed with
      
        sudo pip install jira
      
      cc pwendell
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #6367 from davies/fix_jira_python2 and squashes the following commits:
      
      fbb3c8e [Davies Liu] Fix install jira-python
      a4df0f2d
    • [SPARK-7840] add insertInto() to Writer · be47af1b
      Davies Liu authored
      Add tests later.
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #6375 from davies/insertInto and squashes the following commits:
      
      826423e [Davies Liu] add insertInto() to Writer
      be47af1b
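      A minimal PySpark sketch of the new writer method (the `sqlContext` handle and the target table name are assumptions for illustration, not from the commit):

      ```python
      # Assumes an existing SQLContext `sqlContext` and a Hive table "events"
      # whose schema matches `df` -- both illustrative.
      df = sqlContext.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])

      # Append the DataFrame's rows into the existing table.
      df.write.insertInto("events")

      # overwrite=True replaces the table contents instead of appending.
      df.write.insertInto("events", overwrite=True)
      ```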
    • [SPARK-7322, SPARK-7836, SPARK-7822][SQL] DataFrame window function related updates · efe3bfdf
      Davies Liu authored
      1. ntile should take an integer as parameter.
      2. Added Python API (based on #6364)
      3. Update documentation of various DataFrame Python functions.
      
      Author: Davies Liu <davies@databricks.com>
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #6374 from rxin/window-final and squashes the following commits:
      
      69004c7 [Reynold Xin] Style fix.
      288cea9 [Reynold Xin] Update documentaiton.
      7cb8985 [Reynold Xin] Merge pull request #6364 from davies/window
      66092b4 [Davies Liu] update docs
      ed73cb4 [Reynold Xin] [SPARK-7322][SQL] Improve DataFrame window function documentation.
      ef55132 [Davies Liu] Merge branch 'master' of github.com:apache/spark into window4
      8936ade [Davies Liu] fix maxint in python 3
      2649358 [Davies Liu] update docs
      778e2c0 [Davies Liu] SPARK-7836 and SPARK-7822: Python API of window functions
      efe3bfdf
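      A short sketch of the Python window-function API touched here, with `ntile` taking a plain integer (the DataFrame and its columns are invented for illustration):

      ```python
      from pyspark.sql import functions as F
      from pyspark.sql.window import Window

      # Hypothetical DataFrame with "dept" and "salary" columns.
      w = Window.partitionBy("dept").orderBy(df["salary"].desc())

      # ntile(4) splits each partition into four ranked buckets.
      df.select("dept", "salary", F.ntile(4).over(w).alias("quartile")).show()
      ```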
    • [SPARK-7777][Streaming] Handle the case when there is no block in a batch · ad0badba
      zsxwing authored
      In the old implementation, if a batch has no block, `areWALRecordHandlesPresent` will be `true` and it will return `WriteAheadLogBackedBlockRDD`.
      
      This PR handles this case by returning `WriteAheadLogBackedBlockRDD` or `BlockRDD` according to the configuration.
      
      Author: zsxwing <zsxwing@gmail.com>
      
      Closes #6372 from zsxwing/SPARK-7777 and squashes the following commits:
      
      788f895 [zsxwing] Handle the case when there is no block in a batch
      ad0badba
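      Which of the two RDD types applies follows the receiver write-ahead-log setting; a sketch of the relevant configuration key (the value shown is illustrative):

      ```python
      from pyspark import SparkConf

      # With the WAL enabled, an empty batch now still yields a
      # WriteAheadLogBackedBlockRDD; with it disabled, a plain BlockRDD.
      conf = SparkConf().set("spark.streaming.receiver.writeAheadLog.enable", "true")
      ```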
    • [SPARK-6811] Copy SparkR lib in make-distribution.sh · a40bca01
      Shivaram Venkataraman authored
      This change also removes native libraries from SparkR to make sure our distribution works across platforms.
      
      Tested by building on Mac and running on Amazon Linux (CentOS) and a Windows VM, and vice versa (built on Linux, run on Mac).
      
      I will also test this with YARN soon and update this PR.
      
      Author: Shivaram Venkataraman <shivaram@cs.berkeley.edu>
      
      Closes #6373 from shivaram/sparkr-binary and squashes the following commits:
      
      ae41b5c [Shivaram Venkataraman] Remove native libraries from SparkR Also include the built SparkR package in make-distribution.sh
      a40bca01
    • [SPARK-6806] [SPARKR] [DOCS] Fill in SparkR examples in programming guide · 7af3818c
      Davies Liu authored
      sqlCtx -> sqlContext
      
      You can check the docs by running:
      
      ```
      $ cd docs
      $ SKIP_SCALADOC=1 jekyll serve
      ```
      cc shivaram
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #5442 from davies/r_docs and squashes the following commits:
      
      7a12ec6 [Davies Liu] remove rdd in R docs
      8496b26 [Davies Liu] remove the docs related to RDD
      e23b9d6 [Davies Liu] delete R docs for RDD API
      222e4ff [Davies Liu] Merge branch 'master' into r_docs
      89684ce [Davies Liu] Merge branch 'r_docs' of github.com:davies/spark into r_docs
      f0a10e1 [Davies Liu] address comments from @shivaram
      f61de71 [Davies Liu] Update pairRDD.R
      3ef7cf3 [Davies Liu] use + instead of function(a,b) a+b
      2f10a77 [Davies Liu] address comments from @cafreeman
      9c2a062 [Davies Liu] mention R api together with Python API
      23f751a [Davies Liu] Fill in SparkR examples in programming guide
      7af3818c
    • [SPARK-5090] [EXAMPLES] The improvement of python converter for hbase · 4583cf4b
      GenTang authored
      Hi,
      
      Following the discussion in http://apache-spark-developers-list.1001551.n3.nabble.com/python-converter-in-HBaseConverter-scala-spark-examples-td10001.html, I made some modifications to three files in the examples package:
      1. HBaseConverters.scala: the new converter converts all the records in an HBase Result into a single string
      2. hbase_input.py: as the value string may contain several records, we can use the ast package to convert the string into a dict (a short parsing sketch follows this entry)
      3. HBaseTest.scala: as the examples package uses HBase 0.98.7, the original HTableDescriptor constructor is deprecated, so the code is updated to the new constructor
      
      Author: GenTang <gen.tang86@gmail.com>
      
      Closes #3920 from GenTang/master and squashes the following commits:
      
      d2153df [GenTang] import JSONObject precisely
      4802481 [GenTang] dump the result into a singl String
      62df7f0 [GenTang] remove the comment
      21de653 [GenTang] return the string in json format
      15b1fe3 [GenTang] the modification of comments
      5cbbcfc [GenTang] the improvement of pythonconverter
      ceb31c5 [GenTang] the modification for adapting updation of hbase
      3253b61 [GenTang] the modification accompanying the improvement of pythonconverter
      4583cf4b
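      A sketch of the parsing idea from point 2 above (the sample string is an invented stand-in for the converter's single-string output):

      ```python
      import ast

      # Each value from the new converter is one string holding all records
      # of an HBase Result; ast.literal_eval turns a dict-literal string
      # back into a Python dict.
      value = "{'cf1:col1': 'a', 'cf1:col2': 'b'}"  # illustrative output
      record = ast.literal_eval(value)
      print(record["cf1:col1"])  # -> 'a'
      ```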
    • [HOTFIX] Add tests for SparkListenerApplicationStart with Driver Logs. · 368b8c2b
      Hari Shreedharan authored
      #6166 added the driver logs to `SparkListenerApplicationStart`. This adds tests in `JsonProtocolSuite` to ensure we don't regress.
      
      Author: Hari Shreedharan <hshreedharan@apache.org>
      
      Closes #6368 from harishreedharan/jsonprotocol-test and squashes the following commits:
      
      dc9eafc [Hari Shreedharan] [HOTFIX] Add tests for SparkListenerApplicationStart with Driver Logs.
      368b8c2b
    • [SPARK-7838] [STREAMING] Set scope for kinesis stream · baa89838
      Tathagata Das authored
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      
      Closes #6369 from tdas/SPARK-7838 and squashes the following commits:
      
      87d1c7f [Tathagata Das] Addressed comment
      37775d8 [Tathagata Das] set scope for kinesis stream
      baa89838
    • [MINOR] Add SparkR to create-release script · 017b3404
      Shivaram Venkataraman authored
      Enables the SparkR profiles for all the binary builds we create
      
      cc pwendell
      
      Author: Shivaram Venkataraman <shivaram@cs.berkeley.edu>
      
      Closes #6371 from shivaram/sparkr-create-release and squashes the following commits:
      
      ca5a0b2 [Shivaram Venkataraman] Add -Psparkr to create-release.sh
      017b3404
    • [SPARK-7795] [CORE] Speed up task scheduling in standalone mode by reusing serializer · a1635741
      Akshat Aranya authored
      My experiments with scheduling very short tasks in standalone cluster mode indicated that a significant amount of time was being spent in scheduling the tasks (>500ms for 256 tasks).  I found that most of the time was being spent in creating a new instance of serializer for each task.  Changing this to just one serializer brought down the scheduling time to 8ms.
      
      Author: Akshat Aranya <aaranya@quantcast.com>
      
      Closes #6323 from coolfrood/master and squashes the following commits:
      
      12d8c9e [Akshat Aranya] Reduce visibility of serializer
      bd4a5dd [Akshat Aranya] Style fix
      0b8ca93 [Akshat Aranya] Incorporate review comments
      fe530cd [Akshat Aranya] Speed up task scheduling in standalone mode by reusing serializer instead of creating a new one for each task.
      a1635741
  2. May 22, 2015
    • [SPARK-7830] [DOCS] [MLLIB] Adding logistic regression to the list of... · 63a5ce75
      Mike Dusenberry authored
      [SPARK-7830] [DOCS] [MLLIB] Adding logistic regression to the list of Multiclass Classification Supported Methods documentation
      
      Added logistic regression to the list of Multiclass Classification Supported Methods in the MLlib Classification and Regression documentation, as it was missing.
      
      Author: Mike Dusenberry <dusenberrymw@gmail.com>
      
      Closes #6357 from dusenberrymw/Add_LR_To_List_Of_Multiclass_Classification_Methods and squashes the following commits:
      
      7918650 [Mike Dusenberry] Updating broken link due to the "Binary Classification" section on the Linear Methods page being renamed to "Classification".
      3005dc2 [Mike Dusenberry] Adding logistic regression to the list of Multiclass Classification Supported Methods in the MLlib Classification and Regression documentation, as it was missing.
      63a5ce75
    • [SPARK-7224] [SPARK-7306] mock repository generator for --packages tests without nio.Path · 8014e1f6
      Burak Yavuz authored
      The previous PR for SPARK-7224 (#5790) broke JDK 6, because it used java.nio.Path, which exists in JDK 7 but not in JDK 6. This PR uses Guava's `Files` to handle directory creation and the like.
      
      The description from the previous PR:
      > This patch contains an `IvyTestUtils` file, which dynamically generates jars and pom files to test the `--packages` feature without having to rely on the internet, and Maven Central.
      
      cc pwendell
      
      I also ran the flaky test about 20 times locally; it didn't fail a single time, but I think it may fail about once every 100 builds. I still haven't figured out the cause, but the test before it, `--jars`, was also failing after we turned off the `--packages` test in `SparkSubmitSuite`. It may be related to the launch of SparkSubmit.
      
      Author: Burak Yavuz <brkyvz@gmail.com>
      
      Closes #5892 from brkyvz/maven-utils and squashes the following commits:
      
      e9b1903 [Burak Yavuz] fix merge conflict
      68214e0 [Burak Yavuz] remove ignore for test(neglect spark dependencies)
      e632381 [Burak Yavuz] fix ignore
      9ef1408 [Burak Yavuz] re-enable --packages test
      22eea62 [Burak Yavuz] Merge branch 'master' of github.com:apache/spark into maven-utils
      05cd0de [Burak Yavuz] added mock repository generator
      8014e1f6
    • [SPARK-7788] Made KinesisReceiver.onStart() non-blocking · 1c388a99
      Tathagata Das authored
      KinesisReceiver calls worker.run(), which is a blocking call (a while loop), per the source code of the kinesis-client library: https://github.com/awslabs/amazon-kinesis-client/blob/v1.2.1/src/main/java/com/amazonaws/services/kinesis/clientlibrary/lib/worker/Worker.java.
      This results in an infinite loop when calling sparkStreamingContext.stop(stopSparkContext = false, stopGracefully = true), perhaps because ReceiverTracker is never able to register the receiver (its receiverInfo field is an empty map), causing it to be stuck waiting for the running flag to be set to false.
      
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      
      Closes #6348 from tdas/SPARK-7788 and squashes the following commits:
      
      2584683 [Tathagata Das] Added receiver id in thread name
      6cf1cd4 [Tathagata Das] Made KinesisReceiver.onStart non-blocking
      1c388a99
    • [SPARK-7771] [SPARK-7779] Dynamic allocation: lower default timeouts further · 3d8760d7
      Andrew Or authored
      The default add time of 5s is still too slow for small jobs, and the current default remove time of 10 minutes seems rather high. This patch lowers both and rephrases a few log messages (the corresponding configuration keys are sketched after this entry).
      
      Author: Andrew Or <andrew@databricks.com>
      
      Closes #6301 from andrewor14/da-minor and squashes the following commits:
      
      6d614a6 [Andrew Or] Lower log level
      2811492 [Andrew Or] Log information when requests are canceled
      5fcd3eb [Andrew Or] Fix tests
      3320710 [Andrew Or] Lower timeouts + rephrase a few log messages
      3d8760d7
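      A sketch of the configuration keys in question, for anyone who wants values other than the lowered defaults (the values shown are illustrative):

      ```python
      from pyspark import SparkConf

      conf = (SparkConf()
              .set("spark.dynamicAllocation.enabled", "true")
              # how long a backlog of pending tasks is tolerated before
              # requesting more executors (the "add time")
              .set("spark.dynamicAllocation.schedulerBacklogTimeout", "1s")
              # how long an executor may sit idle before it is removed
              .set("spark.dynamicAllocation.executorIdleTimeout", "60s"))
      ```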
    • [SPARK-7834] [SQL] Better window error messages · 3c130510
      Michael Armbrust authored
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #6363 from marmbrus/windowErrors and squashes the following commits:
      
      516b02d [Michael Armbrust] [SPARK-7834] [SQL] Better window error messages
      3c130510
    • [SPARK-7760] add /json back into master & worker pages; add test · 821254fb
      Imran Rashid authored
      Author: Imran Rashid <irashid@cloudera.com>
      
      Closes #6284 from squito/SPARK-7760 and squashes the following commits:
      
      5e02d8a [Imran Rashid] style; increase timeout
      9987399 [Imran Rashid] comment
      8c7ed63 [Imran Rashid] add /json back into master & worker pages; add test
      821254fb
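      A quick way to exercise the restored endpoint (the host and port are the standalone master UI defaults and are assumptions here; urllib2 matches the Python 2 tooling of the era):

      ```python
      import json
      import urllib2

      # The standalone master web UI serves machine-readable status at /json.
      status = json.load(urllib2.urlopen("http://localhost:8080/json"))
      print(sorted(status.keys()))
      ```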
    • [SPARK-7270] [SQL] Consider dynamic partition when inserting into hive table · 126d7235
      Liang-Chi Hsieh authored
      JIRA: https://issues.apache.org/jira/browse/SPARK-7270
      
      Author: Liang-Chi Hsieh <viirya@gmail.com>
      
      Closes #5864 from viirya/dyn_partition_insert and squashes the following commits:
      
      b5627df [Liang-Chi Hsieh] For comments.
      3b21e4b [Liang-Chi Hsieh] Merge remote-tracking branch 'upstream/master' into dyn_partition_insert
      8a4352d [Liang-Chi Hsieh] Consider dynamic partition when inserting into hive table.
      126d7235
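      For reference, the kind of statement this enables is a standard Hive dynamic-partition insert; a sketch (assumes a HiveContext bound to `sqlContext`; the table and column names are invented):

      ```python
      # Dynamic partitioning must be switched on in Hive first.
      sqlContext.sql("SET hive.exec.dynamic.partition = true")
      sqlContext.sql("SET hive.exec.dynamic.partition.mode = nonstrict")

      # The partition column `dt` takes its values from the SELECT output
      # rather than from a static PARTITION (dt='...') clause.
      sqlContext.sql("""
          INSERT OVERWRITE TABLE logs PARTITION (dt)
          SELECT message, level, dt FROM staging_logs
      """)
      ```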
    • [SPARK-7724] [SQL] Support Intersect/Except in Catalyst DSL. · e4aef91f
      Santiago M. Mola authored
      Author: Santiago M. Mola <santi@mola.io>
      
      Closes #6327 from smola/feature/catalyst-dsl-set-ops and squashes the following commits:
      
      11db778 [Santiago M. Mola] [SPARK-7724] [SQL] Support Intersect/Except in Catalyst DSL.
      e4aef91f
    • [SPARK-7758] [SQL] Override more configs to avoid failure when connect to a postgre sql · 31d5d463
      WangTaoTheTonic authored
      https://issues.apache.org/jira/browse/SPARK-7758
      
      When initializing `executionHive`, we only mask
      `javax.jdo.option.ConnectionURL` to override the metastore location.  However,
      other properties that relate to the actual Hive metastore data source are not
      masked.  For example, when using Spark SQL with a PostgreSQL-backed Hive
      metastore, `executionHive` actually tries to use settings read from
      `hive-site.xml`, which refer to PostgreSQL, to connect to the temporary
      Derby metastore, thus causing errors.
      
      To fix this, we need to mask all metastore data source properties.
      Specifically, according to the code of [Hive `ObjectStore.getDataSourceProps()`
      method] [1], all properties whose name mentions "jdo" and "datanucleus" must be
      included.
      
      [1]: https://github.com/apache/hive/blob/release-0.13.1/metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java#L288
      
      Tested using PostgreSQL as the metastore; it worked fine.
      
      Author: WangTaoTheTonic <wangtao111@huawei.com>
      
      Closes #6314 from WangTaoTheTonic/SPARK-7758 and squashes the following commits:
      
      ca7ae7c [WangTaoTheTonic] add comments
      86caf2c [WangTaoTheTonic] delete unused import
      e4f0feb [WangTaoTheTonic] block more data source related property
      92a81fa [WangTaoTheTonic] fix style check
      e3e683d [WangTaoTheTonic] override more configs to avoid failuer connecting to postgre sql
      31d5d463
    • [SPARK-7766] KryoSerializerInstance reuse is unsafe when auto-reset is disabled · eac00691
      Josh Rosen authored
      SPARK-3386 / #5606 modified the shuffle write path to re-use serializer instances across multiple calls to DiskBlockObjectWriter. It turns out that this introduced a very rare bug when using `KryoSerializer`: if auto-reset is disabled and reference-tracking is enabled, then we'll end up re-using the same serializer instance to write multiple output streams without calling `reset()` between write calls, which can lead to cases where objects in one file may contain references to objects that are in previous files, causing errors during deserialization.
      
      This patch fixes this bug by calling `reset()` at the start of `serialize()` and `serializeStream()`. I also added a regression test which demonstrates that this problem only occurs when auto-reset is disabled and reference-tracking is enabled.
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #6293 from JoshRosen/kryo-instance-reuse-bug and squashes the following commits:
      
      e19726d [Josh Rosen] Add fix for SPARK-7766.
      71845e3 [Josh Rosen] Add failing regression test to trigger Kryo re-use bug
      eac00691
    • [SPARK-7574] [ML] [DOC] User guide for OneVsRest · 509d55ab
      Ram Sriharsha authored
      Including the Iris dataset (after shuffling and relabeling 3 -> 0 to conform to the 0 -> numClasses-1 labeling). Could not find an existing dataset in data/mllib for multiclass classification.
      
      Author: Ram Sriharsha <rsriharsha@hw11853.local>
      
      Closes #6296 from harsha2010/SPARK-7574 and squashes the following commits:
      
      645427c [Ram Sriharsha] cleanup
      46c41b1 [Ram Sriharsha] cleanup
      2f76295 [Ram Sriharsha] Code Review Fixes
      ebdf103 [Ram Sriharsha] Java Example
      c026613 [Ram Sriharsha] Code Review fixes
      4b7d1a6 [Ram Sriharsha] minor cleanup
      13bed9c [Ram Sriharsha] add wikipedia link
      bb9dbfa [Ram Sriharsha] Clean up naming
      6f90db1 [Ram Sriharsha] [SPARK-7574][ml][doc] User guide for OneVsRest
      509d55ab
    • Revert "[BUILD] Always run SQL tests in master build." · c63036cd
      Patrick Wendell authored
      This reverts commit 147b6be3.
      c63036cd
    • [SPARK-7404] [ML] Add RegressionEvaluator to spark.ml · f490b3b4
      Ram Sriharsha authored
      Author: Ram Sriharsha <rsriharsha@hw11853.local>
      
      Closes #6344 from harsha2010/SPARK-7404 and squashes the following commits:
      
      16b9d77 [Ram Sriharsha] consistent naming
      7f100b6 [Ram Sriharsha] cleanup
      c46044d [Ram Sriharsha] Merge with Master + Code Review Fixes
      188fa0a [Ram Sriharsha] Merge branch 'master' into SPARK-7404
      f5b6a4c [Ram Sriharsha] cleanup doc
      97beca5 [Ram Sriharsha] update test to use R packages
      32dd310 [Ram Sriharsha] fix indentation
      f93b812 [Ram Sriharsha] fix test
      1b6ebb3 [Ram Sriharsha] [SPARK-7404][ml] Add RegressionEvaluator to spark.ml
      f490b3b4
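      A minimal sketch of the evaluator as exposed in the Python API (`predictions` is assumed to be the output of a fitted regression model, with "label" and "prediction" columns):

      ```python
      from pyspark.ml.evaluation import RegressionEvaluator

      evaluator = RegressionEvaluator(metricName="rmse",
                                      labelCol="label",
                                      predictionCol="prediction")
      rmse = evaluator.evaluate(predictions)
      ```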
    • [SPARK-6743] [SQL] Fix empty projections of cached data · 3b68cb04
      Michael Armbrust authored
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #6165 from marmbrus/wrongColumn and squashes the following commits:
      
      4fad158 [Michael Armbrust] Merge remote-tracking branch 'origin/master' into wrongColumn
      aad7eab [Michael Armbrust] rxins comments
      f1e8df1 [Michael Armbrust] [SPARK-6743][SQL] Fix empty projections of cached data
      3b68cb04
    • [MINOR] [SQL] Ignores Thrift server UISeleniumSuite · 4e5220c3
      Cheng Lian authored
      This Selenium test case has been flaky for a while and has led to frequent Jenkins build failures. Let's disable it temporarily until we figure out a proper solution.
      
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #6345 from liancheng/ignore-selenium-test and squashes the following commits:
      
      09996fe [Cheng Lian] Ignores Thrift server UISeleniumSuite
      4e5220c3
    • [SPARK-7322][SQL] Window functions in DataFrame · f6f2eeb1
      Cheng Hao authored
      This closes #6104.
      
      Author: Cheng Hao <hao.cheng@intel.com>
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #6343 from rxin/window-df and squashes the following commits:
      
      026d587 [Reynold Xin] Address code review feedback.
      dc448fe [Reynold Xin] Fixed Hive tests.
      9794d9d [Reynold Xin] Moved Java test package.
      9331605 [Reynold Xin] Refactored API.
      3313e2a [Reynold Xin] Merge pull request #6104 from chenghao-intel/df_window
      d625a64 [Cheng Hao] Update the dataframe window API as suggsted
      c141fb1 [Cheng Hao] hide all of properties of the WindowFunctionDefinition
      3b1865f [Cheng Hao] scaladoc typos
      f3fd2d0 [Cheng Hao] polish the unit test
      6847825 [Cheng Hao] Add additional analystcs functions
      57e3bc0 [Cheng Hao] typos
      24a08ec [Cheng Hao] scaladoc
      28222ed [Cheng Hao] fix bug of range/row Frame
      1d91865 [Cheng Hao] style issue
      53f89f2 [Cheng Hao] remove the over from the functions.scala
      964c013 [Cheng Hao] add more unit tests and window functions
      64e18a7 [Cheng Hao] Add Window Function support for DataFrame
      f6f2eeb1
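      Among the pieces listed above are row/range frames; a sketch of declaring one through the public API (shown in Python for consistency; the column names are invented):

      ```python
      from pyspark.sql import functions as F
      from pyspark.sql.window import Window

      # A sliding frame over the current row and the three preceding rows,
      # per key, ordered by timestamp.
      w = Window.partitionBy("key").orderBy("ts").rowsBetween(-3, 0)

      df.select("key", "ts", F.avg("value").over(w).alias("moving_avg"))
      ```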
    • [SPARK-7578] [ML] [DOC] User guide for spark.ml Normalizer, IDF, StandardScaler · 2728c3df
      Joseph K. Bradley authored
      Added user guide sections with code examples.
      Also added small Java unit tests to test Java example in guide.
      
      CC: mengxr
      
      Author: Joseph K. Bradley <joseph@databricks.com>
      
      Closes #6127 from jkbradley/feature-guide-2 and squashes the following commits:
      
      cd47f4b [Joseph K. Bradley] Updated based on code review
      f16bcec [Joseph K. Bradley] Fixed merge issues and update Python examples print calls for Python 3
      0a862f9 [Joseph K. Bradley] Added Normalizer, StandardScaler to ml-features doc, plus small Java unit tests
      a21c2d6 [Joseph K. Bradley] Updated ml-features.md with IDF
      2728c3df
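      A condensed sketch of two of the transformers the guide covers (assumes a DataFrame `df` with a vector-valued "features" column):

      ```python
      from pyspark.ml.feature import Normalizer, StandardScaler

      # Normalizer rescales each row vector to unit p-norm; no fitting needed.
      normalizer = Normalizer(inputCol="features", outputCol="normFeatures", p=1.0)
      normalized = normalizer.transform(df)

      # StandardScaler is an Estimator: fit it first to collect column statistics.
      scaler = StandardScaler(inputCol="features", outputCol="scaledFeatures",
                              withStd=True, withMean=False)
      scaled = scaler.fit(df).transform(df)
      ```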
    • [SPARK-7535] [.0] [MLLIB] Audit the pipeline APIs for 1.4 · 8f11c611
      Xiangrui Meng authored
      Some changes to the pipeline APIs:
      
      1. Estimator/Transformer/ doesn’t need to extend Params since PipelineStage already does.
      2. Move Evaluator to ml.evaluation.
      3. Mention larger metric values are better.
      4. PipelineModel doc. “compiled” -> “fitted”
      5. Hide object PolynomialExpansion.
      6. Hide object VectorAssembler.
      7. Word2Vec.minCount (and other) -> group param
      8. ParamValidators -> DeveloperApi
      9. Hide MetadataUtils/SchemaUtils.
      
      jkbradley
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #6322 from mengxr/SPARK-7535.0 and squashes the following commits:
      
      9e9c7da [Xiangrui Meng] move JavaEvaluator to ml.evaluation as well
      e179480 [Xiangrui Meng] move Evaluation to ml.evaluation in PySpark
      08ef61f [Xiangrui Meng] update pipieline APIs
      8f11c611
  3. May 21, 2015
    • [DOCS] [MLLIB] Fixing broken link in MLlib Linear Methods documentation. · e4136ea6
      Mike Dusenberry authored
      Just a small change: fixed a broken link in the MLlib Linear Methods documentation by removing a newline character between the link title and link address.
      
      Author: Mike Dusenberry <dusenberrymw@gmail.com>
      
      Closes #6340 from dusenberrymw/Fix_MLlib_Linear_Methods_link and squashes the following commits:
      
      0a57818 [Mike Dusenberry] Fixing broken link in MLlib Linear Methods documentation.
      e4136ea6
    • [SPARK-7657] [YARN] Add driver logs links in application UI, in cluster mode. · 956c4c91
      Hari Shreedharan authored
      This PR adds the URLs to the driver logs to `SparkListenerApplicationStarted` event, which is later used by the `ExecutorsListener` to populate the URLs to the driver logs in its own state. This info is then used when the UI is rendered to display links to the logs.
      
      Author: Hari Shreedharan <hshreedharan@apache.org>
      
      Closes #6166 from harishreedharan/am-log-link and squashes the following commits:
      
      943fc4f [Hari Shreedharan] Merge remote-tracking branch 'asf/master' into am-log-link
      9e5c04b [Hari Shreedharan] Merge remote-tracking branch 'asf/master' into am-log-link
      b3f9b9d [Hari Shreedharan] Updated comment based on feedback.
      0840a95 [Hari Shreedharan] Move the result and sc.stop back to original location, minor import changes.
      537a2f7 [Hari Shreedharan] Add test to ensure the log urls are populated and valid.
      4033725 [Hari Shreedharan] Adding comments explaining how node reports are used to get the log urls.
      6c5c285 [Hari Shreedharan] Import order.
      346f4ea [Hari Shreedharan] Review feedback fixes.
      629c1dc [Hari Shreedharan] Cleanup.
      99fb1a3 [Hari Shreedharan] Send the log urls in App start event, to ensure that other listeners are not affected.
      c0de336 [Hari Shreedharan] Ensure new unit test cleans up after itself.
      50cdae3 [Hari Shreedharan] Added unit test, made the approach generic.
      402e8e4 [Hari Shreedharan] Use `NodeReport` to get the URL for the logs. Also, make the environment variables generic so other cluster managers can use them as well.
      1cf338f [Hari Shreedharan] [SPARK-7657][YARN] Add driver link in application UI, in cluster mode.
      956c4c91
    • [SPARK-7219] [MLLIB] Output feature attributes in HashingTF · 85b96372
      Xiangrui Meng authored
      This PR updates `HashingTF` to output ML attributes that tell the number of features in the output column. We need to expand `UnaryTransformer` to support output metadata. A `def outputMetadata: Metadata` is not sufficient because the metadata may also depend on the input data. Though this is not true for `HashingTF`, I think it is reasonable to update `UnaryTransformer` in a separate PR. `checkParams` is added to verify common requirements for params. I will send a separate PR to use it in other test suites. jkbradley
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #6308 from mengxr/SPARK-7219 and squashes the following commits:
      
      9bd2922 [Xiangrui Meng] address comments
      e82a68a [Xiangrui Meng] remove sqlContext from test suite
      995535b [Xiangrui Meng] Merge remote-tracking branch 'apache/master' into SPARK-7219
      2194703 [Xiangrui Meng] add test for attributes
      178ae23 [Xiangrui Meng] update HashingTF with tests
      91a6106 [Xiangrui Meng] WIP
      85b96372
    • [SPARK-7794] [MLLIB] update RegexTokenizer default settings · f5db4b41
      Xiangrui Meng authored
      The previous default is `{gaps: false, pattern: "\\p{L}+|[^\\p{L}\\s]+"}`. The default pattern is hard to understand. This PR changes the default to `{gaps: true, pattern: "\\s+"}`. jkbradley
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #6330 from mengxr/SPARK-7794 and squashes the following commits:
      
      5ee7cde [Xiangrui Meng] update RegexTokenizer default settings
      f5db4b41
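      A sketch of what the new defaults mean in the Python API (the input and output column names are invented):

      ```python
      from pyspark.ml.feature import RegexTokenizer

      # With gaps=True, the pattern describes the separators between tokens
      # rather than the tokens themselves; these two forms are equivalent
      # under the new defaults.
      tokenizer = RegexTokenizer(inputCol="text", outputCol="words")
      tokenizer = RegexTokenizer(inputCol="text", outputCol="words",
                                 gaps=True, pattern="\\s+")
      ```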
    • [SPARK-7783] [SQL] [PySpark] add DataFrame.rollup/cube in Python · 17791a58
      Davies Liu authored
      Author: Davies Liu <davies@databricks.com>
      
      Closes #6311 from davies/rollup and squashes the following commits:
      
      0261db1 [Davies Liu] use @since
      a51ca6b [Davies Liu] Merge branch 'master' of github.com:apache/spark into rollup
      8ad5af4 [Davies Liu] Update dataframe.py
      ade3841 [Davies Liu] add DataFrame.rollup/cube in Python
      17791a58
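      A short sketch of the two new methods (the DataFrame and its columns are invented for illustration):

      ```python
      # rollup aggregates over the hierarchical combinations
      # (dept, gender), (dept), and the grand total.
      df.rollup("dept", "gender").agg({"salary": "avg"}).show()

      # cube additionally covers every remaining combination,
      # e.g. (gender) alone.
      df.cube("dept", "gender").agg({"salary": "avg"}).show()
      ```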
    • [SPARK-7776] [STREAMING] Added shutdown hook to StreamingContext · d68ea24d
      Tathagata Das authored
      A shutdown hook to stop the SparkContext was added recently. This results in ugly errors when a streaming application is terminated by ctrl-C.
      
      ```
      Exception in thread "Thread-27" org.apache.spark.SparkException: Job cancelled because SparkContext was shut down
      	at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:736)
      	at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:735)
      	at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
      	at org.apache.spark.scheduler.DAGScheduler.cleanUpAfterSchedulerStop(DAGScheduler.scala:735)
      	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onStop(DAGScheduler.scala:1468)
      	at org.apache.spark.util.EventLoop.stop(EventLoop.scala:84)
      	at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:1403)
      	at org.apache.spark.SparkContext.stop(SparkContext.scala:1642)
      	at org.apache.spark.SparkContext$$anonfun$3.apply$mcV$sp(SparkContext.scala:559)
      	at org.apache.spark.util.SparkShutdownHook.run(Utils.scala:2266)
      	at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(Utils.scala:2236)
      	at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(Utils.scala:2236)
      	at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(Utils.scala:2236)
      	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1764)
      	at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply$mcV$sp(Utils.scala:2236)
      	at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(Utils.scala:2236)
      	at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(Utils.scala:2236)
      	at scala.util.Try$.apply(Try.scala:161)
      	at org.apache.spark.util.SparkShutdownHookManager.runAll(Utils.scala:2236)
      	at org.apache.spark.util.SparkShutdownHookManager$$anon$6.run(Utils.scala:2218)
      	at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
      ```
      
      This is because Spark's shutdown hook stops the context, and the streaming jobs fail in the middle. The correct solution is to stop the streaming context before the Spark context. This PR adds a shutdown hook to do so, with a priority higher than that of the SparkContext's shutdown hook (the related configuration is sketched after this entry).
      
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      
      Closes #6307 from tdas/SPARK-7776 and squashes the following commits:
      
      e3d5475 [Tathagata Das] Added conf to specify graceful shutdown
      4c18652 [Tathagata Das] Added shutdown hook to StreamingContxt.
      d68ea24d
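      The squashed commits above mention a new setting for graceful shutdown; a sketch of enabling it (the key matches the streaming option added here; the value is illustrative):

      ```python
      from pyspark import SparkConf

      # On JVM shutdown (e.g. ctrl-C), stop the StreamingContext gracefully
      # -- letting queued batches finish -- before the SparkContext stops.
      conf = SparkConf().set("spark.streaming.stopGracefullyOnShutdown", "true")
      ```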
    • [SPARK-7737] [SQL] Use leaf dirs having data files to discover partitions. · 347b5010
      Yin Huai authored
      https://issues.apache.org/jira/browse/SPARK-7737
      
      cc liancheng
      
      Author: Yin Huai <yhuai@databricks.com>
      
      Closes #6329 from yhuai/spark-7737 and squashes the following commits:
      
      7e0dfc7 [Yin Huai] Use leaf dirs having data files to discover partitions.
      347b5010
    • [BUILD] Always run SQL tests in master build. · 147b6be3
      Yin Huai authored
      It seems our master build does not run HiveCompatibilitySuite (because _RUN_SQL_TESTS is not set). This PR introduces a property, `AMP_JENKINS_PRB`, to differentiate a PR build from a regular build. If a build is a regular one, we always set _RUN_SQL_TESTS to true.
      
      cc JoshRosen nchammas
      
      Author: Yin Huai <yhuai@databricks.com>
      
      Closes #5955 from yhuai/runSQLTests and squashes the following commits:
      
      3d399bc [Yin Huai] Always run SQL tests in master build.
      147b6be3
    • [SPARK-7800] isDefined should not marked too early in putNewKey · 5a3c04bb
      Liang-Chi Hsieh authored
      JIRA: https://issues.apache.org/jira/browse/SPARK-7800
      
      `isDefined` is marked as true twice in `Location.putNewKey`. The first one is unnecessary and causes problems because it happens too early, before some assert checks. E.g., if an attempt with an incorrect `keyLengthBytes` marks `isDefined` as true, the location cannot be used later.
      
      ping JoshRosen
      
      Author: Liang-Chi Hsieh <viirya@gmail.com>
      
      Closes #6324 from viirya/dup_isdefined and squashes the following commits:
      
      cbfe03b [Liang-Chi Hsieh] isDefined should not marked too early in putNewKey.
      5a3c04bb
    • [SPARK-7718] [SQL] Speed up partitioning by avoiding closure cleaning · 5287eec5
      Andrew Or authored
      According to yhuai, we spent 6-7 seconds cleaning closures in a partitioning job that takes 12 seconds. Since we provide these closures in Spark, we know for sure they are serializable, so we can bypass the cleaning.
      
      Author: Andrew Or <andrew@databricks.com>
      
      Closes #6256 from andrewor14/sql-partition-speed-up and squashes the following commits:
      
      a82b451 [Andrew Or] Fix style
      10f7e3e [Andrew Or] Avoid getting call sites and cleaning closures
      17e2943 [Andrew Or] Merge branch 'master' of github.com:apache/spark into sql-partition-speed-up
      523f042 [Andrew Or] Skip unnecessary Utils.getCallSites too
      f7fe143 [Andrew Or] Avoid unnecessary closure cleaning
      5287eec5
    • [SPARK-7711] Add a startTime property to match the corresponding one in Scala · 6b18cdc1
      Holden Karau authored
      Author: Holden Karau <holden@pigscanfly.ca>
      
      Closes #6275 from holdenk/SPARK-771-startTime-is-missing-from-pyspark and squashes the following commits:
      
      06662dc [Holden Karau] add mising blank line for style checks
      7a87410 [Holden Karau] add back missing newline
      7a7876b [Holden Karau] Add a startTime property to match the corresponding one in the Scala SparkContext
      6b18cdc1
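      A minimal sketch of the mirrored property (the app name is invented):

      ```python
      from pyspark import SparkContext

      sc = SparkContext(appName="demo")
      # Matches the Scala SparkContext: the epoch time (ms) the context started.
      print(sc.startTime)
      ```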