  1. May 29, 2015
    • Michael Nazario's avatar
      [SPARK-7899] [PYSPARK] Fix Python 3 pyspark/sql/types module conflict · 1c5b1982
      Michael Nazario authored
      This PR makes the types module in `pyspark/sql/types` work with pylint static analysis by removing the dynamic naming of the `pyspark/sql/_types` module to `pyspark/sql/types`.
      
      Tests are now loaded using `$PYSPARK_DRIVER_PYTHON -m module` rather than `$PYSPARK_DRIVER_PYTHON module.py`. The old method adds the location of `module.py` to `sys.path`, so this change prevents accidental use of relative paths in Python.
      
      Author: Michael Nazario <mnazario@palantir.com>
      
      Closes #6439 from mnazario/feature/SPARK-7899 and squashes the following commits:
      
      366ef30 [Michael Nazario] Remove hack on random.py
      bb8b04d [Michael Nazario] Make doctests consistent with other tests
      6ee4f75 [Michael Nazario] Change test scripts to use "-m"
      673528f [Michael Nazario] Move _types back to types
      1c5b1982
    • Shivaram Venkataraman's avatar
      [SPARK-6806] [SPARKR] [DOCS] Add a new SparkR programming guide · 5f48e5c3
      Shivaram Venkataraman authored
      This PR adds a new SparkR programming guide at the top-level. This will be useful for R users as our APIs don't directly match the Scala/Python APIs and as we need to explain SparkR without using RDDs as examples etc.
      
      cc rxin davies pwendell
      
      cc cafreeman -- Would be great if you could also take a look at this !
      
      Author: Shivaram Venkataraman <shivaram@cs.berkeley.edu>
      
      Closes #6490 from shivaram/sparkr-guide and squashes the following commits:
      
      d5ff360 [Shivaram Venkataraman] Add a section on HiveContext, HQL queries
      408dce5 [Shivaram Venkataraman] Fix link
      dbb86e3 [Shivaram Venkataraman] Fix minor typo
      9aff5e0 [Shivaram Venkataraman] Address comments, use dplyr-like syntax in example
      d09703c [Shivaram Venkataraman] Fix default argument in read.df
      ea816a1 [Shivaram Venkataraman] Add a new SparkR programming guide Also update write.df, read.df to handle defaults better
      5f48e5c3
    • Andrew Or's avatar
      [SPARK-7558] Demarcate tests in unit-tests.log · 9eb222c1
      Andrew Or authored
Right now `unit-tests.log` is not of much value because we can't easily tell where the test boundaries are. This patch adds log statements before and after each test to outline the test boundaries, e.g.:
      
      ```
      ===== TEST OUTPUT FOR o.a.s.serializer.KryoSerializerSuite: 'kryo with parallelize for primitive arrays' =====
      
      15/05/27 12:36:39.596 pool-1-thread-1-ScalaTest-running-KryoSerializerSuite INFO SparkContext: Starting job: count at KryoSerializerSuite.scala:230
      15/05/27 12:36:39.596 dag-scheduler-event-loop INFO DAGScheduler: Got job 3 (count at KryoSerializerSuite.scala:230) with 4 output partitions (allowLocal=false)
      15/05/27 12:36:39.596 dag-scheduler-event-loop INFO DAGScheduler: Final stage: ResultStage 3(count at KryoSerializerSuite.scala:230)
      15/05/27 12:36:39.596 dag-scheduler-event-loop INFO DAGScheduler: Parents of final stage: List()
      15/05/27 12:36:39.597 dag-scheduler-event-loop INFO DAGScheduler: Missing parents: List()
      15/05/27 12:36:39.597 dag-scheduler-event-loop INFO DAGScheduler: Submitting ResultStage 3 (ParallelCollectionRDD[5] at parallelize at KryoSerializerSuite.scala:230), which has no missing parents
      
      ...
      
      15/05/27 12:36:39.624 pool-1-thread-1-ScalaTest-running-KryoSerializerSuite INFO DAGScheduler: Job 3 finished: count at KryoSerializerSuite.scala:230, took 0.028563 s
      15/05/27 12:36:39.625 pool-1-thread-1-ScalaTest-running-KryoSerializerSuite INFO KryoSerializerSuite:
      
      ***** FINISHED o.a.s.serializer.KryoSerializerSuite: 'kryo with parallelize for primitive arrays' *****
      
      ...
      ```
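For illustration only, the same banner idea can be sketched as a Python `unittest` base class (hypothetical names; the actual change introduces a Scala base suite that all test suites extend):

```python
import logging
import unittest

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("unit-tests")

class BannerTestCase(unittest.TestCase):
    """Logs banners around every test so interleaved output is attributable."""

    def setUp(self):
        log.info("===== TEST OUTPUT FOR %s: '%s' =====",
                 type(self).__name__, self._testMethodName)

    def tearDown(self):
        log.info("***** FINISHED %s: '%s' *****",
                 type(self).__name__, self._testMethodName)

class ExampleSuite(BannerTestCase):
    def test_addition(self):
        log.info("this line is now easy to attribute to test_addition")
        self.assertEqual(2 + 2, 4)

suite = unittest.TestLoader().loadTestsFromTestCase(ExampleSuite)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```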
      
      Author: Andrew Or <andrew@databricks.com>
      
      Closes #6441 from andrewor14/demarcate-tests and squashes the following commits:
      
      879b060 [Andrew Or] Fix compile after rebase
      d622af7 [Andrew Or] Merge branch 'master' of github.com:apache/spark into demarcate-tests
      017c8ba [Andrew Or] Merge branch 'master' of github.com:apache/spark into demarcate-tests
      7790b6c [Andrew Or] Fix tests after logical merge conflict
      c7460c0 [Andrew Or] Merge branch 'master' of github.com:apache/spark into demarcate-tests
      c43ffc4 [Andrew Or] Fix tests?
      8882581 [Andrew Or] Fix tests
      ee22cda [Andrew Or] Fix log message
      fa9450e [Andrew Or] Merge branch 'master' of github.com:apache/spark into demarcate-tests
      12d1e1b [Andrew Or] Various whitespace changes (minor)
      69cbb24 [Andrew Or] Make all test suites extend SparkFunSuite instead of FunSuite
      bbce12e [Andrew Or] Fix manual things that cannot be covered through automation
      da0b12f [Andrew Or] Add core tests as dependencies in all modules
      f7d29ce [Andrew Or] Introduce base abstract class for all test suites
      9eb222c1
    • Reynold Xin's avatar
      [SPARK-7940] Enforce whitespace checking for DO, TRY, CATCH, FINALLY, MATCH,... · 94f62a49
      Reynold Xin authored
      [SPARK-7940] Enforce whitespace checking for DO, TRY, CATCH, FINALLY, MATCH, LARROW, RARROW in style checker.
      
      …
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #6491 from rxin/more-whitespace and squashes the following commits:
      
      f6e63dc [Reynold Xin] [SPARK-7940] Enforce whitespace checking for DO, TRY, CATCH, FINALLY, MATCH, LARROW, RARROW in style checker.
      94f62a49
    • MechCoder's avatar
      [SPARK-7946] [MLLIB] DecayFactor wrongly set in StreamingKMeans · 6181937f
      MechCoder authored
      Author: MechCoder <manojkumarsivaraj334@gmail.com>
      
      Closes #6497 from MechCoder/spark-7946 and squashes the following commits:
      
      2fdd0a3 [MechCoder] Add non-regression test
      8c988c6 [MechCoder] [SPARK-7946] DecayFactor wrongly set in StreamingKMeans
      6181937f
    • Cheng Lian's avatar
      [SQL] [TEST] [MINOR] Uses a temporary log4j.properties in... · 4782e130
      Cheng Lian authored
      [SQL] [TEST] [MINOR] Uses a temporary log4j.properties in HiveThriftServer2Test to ensure expected logging behavior
      
      The `HiveThriftServer2Test` relies on proper logging behavior to assert whether the Thrift server daemon process is started successfully. However, some other jar files listed in the classpath may potentially contain an unexpected Log4J configuration file which overrides the logging behavior.
      
This PR writes a temporary `log4j.properties` and prepends it to the driver classpath before starting the testing Thrift server process to ensure proper logging behavior.
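A minimal Python sketch of the idea (the file contents and classpath value are illustrative; the real test harness is Scala):

```python
import os
import tempfile

# Write a throwaway log4j.properties whose directory is prepended to the
# driver classpath, so it wins over any log4j.properties bundled in jars
# elsewhere on the classpath.
conf_dir = tempfile.mkdtemp()
log4j_path = os.path.join(conf_dir, "log4j.properties")
with open(log4j_path, "w") as f:
    f.write("log4j.rootCategory=INFO, console\n"
            "log4j.appender.console=org.apache.log4j.ConsoleAppender\n"
            "log4j.appender.console.layout=org.apache.log4j.PatternLayout\n")

# Hypothetical existing classpath; prepending puts conf_dir first in the
# resource lookup order.
existing_classpath = "/opt/spark/jars/some-dep.jar"
driver_classpath = os.pathsep.join([conf_dir, existing_classpath])
print(driver_classpath.split(os.pathsep)[0] == conf_dir)  # True
```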
      
      cc andrewor14 yhuai
      
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #6493 from liancheng/override-log4j and squashes the following commits:
      
      c489e0e [Cheng Lian] Fixes minor Scala styling issue
      b46ef0d [Cheng Lian] Uses a temporary log4j.properties in HiveThriftServer2Test to ensure expected logging behavior
      4782e130
    • Cheng Lian's avatar
      [SPARK-7950] [SQL] Sets spark.sql.hive.version in HiveThriftServer2.startWithContext() · e7b61775
      Cheng Lian authored
When starting `HiveThriftServer2` via `startWithContext`, the property `spark.sql.hive.version` isn't set. This causes Simba ODBC driver 1.0.8.1006 to behave differently and fail simple queries.
      
The Hive2 JDBC driver works fine in this case. Also, when starting the server with `start-thriftserver.sh`, both the Hive2 JDBC driver and the Simba ODBC driver work fine.
      
      Please refer to [SPARK-7950] [1] for details.
      
      [1]: https://issues.apache.org/jira/browse/SPARK-7950
      
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #6500 from liancheng/odbc-bugfix and squashes the following commits:
      
      051e3a3 [Cheng Lian] Fixes import order
      3a97376 [Cheng Lian] Sets spark.sql.hive.version in HiveThriftServer2.startWithContext()
      e7b61775
    • WangTaoTheTonic's avatar
      [SPARK-7524] [SPARK-7846] add configs for keytab and principal, pass these two... · a51b133d
      WangTaoTheTonic authored
      [SPARK-7524] [SPARK-7846] add configs for keytab and principal, pass these two configs with different way in different modes
      
* Spark now supports long-running services by updating tokens for the NameNode, but it only accepts parameters passed in the "--k=v" format, which is not very convenient. This patch adds support for `spark.*` configs in the properties file and system properties.
      
* The --principal and --keytab options are passed to the client, but when we start the Thrift server or spark-shell these two are also passed into the main class (org.apache.spark.sql.hive.thriftserver.HiveThriftServer2 and org.apache.spark.repl.Main).
In these two main classes, the arguments passed in are processed by third-party libraries, which leads to errors such as "Invalid option: --principal" or "Unrecognised option: --principal".
We should pass these command args in a different form, such as system properties.
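A minimal sketch of the idea, assuming hypothetical property names derived from the option names: absorb the Kerberos flags into configuration entries so downstream main classes never see unrecognized options.

```python
# Translate CLI-only options into configuration entries (property names
# here are assumptions for illustration) so the remaining argument list
# can be handed to third-party option parsers safely.
def absorb_kerberos_args(args):
    conf, passthrough = {}, []
    it = iter(args)
    for arg in it:
        if arg in ("--principal", "--keytab"):
            conf["spark.yarn." + arg.lstrip("-")] = next(it)
        else:
            passthrough.append(arg)
    return conf, passthrough

conf, rest = absorb_kerberos_args(
    ["--principal", "user@REALM", "--keytab", "/etc/krb.keytab", "--verbose"])
print(conf, rest)
```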
      
      Author: WangTaoTheTonic <wangtao111@huawei.com>
      
      Closes #6051 from WangTaoTheTonic/SPARK-7524 and squashes the following commits:
      
      e65699a [WangTaoTheTonic] change logic to loadEnvironments
      ebd9ea0 [WangTaoTheTonic] merge master
      ecfe43a [WangTaoTheTonic] pass keytab and principal seperately in different mode
      33a7f40 [WangTaoTheTonic] expand the use of the current configs
      08bb4e8 [WangTaoTheTonic] fix wrong cite
      73afa64 [WangTaoTheTonic] add configs for keytab and principal, move originals to internal
      a51b133d
    • zsxwing's avatar
      [SPARK-7863] [CORE] Create SimpleDateFormat for every SimpleDateParam instance... · 8db40f67
      zsxwing authored
      [SPARK-7863] [CORE] Create SimpleDateFormat for every SimpleDateParam instance because it's not thread-safe
      
      SimpleDateFormat is not thread-safe. This PR creates new `SimpleDateFormat` for each `SimpleDateParam` instance.
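As a language-neutral sketch of the per-instance pattern (Python's `strptime` does not share mutable parser state, so this only illustrates the structure; the format strings are illustrative):

```python
from datetime import datetime

class SimpleDateParam:
    """Hypothetical Python analogue of the fix: each instance owns its own
    format handling, mirroring giving each SimpleDateParam a fresh
    SimpleDateFormat instead of sharing one mutable formatter across threads."""

    # Illustrative formats, loosely following the REST API's date params.
    FORMATS = ("%Y-%m-%dT%H:%M:%S", "%Y-%m-%d")

    def __init__(self, raw):
        self.timestamp = self._parse(raw)

    def _parse(self, raw):
        for fmt in self.FORMATS:
            try:
                return datetime.strptime(raw, fmt)
            except ValueError:
                continue
        raise ValueError("Cannot parse date: %r" % raw)

print(SimpleDateParam("2015-05-28").timestamp)
print(SimpleDateParam("2015-05-28T12:36:39").timestamp)
```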
      
      Author: zsxwing <zsxwing@gmail.com>
      
      Closes #6406 from zsxwing/SPARK-7863 and squashes the following commits:
      
      aeed4c1 [zsxwing] Rewrite SimpleDateParam
      8cdd986 [zsxwing] Inline formats
      9680a15 [zsxwing] Create SimpleDateFormat for each SimpleDateParam instance because it's not thread-safe
      8db40f67
    • Tim Ellison's avatar
      [SPARK-7756] [CORE] Use testing cipher suites common to Oracle and IBM security providers · bf465807
      Tim Ellison authored
      Add alias names for supported cipher suites to the sample SSL configuration.
      
      The IBM JSSE provider reports its cipher suite with an SSL_ prefix, but accepts TLS_ prefixed suite names as an alias.  However, Jetty filters the requested ciphers based on the provider's reported supported suites, so the TLS_ versions are never passed through to JSSE causing an SSL handshake failure.
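The filtering problem can be sketched in a few lines (suite names are illustrative): listing both spellings means at least one survives the intersection with whatever the provider reports.

```python
# Jetty keeps only the requested ciphers that the provider reports as
# supported, so requesting both the TLS_ and SSL_ spellings survives
# either provider's naming convention.
requested = [
    "TLS_RSA_WITH_AES_128_CBC_SHA",
    "SSL_RSA_WITH_AES_128_CBC_SHA",  # IBM JSSE alias of the line above
]

def usable(requested, provider_supported):
    return [c for c in requested if c in provider_supported]

oracle = {"TLS_RSA_WITH_AES_128_CBC_SHA"}
ibm = {"SSL_RSA_WITH_AES_128_CBC_SHA"}

print(usable(requested, oracle))  # TLS_ spelling survives
print(usable(requested, ibm))     # SSL_ spelling survives
```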
      
      Author: Tim Ellison <t.p.ellison@gmail.com>
      
      Closes #6282 from tellison/SSLFailure and squashes the following commits:
      
      8de8a3e [Tim Ellison] Update SecurityManagerSuite with new expected suite names
      96158b2 [Tim Ellison] Update the sample configs to use ciphers that are common to both the Oracle and IBM security providers.
      705421b [Tim Ellison] Merge branch 'master' of github.com:tellison/spark into SSLFailure
      68b9425 [Tim Ellison] Merge branch 'master' of https://github.com/apache/spark into SSLFailure
      b0c35f6 [Tim Ellison] [CORE] Add aliases used for cipher suites in IBM provider
      bf465807
    • Xiangrui Meng's avatar
      [SPARK-7912] [SPARK-7921] [MLLIB] Update OneHotEncoder to handle ML attributes... · 23452be9
      Xiangrui Meng authored
      [SPARK-7912] [SPARK-7921] [MLLIB] Update OneHotEncoder to handle ML attributes and change includeFirst to dropLast
      
      This PR contains two major changes to `OneHotEncoder`:
      
      1. more robust handling of ML attributes. If the input attribute is unknown, we look at the values to get the max category index
2. change `includeFirst` to `dropLast` and leave the default as `true`. There are a couple of benefits:
      
          a. consistent with other tutorials of one-hot encoding (or dummy coding) (e.g., http://www.ats.ucla.edu/stat/mult_pkg/faq/general/dummy.htm)
          b. keep the indices unmodified in the output vector. If we drop the first, all indices will be shifted by 1.
    c. If users use `StringIndexer`, the last element is the least frequent one.
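A minimal sketch of the `dropLast` semantics (hypothetical helper; the real API is `OneHotEncoder` in `pyspark.ml.feature`): indices stay unmodified and the last category becomes the all-zeros vector.

```python
# With drop_last=True the category index maps straight to the vector
# index, and the final category is encoded as all zeros.
def one_hot(index, num_categories, drop_last=True):
    size = num_categories - 1 if drop_last else num_categories
    vec = [0.0] * size
    if index < size:
        vec[index] = 1.0
    return vec

print(one_hot(0, 3))                   # [1.0, 0.0]
print(one_hot(2, 3))                   # [0.0, 0.0]  last category dropped
print(one_hot(2, 3, drop_last=False))  # [0.0, 0.0, 1.0]
```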
      
      Sorry for including two changes in one PR! I'll update the user guide in another PR.
      
      jkbradley sryza
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #6466 from mengxr/SPARK-7912 and squashes the following commits:
      
      a280dca [Xiangrui Meng] fix tests
      d8f234d [Xiangrui Meng] Merge remote-tracking branch 'apache/master' into SPARK-7912
      171b276 [Xiangrui Meng] mention the difference between our impl vs sklearn's
      00dfd96 [Xiangrui Meng] update OneHotEncoder in Python
      208ddad [Xiangrui Meng] update OneHotEncoder to handle ML attributes and change includeFirst to dropLast
      23452be9
    • Reynold Xin's avatar
      [SPARK-7929] Turn whitespace checker on for more token types. · 97a60cf7
      Reynold Xin authored
      This is the last batch of changes to complete SPARK-7929.
      
      Previous related PRs:
      https://github.com/apache/spark/pull/6480
      https://github.com/apache/spark/pull/6478
      https://github.com/apache/spark/pull/6477
      https://github.com/apache/spark/pull/6476
      https://github.com/apache/spark/pull/6475
      https://github.com/apache/spark/pull/6474
      https://github.com/apache/spark/pull/6473
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #6487 from rxin/whitespace-lint and squashes the following commits:
      
      b33d43d [Reynold Xin] [SPARK-7929] Turn whitespace checker on for more token types.
      97a60cf7
    • Patrick Wendell's avatar
      36067ce3
    • Tathagata Das's avatar
      [SPARK-7931] [STREAMING] Do not restart receiver when stopped · e714ecf2
      Tathagata Das authored
Attempting to restart the socket receiver when it is supposed to be stopped causes undesirable error messages.
      
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      
      Closes #6483 from tdas/SPARK-7931 and squashes the following commits:
      
      09aeee1 [Tathagata Das] Do not restart receiver when stopped
      e714ecf2
    • Xiangrui Meng's avatar
      [SPARK-7922] [MLLIB] use DataFrames for user/item factors in ALSModel · db951378
      Xiangrui Meng authored
      Expose user/item factors in DataFrames. This is to be more consistent with the pipeline API. It also helps maintain consistent APIs across languages. This PR also removed fitting params from `ALSModel`.
      
      coderxiang
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #6468 from mengxr/SPARK-7922 and squashes the following commits:
      
      7bfb1d5 [Xiangrui Meng] update ALSModel in PySpark
      1ba5607 [Xiangrui Meng] use DataFrames for user/item factors in ALS
      db951378
    • Tathagata Das's avatar
      [SPARK-7930] [CORE] [STREAMING] Fixed shutdown hook priorities · cd3d9a5c
      Tathagata Das authored
The shutdown hook for temp directories had priority 100 while SparkContext's was 50, so the local root directory was deleted before the SparkContext was shut down. This leads to scary errors when running jobs at the time of shutdown. This is especially a problem when running streaming examples, where Ctrl-C is the only way to shut down.
      
The fix in this PR is to make the temp directory shutdown priority lower than SparkContext's, so that the temp dirs are the last thing to get deleted, after the SparkContext has been shut down. Also, the DiskBlockManager shutdown priority is changed from the default 100 to temp_dir_prio + 1, so that it gets invoked just before all temp dirs are cleared.
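The priority ordering can be sketched as follows (the priority constants are illustrative): higher-priority hooks run first, so the SparkContext hook must outrank the temp-directory cleanup.

```python
# Hypothetical priorities: higher runs first.
SPARK_CONTEXT_SHUTDOWN_PRIORITY = 50
TEMP_DIR_SHUTDOWN_PRIORITY = 25            # lower than SparkContext's
DISK_BLOCK_MANAGER_SHUTDOWN_PRIORITY = TEMP_DIR_SHUTDOWN_PRIORITY + 1

hooks = []

def add_shutdown_hook(priority, fn):
    hooks.append((priority, fn))

def run_shutdown_hooks():
    order = []
    for priority, fn in sorted(hooks, key=lambda h: -h[0]):
        fn(order)
    return order

add_shutdown_hook(TEMP_DIR_SHUTDOWN_PRIORITY, lambda o: o.append("delete temp dirs"))
add_shutdown_hook(SPARK_CONTEXT_SHUTDOWN_PRIORITY, lambda o: o.append("stop SparkContext"))
add_shutdown_hook(DISK_BLOCK_MANAGER_SHUTDOWN_PRIORITY, lambda o: o.append("stop DiskBlockManager"))

print(run_shutdown_hooks())
```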
      
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      
      Closes #6482 from tdas/SPARK-7930 and squashes the following commits:
      
      d7cbeb5 [Tathagata Das] Removed unnecessary line
      1514d0b [Tathagata Das] Fixed shutdown hook priorities
      cd3d9a5c
    • Kay Ousterhout's avatar
      [SPARK-7932] Fix misleading scheduler delay visualization · 04ddcd4d
      Kay Ousterhout authored
      The existing code rounds down to the nearest percent when computing the proportion
      of a task's time that was spent on each phase of execution, and then computes
      the scheduler delay proportion as 100 - sum(all other proportions).  As a result,
      a few extra percent can end up in the scheduler delay. This commit eliminates
      the rounding so that the time visualizations correspond properly to the real times.
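The effect of the rounding can be seen with made-up task timings:

```python
# Hypothetical task timings in ms; the true scheduler delay is 35 ms (5%).
total = 700
phases = {"deserialization": 105, "shuffle read": 110, "compute": 450}

# Old approach: floor each proportion to a whole percent, then assign the
# remainder to scheduler delay. The discarded fractions inflate the delay.
floored = {k: int(100 * v / total) for k, v in phases.items()}
delay_rounded = 100 - sum(floored.values())

# New approach: keep exact proportions so bars match the real times.
true_delay_ms = total - sum(phases.values())
delay_exact = 100.0 * true_delay_ms / total

print(delay_rounded, delay_exact)  # 6 5.0
```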
      
      sarutak If you could take a look at this, that would be great! Not sure if there's a good
      reason to round here that I missed.
      
      cc shivaram
      
      Author: Kay Ousterhout <kayousterhout@gmail.com>
      
      Closes #6484 from kayousterhout/SPARK-7932 and squashes the following commits:
      
      1723cc4 [Kay Ousterhout] [SPARK-7932] Fix misleading scheduler delay visualization
      04ddcd4d
  2. May 28, 2015
    • Xiangrui Meng's avatar
      [MINOR] fix RegressionEvaluator doc · 834e6995
      Xiangrui Meng authored
      `make clean html` under `python/doc` returns
      ~~~
      /Users/meng/src/spark/python/pyspark/ml/evaluation.py:docstring of pyspark.ml.evaluation.RegressionEvaluator.setParams:3: WARNING: Definition list ends without a blank line; unexpected unindent.
      ~~~
      
      harsha2010
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #6469 from mengxr/fix-regression-evaluator-doc and squashes the following commits:
      
      91e2dad [Xiangrui Meng] fix RegressionEvaluator doc
      834e6995
    • Xiangrui Meng's avatar
      [SPARK-7926] [PYSPARK] use the official Pyrolite release · c45d58c1
      Xiangrui Meng authored
Switch to the official Pyrolite release from the one published under `org.spark-project`. Thanks irmen for making the releases on Maven Central. We didn't upgrade to 4.6 because we don't have enough time for QA. I excluded `serpent` from its dependencies because we don't use it in Spark.
      ~~~
      [info]   +-net.jpountz.lz4:lz4:1.3.0
      [info]   +-net.razorvine:pyrolite:4.4
      [info]   +-net.sf.py4j:py4j:0.8.2.1
      ~~~
      
      davies
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #6472 from mengxr/SPARK-7926 and squashes the following commits:
      
      7b3c6bf [Xiangrui Meng] use the official Pyrolite release
      c45d58c1
    • Reynold Xin's avatar
      [SPARK-7927] whitespace fixes for GraphX. · b069ad23
      Reynold Xin authored
      So we can enable a whitespace enforcement rule in the style checker to save code review time.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #6474 from rxin/whitespace-graphx and squashes the following commits:
      
      4d3cd26 [Reynold Xin] Fixed tests.
      869dde4 [Reynold Xin] [SPARK-7927] whitespace fixes for GraphX.
      b069ad23
    • Reynold Xin's avatar
      [SPARK-7927] whitespace fixes for core. · 7f7505d8
      Reynold Xin authored
      So we can enable a whitespace enforcement rule in the style checker to save code review time.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #6473 from rxin/whitespace-core and squashes the following commits:
      
      058195d [Reynold Xin] Fixed tests.
      fce11e9 [Reynold Xin] [SPARK-7927] whitespace fixes for core.
      7f7505d8
    • Reynold Xin's avatar
      [SPARK-7927] whitespace fixes for Catalyst module. · 8da560d7
      Reynold Xin authored
      So we can enable a whitespace enforcement rule in the style checker to save code review time.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #6476 from rxin/whitespace-catalyst and squashes the following commits:
      
      650409d [Reynold Xin] Fixed tests.
      51a9e5d [Reynold Xin] [SPARK-7927] whitespace fixes for Catalyst module.
      8da560d7
    • Reynold Xin's avatar
      [SPARK-7929] Remove Bagel examples & whitespace fix for examples. · 2881d14c
      Reynold Xin authored
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #6480 from rxin/whitespace-example and squashes the following commits:
      
      8a4a3d4 [Reynold Xin] [SPARK-7929] Remove Bagel examples & whitespace fix for examples.
      2881d14c
    • Reynold Xin's avatar
      [SPARK-7927] whitespace fixes for SQL core. · ff44c711
      Reynold Xin authored
      So we can enable a whitespace enforcement rule in the style checker to save code review time.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #6477 from rxin/whitespace-sql-core and squashes the following commits:
      
      ce6e369 [Reynold Xin] Fixed tests.
      6095fed [Reynold Xin] [SPARK-7927] whitespace fixes for SQL core.
      ff44c711
    • Xiangrui Meng's avatar
      [SPARK-7927] [MLLIB] Enforce whitespace for more tokens in style checker · 04616b1a
      Xiangrui Meng authored
      rxin
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #6481 from mengxr/mllib-scalastyle and squashes the following commits:
      
      3ca4d61 [Xiangrui Meng] revert scalastyle config
      30961ba [Xiangrui Meng] adjust spaces in mllib/test
      571b5c5 [Xiangrui Meng] fix spaces in mllib
      04616b1a
    • Takuya UESHIN's avatar
      [SPARK-7826] [CORE] Suppress extra calling getCacheLocs. · 9b692bfd
      Takuya UESHIN authored
There are too many extra calls to the `getCacheLocs` method in `DAGScheduler`, each of which involves Akka communication.
To improve `DAGScheduler` performance, this patch suppresses the extra calls.
      
In my application with over 1200 stages, the execution time dropped from 8.5 min to 3.8 min with my patch.
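A minimal sketch of the caching idea (names are illustrative; the real code lives in `DAGScheduler`): memoize the locations and skip the remote lookup entirely for uncached RDDs.

```python
NONE = "NONE"  # stand-in for StorageLevel.NONE

class Scheduler:
    def __init__(self, block_manager_lookup):
        self.cache_locs = {}
        self.lookups = 0
        self._lookup = block_manager_lookup

    def get_cache_locs(self, rdd_id, storage_level, num_partitions):
        if rdd_id not in self.cache_locs:
            if storage_level == NONE:
                # Uncached RDD: no need to ask the block manager at all.
                self.cache_locs[rdd_id] = [[] for _ in range(num_partitions)]
            else:
                self.lookups += 1
                self.cache_locs[rdd_id] = self._lookup(rdd_id, num_partitions)
        return self.cache_locs[rdd_id]

sched = Scheduler(lambda rdd_id, n: [["host-a"]] * n)
for _ in range(1000):                   # repeated stage submissions
    sched.get_cache_locs(1, "MEMORY_ONLY", 4)
    sched.get_cache_locs(2, NONE, 4)
print(sched.lookups)  # 1: a single remote lookup despite 2000 calls
```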
      
      Author: Takuya UESHIN <ueshin@happy-camper.st>
      
      Closes #6352 from ueshin/issues/SPARK-7826 and squashes the following commits:
      
      3d4d036 [Takuya UESHIN] Modify a test and the documentation.
      10b1b22 [Takuya UESHIN] Simplify the unit test.
      d858b59 [Takuya UESHIN] Move the storageLevel check inside the if (!cacheLocs.contains(rdd.id)) block.
      6f3125c [Takuya UESHIN] Fix scalastyle.
      b9c835c [Takuya UESHIN] Put the condition that checks if the RDD has uncached partition or not into variable for readability.
      f87f2ec [Takuya UESHIN] Get cached locations from block manager only if the storage level of the RDD is not StorageLevel.NONE.
      8248386 [Takuya UESHIN] Revert "Suppress extra calling getCacheLocs."
      a4d944a [Takuya UESHIN] Add an unit test.
      9a80fad [Takuya UESHIN] Suppress extra calling getCacheLocs.
      9b692bfd
    • Kay Ousterhout's avatar
      [SPARK-7933] Remove Patrick's username/pw from merge script · 66c49ed6
      Kay Ousterhout authored
      Looks like this was added by accident when pwendell merged a commit back in September: fe2b1d6a
      
      Author: Kay Ousterhout <kayousterhout@gmail.com>
      
      Closes #6485 from kayousterhout/SPARK-7933 and squashes the following commits:
      
      7c6164a [Kay Ousterhout] [SPARK-7933] Remove Patrick's username/pw from merge script
      66c49ed6
    • Reynold Xin's avatar
      [SPARK-7927] whitespace fixes for Hive and ThriftServer. · ee6a0e12
      Reynold Xin authored
      So we can enable a whitespace enforcement rule in the style checker to save code review time.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #6478 from rxin/whitespace-hive and squashes the following commits:
      
      e01b0e0 [Reynold Xin] Fixed tests.
      a3bba22 [Reynold Xin] [SPARK-7927] whitespace fixes for Hive and ThriftServer.
      ee6a0e12
    • Reynold Xin's avatar
      [SPARK-7927] whitespace fixes for streaming. · 3af0b313
      Reynold Xin authored
      So we can enable a whitespace enforcement rule in the style checker to save code review time.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #6475 from rxin/whitespace-streaming and squashes the following commits:
      
      810dae4 [Reynold Xin] Fixed tests.
      89068ad [Reynold Xin] [SPARK-7927] whitespace fixes for streaming.
      3af0b313
    • Xusen Yin's avatar
      [SPARK-7577] [ML] [DOC] add bucketizer doc · 1bd63e82
      Xusen Yin authored
      CC jkbradley
      
      Author: Xusen Yin <yinxusen@gmail.com>
      
      Closes #6451 from yinxusen/SPARK-7577 and squashes the following commits:
      
      e2dc32e [Xusen Yin] rename colums
      e350e49 [Xusen Yin] add all demos
      006ddf1 [Xusen Yin] add java test
      3238481 [Xusen Yin] add bucketizer
      1bd63e82
    • Yin Huai's avatar
      [SPARK-7853] [SQL] Fix HiveContext in Spark Shell · 572b62ca
      Yin Huai authored
      https://issues.apache.org/jira/browse/SPARK-7853
      
This fixes the problem introduced by my change in https://github.com/apache/spark/pull/6435, which caused HiveContext creation to fail in the Spark shell because of a class loader issue.
      
      Author: Yin Huai <yhuai@databricks.com>
      
      Closes #6459 from yhuai/SPARK-7853 and squashes the following commits:
      
      37ad33e [Yin Huai] Do not use hiveQlTable at all.
      47cdb6d [Yin Huai] Move hiveconf.set to the end of setConf.
      005649b [Yin Huai] Update comment.
      35d86f3 [Yin Huai] Access TTable directly to make sure Hive will not internally use any metastore utility functions.
      3737766 [Yin Huai] Recursively find all jars.
      572b62ca
    • Reynold Xin's avatar
      Remove SizeEstimator from o.a.spark package. · 0077af22
      Reynold Xin authored
      See comments on https://github.com/apache/spark/pull/3913
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #6471 from rxin/sizeestimator and squashes the following commits:
      
      c057095 [Reynold Xin] Fixed import.
      2da478b [Reynold Xin] Remove SizeEstimator from o.a.spark package.
      0077af22
    • Xiangrui Meng's avatar
      [SPARK-7198] [MLLIB] VectorAssembler should output ML attributes · 7859ab65
      Xiangrui Meng authored
      `VectorAssembler` should carry over ML attributes. For unknown attributes, we assume numeric values. This PR handles the following cases:
      
      1. DoubleType with ML attribute: carry over
      2. DoubleType without ML attribute: numeric value
      3. Scalar type: numeric value
      4. VectorType with all ML attributes: carry over and update names
5. VectorType with only the number of ML attributes known: assume all numeric
      6. VectorType without ML attributes: check the first row and get the number of attributes
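A simplified sketch of the case handling above (hypothetical helper and naming scheme, not the actual attribute API):

```python
# Decide the output attribute names contributed by one input column.
def output_attrs(col_name, col_type, ml_attrs=None, vector_size=None):
    if col_type == "double":
        # Cases 1-2: carry the attribute over, else assume numeric.
        return [ml_attrs[0] if ml_attrs else col_name]
    if col_type == "vector":
        if ml_attrs:
            # Case 4: carry over and update names.
            return ["%s_%s" % (col_name, a) for a in ml_attrs]
        # Cases 5-6: only a size is known, assume all numeric.
        return ["%s_%d" % (col_name, i) for i in range(vector_size)]
    return [col_name]  # case 3: scalar, numeric value

print(output_attrs("age", "double"))
print(output_attrs("features", "vector", ml_attrs=["x", "y"]))
print(output_attrs("raw", "vector", vector_size=3))
```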
      
      jkbradley
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #6452 from mengxr/SPARK-7198 and squashes the following commits:
      
      a9d2469 [Xiangrui Meng] add space
      facdb1f [Xiangrui Meng] VectorAssembler should output ML attributes
      7859ab65
    • Mike Dusenberry's avatar
      [DOCS] Fixing broken "IDE setup" link in the Building Spark documentation. · 3e312a5e
      Mike Dusenberry authored
      The location of the IDE setup information has changed, so this just updates the link on the Building Spark page.
      
      Author: Mike Dusenberry <dusenberrymw@gmail.com>
      
      Closes #6467 from dusenberrymw/Fix_Broken_Link_On_Building_Spark_Doc and squashes the following commits:
      
      75c533a [Mike Dusenberry] Fixing broken "IDE setup" link in the Building Spark documentation by pointing to new location.
      3e312a5e
    • Li Yao's avatar
      [MINOR] Fix the a minor bug in PageRank Example. · c771589c
      Li Yao authored
Fix the bug that passing only one argument causes an array-out-of-bounds exception in the PageRank example.
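A minimal sketch of the guard (hypothetical usage string): validate the argument count before indexing past the end of the argument array.

```python
# The example reads a second argument, so fewer than two arguments
# previously raised an out-of-bounds error; check up front instead.
def parse_args(argv):
    if len(argv) < 3:
        raise SystemExit("Usage: pagerank <file> <iterations>")
    return argv[1], int(argv[2])

print(parse_args(["pagerank", "links.txt", "10"]))
```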
      
      Author: Li Yao <hnkfliyao@gmail.com>
      
      Closes #6455 from lastland/patch-1 and squashes the following commits:
      
      de06128 [Li Yao] Fix the bug that entering only 1 arg will cause array out of bounds exception.
      c771589c
    • Xiangrui Meng's avatar
      [SPARK-7911] [MLLIB] A workaround for VectorUDT serialize (or deserialize)... · 530efe3e
      Xiangrui Meng authored
      [SPARK-7911] [MLLIB] A workaround for VectorUDT serialize (or deserialize) being called multiple times
      
      ~~A PythonUDT shouldn't be serialized into external Scala types in PythonRDD. I'm not sure whether this should fix one of the bugs related to SQL UDT/UDF in PySpark.~~
      
The fix above didn't work, so I added a workaround: if a Python UDF is applied to a Python UDT, this will pass the Python SQL types as inputs. Still incorrect, but at least it doesn't throw exceptions on the Scala side. davies harsha2010
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #6442 from mengxr/SPARK-7903 and squashes the following commits:
      
      c257d2a [Xiangrui Meng] add a workaround for VectorUDT
      530efe3e
    • zsxwing's avatar
      [SPARK-7895] [STREAMING] [EXAMPLES] Move Kafka examples from scala-2.10/src to src · 000df2f0
      zsxwing authored
      Since `spark-streaming-kafka` now is published for both Scala 2.10 and 2.11, we can move `KafkaWordCount` and `DirectKafkaWordCount` from `examples/scala-2.10/src/` to `examples/src/` so that they will appear in `spark-examples-***-jar` for Scala 2.11.
      
      Author: zsxwing <zsxwing@gmail.com>
      
      Closes #6436 from zsxwing/SPARK-7895 and squashes the following commits:
      
      c6052f1 [zsxwing] Update examples/pom.xml
      0bcfa87 [zsxwing] Fix the sleep time
      b9d1256 [zsxwing] Move Kafka examples from scala-2.10/src to src
      000df2f0
    • zuxqoj's avatar
      [SPARK-7782] fixed sort arrow issue · e838a25b
      zuxqoj authored
Current behaviour:
      In spark UI
      ![screen shot 2015-05-27 at 3 27 51 pm](https://cloud.githubusercontent.com/assets/3919211/7837541/47d330ba-04a5-11e5-89d1-e5b11da1a513.png)
      
      In YARN
      ![screen shot 2015-05-27 at 3](https://cloud.githubusercontent.com/assets/3919211/7837594/aebd1d36-04a5-11e5-8216-86e03c07d2bd.png)
      
      In jira
      ![screen shot 2015-05-27 at 3_2](https://cloud.githubusercontent.com/assets/3919211/7837616/d3fedce2-04a5-11e5-9e68-960ed54e5d83.png)
      
      Author: zuxqoj <sbshekhar@gmail.com>
      
      Closes #6437 from zuxqoj/SPARK-7782_PR and squashes the following commits:
      
      cd068b9 [zuxqoj] [SPARK-7782] fixed sort arrow issue
      e838a25b
    • Matt Wise's avatar
      [DOCS] Fix typo in documentation for Java UDF registration · 35410614
      Matt Wise authored
      This contribution is my original work and I license the work to the project under the project's open source license
      
      Author: Matt Wise <mwise@quixey.com>
      
      Closes #6447 from wisematthew/fix-typo-in-java-udf-registration-doc and squashes the following commits:
      
      e7ef5f7 [Matt Wise] Fix typo in documentation for Java UDF registration
      35410614
    • Sandy Ryza's avatar
      [SPARK-7896] Allow ChainedBuffer to store more than 2 GB · bd11b01e
      Sandy Ryza authored
      Author: Sandy Ryza <sandy@cloudera.com>
      
      Closes #6440 from sryza/sandy-spark-7896 and squashes the following commits:
      
      49d8a0d [Sandy Ryza] Fix bug introduced when reading over record boundaries
      6006856 [Sandy Ryza] Fix overflow issues
      006b4b2 [Sandy Ryza] Fix scalastyle by removing non ascii characters
      8b000ca [Sandy Ryza] Add ascii art to describe layout of data in metaBuffer
      f2053c0 [Sandy Ryza] Fix negative overflow issue
      0368c78 [Sandy Ryza] Initialize size as 0
      a5a4820 [Sandy Ryza] Use explicit types for all numbers in ChainedBuffer
      b7e0213 [Sandy Ryza] SPARK-7896. Allow ChainedBuffer to store more than 2 GB
      bd11b01e