  1. Feb 09, 2015
    • SPARK-4267 [YARN] Failing to launch jobs on Spark on YARN with Hadoop 2.5.0 or later · de780604
      Sean Owen authored
      Before passing to YARN, escape arguments in "extraJavaOptions" args, in order to correctly handle cases like -Dfoo="one two three". Also standardize how these args are handled and ensure that individual args are treated as stand-alone args, not one string.
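
      As a rough sketch of the idea (hypothetical helpers, not the exact functions in the patch): split the option string into stand-alone tokens, then shell-escape each token so values with spaces survive the trip through the shell.

      ```scala
      // Single-quote an argument for the shell; embedded single quotes become '\''.
      def escapeForShell(arg: String): String =
        "'" + arg.replace("'", "'\\''") + "'"

      // Each parsed token is escaped individually, not as one big string:
      Seq("-Dfoo=one two three", "-XX:+UseCompressedOops").map(escapeForShell)
      // => List('-Dfoo=one two three', '-XX:+UseCompressedOops')
      ```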
      
      vanzin andrewor14
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #4452 from srowen/SPARK-4267.2 and squashes the following commits:
      
      c8297d2 [Sean Owen] Before passing to YARN, escape arguments in "extraJavaOptions" args, in order to correctly handle cases like -Dfoo="one two three". Also standardize how these args are handled and ensure that individual args are treated as stand-alone args, not one string.
    • SPARK-2149. [MLLIB] Univariate kernel density estimation · 0793ee1b
      Sandy Ryza authored
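
      In spirit, the estimator averages a Gaussian kernel centered at each sample. A minimal sketch of the computation over an RDD (illustrative only, not the exact MLlib API this patch adds):

      ```scala
      import org.apache.spark.rdd.RDD

      // density(p) = (1/n) * sum_i N(p; x_i, h) for each evaluation point p.
      def kde(samples: RDD[Double], h: Double, points: Array[Double]): Array[Double] = {
        val n = samples.count()
        val norm = 1.0 / (math.sqrt(2.0 * math.Pi) * h)
        val sums = samples.aggregate(new Array[Double](points.length))(
          (acc, x) => {
            var i = 0
            while (i < points.length) {
              val z = (points(i) - x) / h
              acc(i) += norm * math.exp(-0.5 * z * z)
              i += 1
            }
            acc
          },
          (a, b) => { var i = 0; while (i < a.length) { a(i) += b(i); i += 1 }; a }
        )
        sums.map(_ / n)
      }
      ```
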
      Author: Sandy Ryza <sandy@cloudera.com>
      
      Closes #1093 from sryza/sandy-spark-2149 and squashes the following commits:
      
      5f06b33 [Sandy Ryza] More review comments
      0f73060 [Sandy Ryza] Respond to Sean's review comments
      0dfa005 [Sandy Ryza] SPARK-2149. Univariate kernel density estimation
    • [SPARK-5473] [EC2] Expose SSH failures after status checks pass · 4dfe180f
      Nicholas Chammas authored
      If there is some fatal problem with launching a cluster, `spark-ec2` just hangs without giving the user useful feedback on what the problem is.
      
      This PR exposes the output of the SSH calls to the user if the SSH test fails during cluster launch for any reason but the instance status checks are all green. It also removes the growing trail of dots while waiting in favor of a fixed 3 dots.
      
      For example:
      
      ```
      $ ./ec2/spark-ec2 -k key -i /incorrect/path/identity.pem --instance-type m3.medium --slaves 1 --zone us-east-1c launch "spark-test"
      Setting up security groups...
      Searching for existing cluster spark-test...
      Spark AMI: ami-35b1885c
      Launching instances...
      Launched 1 slaves in us-east-1c, regid = r-7dadd096
      Launched master in us-east-1c, regid = r-fcadd017
      Waiting for cluster to enter 'ssh-ready' state...
      Warning: SSH connection error. (This could be temporary.)
      Host: 127.0.0.1
      SSH return code: 255
      SSH output: Warning: Identity file /incorrect/path/identity.pem not accessible: No such file or directory.
      Warning: Permanently added '127.0.0.1' (RSA) to the list of known hosts.
      Permission denied (publickey).
      ```
      
      This should give users enough information when some unrecoverable error occurs during launch so they can know to abort the launch. This will help avoid situations like the ones reported [here on Stack Overflow](http://stackoverflow.com/q/28002443/) and [here on the user list](http://mail-archives.apache.org/mod_mbox/spark-user/201501.mbox/%3C1422323829398-21381.postn3.nabble.com%3E), where the users couldn't tell what the problem was because it was being hidden by `spark-ec2`.
      
      This is a usability improvement that should be backported to 1.2.
      
      Resolves [SPARK-5473](https://issues.apache.org/jira/browse/SPARK-5473).
      
      Author: Nicholas Chammas <nicholas.chammas@gmail.com>
      
      Closes #4262 from nchammas/expose-ssh-failure and squashes the following commits:
      
      8bda6ed [Nicholas Chammas] default to print SSH output
      2b92534 [Nicholas Chammas] show SSH output after status check pass
    • [SPARK-5539][MLLIB] LDA guide · 855d12ac
      Xiangrui Meng authored
      This is the LDA user guide from jkbradley with Java and Scala code examples.
      
      Author: Xiangrui Meng <meng@databricks.com>
      Author: Joseph K. Bradley <joseph@databricks.com>
      
      Closes #4465 from mengxr/lda-guide and squashes the following commits:
      
      6dcb7d1 [Xiangrui Meng] update java example in the user guide
      76169ff [Xiangrui Meng] update java example
      36c3ae2 [Xiangrui Meng] Merge remote-tracking branch 'apache/master' into lda-guide
      c2a1efe [Joseph K. Bradley] Added LDA programming guide, plus Java example (which is in the guide and probably should be removed).
    • [SPARK-5472][SQL] Fix Scala code style · 4575c564
      Hung Lin authored
      Fix Scala code style.
      
      Author: Hung Lin <hung@zoomdata.com>
      
      Closes #4464 from hunglin/SPARK-5472 and squashes the following commits:
      
      ef7a3b3 [Hung Lin] SPARK-5472: fix scala style
  2. Feb 08, 2015
    • SPARK-4405 [MLLIB] Matrices.* construction methods should check for rows x cols overflow · 4396dfb3
      Sean Owen authored
      Check that the size of a dense matrix's backing array does not exceed Int.MaxValue in the Matrices.* methods. jkbradley this should be an easy one. Review and/or merge as you see fit.
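
      The guard presumably looks something like this (a sketch, not the verbatim patch):

      ```scala
      // Widen to Long before multiplying so rows x cols cannot silently overflow
      // Int, then refuse to allocate an impossible backing array.
      def checkMatrixSize(numRows: Int, numCols: Int): Unit =
        require(numRows.toLong * numCols <= Int.MaxValue,
          s"$numRows x $numCols dense matrix is too large to allocate")
      ```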
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #4461 from srowen/SPARK-4405 and squashes the following commits:
      
      c67574e [Sean Owen] Check that size of dense matrix array is not beyond Int.MaxValue in Matrices.* methods
    • [SPARK-5660][MLLIB] Make Matrix apply public · c1716118
      Joseph K. Bradley authored
      This is #4447 with `override`.
      
      Closes #4447
      
      Author: Joseph K. Bradley <joseph@databricks.com>
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #4462 from mengxr/SPARK-5660 and squashes the following commits:
      
      f82c8d6 [Xiangrui Meng] add override to matrix.apply
      91cedde [Joseph K. Bradley] made matrix apply public
    • [SPARK-5643][SQL] Add a show method to print the content of a DataFrame in tabular format. · a052ed42
      Reynold Xin authored
      An example:
      ```
      year  month AVG('Adj Close) MAX('Adj Close)
      1980  12    0.503218        0.595103
      1981  01    0.523289        0.570307
      1982  02    0.436504        0.475256
      1983  03    0.410516        0.442194
      1984  04    0.450090        0.483521
      ```
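
      Usage is a single call (assuming an existing DataFrame `df`):

      ```scala
      df.show()    // print the first rows of df in tabular format
      df.show(100) // print an explicit number of rows
      ```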
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #4416 from rxin/SPARK-5643 and squashes the following commits:
      
      d0e0d6e [Reynold Xin] [SQL] Minor update to data source and statistics documentation.
      269da83 [Reynold Xin] Updated isLocal comment.
      2cf3c27 [Reynold Xin] Moved logic into optimizer.
      1a04d8b [Reynold Xin] [SPARK-5643][SQL] Add a show method to print the content of a DataFrame in columnar format.
    • SPARK-5665 [DOCS] Update netlib-java documentation · 56aff4bd
      Sam Halliday authored
      I am the author of netlib-java and I found this documentation to be out of date. Some main points:
      
      1. Breeze has not depended on jBLAS for some time
      2. netlib-java provides a pure JVM implementation as the fallback (the original docs did not appear to be aware of this, claiming that gfortran was necessary)
      3. The licensing issue is not just about LGPL: optimised natives have proprietary licenses. Building with the LGPL flag turned on really doesn't help you get past this.
      4. I really think it's best to direct people to my detailed setup guide instead of trying to compress it into one sentence. It is different for each architecture, each OS, and for each backend.
      
      I hope this helps to clear things up :smile:
      
      Author: Sam Halliday <sam.halliday@Gmail.com>
      Author: Sam Halliday <sam.halliday@gmail.com>
      
      Closes #4448 from fommil/patch-1 and squashes the following commits:
      
      18cda11 [Sam Halliday] remove link to skillsmatters at request of @mengxr
      a35e4a9 [Sam Halliday] reword netlib-java/breeze docs
    • [SPARK-5598][MLLIB] model save/load for ALS · 5c299c58
      Xiangrui Meng authored
      following #4233. jkbradley
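
      A hedged usage sketch (the path and hyperparameters are placeholders):

      ```scala
      import org.apache.spark.SparkContext
      import org.apache.spark.mllib.recommendation.{ALS, MatrixFactorizationModel, Rating}
      import org.apache.spark.rdd.RDD

      // Train, persist, and reload a matrix factorization model.
      def roundTrip(sc: SparkContext, ratings: RDD[Rating]): MatrixFactorizationModel = {
        val model = ALS.train(ratings, rank = 10, iterations = 10)
        model.save(sc, "hdfs:///tmp/alsModel")
        MatrixFactorizationModel.load(sc, "hdfs:///tmp/alsModel")
      }
      ```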
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #4422 from mengxr/SPARK-5598 and squashes the following commits:
      
      a059394 [Xiangrui Meng] SaveLoad not extending Loader
      14b7ea6 [Xiangrui Meng] address comments
      f487cb2 [Xiangrui Meng] add unit tests
      62fc43c [Xiangrui Meng] implement save/load for MFM
    • [SQL] Set sessionState in QueryExecution. · 804949d5
      Yin Huai authored
      This PR sets the SessionState in HiveContext's QueryExecution, so we can make sure that SessionState.get returns the SessionState every time.
      
      Author: Yin Huai <yhuai@databricks.com>
      
      Closes #4445 from yhuai/setSessionState and squashes the following commits:
      
      769c9f1 [Yin Huai] Remove unused import.
      439f329 [Yin Huai] Try again.
      427a0c9 [Yin Huai] Set SessionState everytime when we create a QueryExecution in HiveContext.
      a3b7793 [Yin Huai] Set sessionState when dealing with CreateTableAsSelect.
    • [SPARK-3039] [BUILD] Spark assembly for new hadoop API (hadoop 2) contains avro-mapred for hadoop 1 API · 75fdccca
      medale authored
      The issue had been marked as resolved but did not work for at least some builds, due to version conflicts: avro-mapred-1.7.5.jar was used instead of avro-mapred-1.7.6-hadoop2.jar (the correct version) when building for hadoop2.
      
      In sql/hive/pom.xml, org.spark-project.hive:hive-exec depends on avro-mapred 1.7.5:
      
      Building Spark Project Hive 1.2.0
      [INFO] ------------------------------------------------------------------------
      [INFO]
      [INFO] --- maven-dependency-plugin:2.4:tree (default-cli) @ spark-hive_2.10 ---
      [INFO] org.apache.spark:spark-hive_2.10:jar:1.2.0
      [INFO] +- org.spark-project.hive:hive-exec:jar:0.13.1a:compile
      [INFO] |  \- org.apache.avro:avro-mapred:jar:1.7.5:compile
      [INFO] \- org.apache.avro:avro-mapred:jar:hadoop2:1.7.6:compile
      [INFO]
      
      Excluding this dependency allows the explicitly listed avro-mapred dependency
      to be picked up.
      
      Author: medale <medale94@yahoo.com>
      
      Closes #4315 from medale/avro-hadoop2 and squashes the following commits:
      
      1ab4fa3 [medale] Merge branch 'master' into avro-hadoop2
      9d85e2a [medale] Merge remote-tracking branch 'upstream/master' into avro-hadoop2
      51b9c2a [medale] [SPARK-3039] [BUILD] Spark assembly for new hadoop API (hadoop 2) contains avro-mapred for hadoop 1 API had been marked as resolved but did not work for at least some builds due to version conflicts using avro-mapred-1.7.5.jar and avro-mapred-1.7.6-hadoop2.jar (the correct version) when building for hadoop2.
    • [SPARK-5672][Web UI] Don't return `ERROR 500` when have missing args · 23a99dab
      Kirill A. Korinskiy authored
      The Spark web UI returns `HTTP ERROR 500` when a GET argument is missing.
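
      The shape of the fix, as a hypothetical sketch (not the actual patch): check the parameter up front and answer 400 Bad Request instead of letting a missing value blow up into a 500.

      ```scala
      import javax.servlet.http.{HttpServletRequest, HttpServletResponse}

      // Return the parameter if present; otherwise reply 400 and return None.
      def requireParam(req: HttpServletRequest, res: HttpServletResponse,
                       name: String): Option[String] =
        Option(req.getParameter(name)) match {
          case some @ Some(_) => some
          case None =>
            res.sendError(HttpServletResponse.SC_BAD_REQUEST, s"Missing parameter: $name")
            None
        }
      ```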
      
      Author: Kirill A. Korinskiy <catap@catap.ru>
      
      Closes #4239 from catap/ui_500 and squashes the following commits:
      
      520e180 [Kirill A. Korinskiy] [SPARK-5672][Web UI] Return `HTTP ERROR 400` when have missing args
    • [SPARK-5656] Fail gracefully for large values of k and/or n that will exceed max int. · 48783136
      mbittmann authored
      
      Large values of k and/or n in EigenValueDecomposition.symmetricEigs will result in initializing an array with a size larger than Integer.MAX_VALUE in the following: `var v = new Array[Double](n * ncv)`
      
      Author: mbittmann <mbittmann@gmail.com>
      Author: bittmannm <mark.bittmann@agilex.com>
      
      Closes #4433 from mbittmann/master and squashes the following commits:
      
      ee56e05 [mbittmann] [SPARK-5656] Combine checks into simple message
      e49cbbb [mbittmann] [SPARK-5656] Simply error message
      860836b [mbittmann] Array size check updates based on code review
      a604816 [bittmannm] [SPARK-5656] Fail gracefully for large values of k and/or n that will exceed max int.
    • [SPARK-5366][EC2] Check the mode of private key · 6fb141e2
      liuchang0812 authored
      Check the mode of the private key file.
      
      Author: liuchang0812 <liuchang0812@gmail.com>
      
      Closes #4162 from Liuchang0812/ec2-script and squashes the following commits:
      
      fc37355 [liuchang0812] quota file name
      01ed464 [liuchang0812] more output
      ce2a207 [liuchang0812] pep8
      f44efd2 [liuchang0812] move code to real_main
      8475a54 [liuchang0812] fix bug
      cd61a1a [liuchang0812] import stat
      c106cb2 [liuchang0812] fix trivis bug
      89c9953 [liuchang0812] more output about checking private key
      1177a90 [liuchang0812] remove commet
      41188ab [liuchang0812] check the mode of private key
  3. Feb 07, 2015
    • [SPARK-5671] Upgrade jets3t to 0.9.2 in hadoop-2.3 and 2.4 profiles · 5de14cc2
      Josh Rosen authored
      Upgrading from jets3t 0.9.0 to 0.9.2 fixes a dependency issue that was
      causing UISeleniumSuite to fail with ClassNotFoundExceptions when run with
      the hadoop-2.3 or hadoop-2.4 profiles.
      
      The jets3t release notes can be found at http://www.jets3t.org/RELEASE_NOTES.html
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #4454 from JoshRosen/SPARK-5671 and squashes the following commits:
      
      fa6cb3e [Josh Rosen] [SPARK-5671] Upgrade jets3t to 0.9.2 in hadoop-2.3 and 2.4 profiles
    • [SPARK-5108][BUILD] Jackson dependency management for Hadoop-2.6.0 support · ecbbed2e
      Zhan Zhang authored
      There is a dependency compatibility issue: hadoop-2.6.0 currently uses Jackson 1.9.13. Upgrade Spark to the same version to keep it consistent.
      
      Author: Zhan Zhang <zhazhan@gmail.com>
      
      Closes #3938 from zhzhan/spark5108 and squashes the following commits:
      
      0080a84 [Zhan Zhang] change to upgrade jackson version only in hadoop-2.x
      0b9bad6 [Zhan Zhang] Merge branch 'master' of https://github.com/apache/spark into spark5108
      917600a [Zhan Zhang] solve conflicts
      f7064d0 [Zhan Zhang] hadoop2.6 dependency management fix
      fc56b25 [Zhan Zhang] squash all commits
      3bf966c [Zhan Zhang] test
    • SPARK-5408: Use -XX:MaxPermSize specified by user instead of default in ExecutorRunner and DriverRunner · dd4cb33a
      Jacek Lewandowski authored
      
      Author: Jacek Lewandowski <lewandowski.jacek@gmail.com>
      
      Closes #4203 from jacek-lewandowski/SPARK-5408-1.3 and squashes the following commits:
      
      d913686 [Jacek Lewandowski] SPARK-5408: Use -XX:MaxPermSize specified by used instead of default in ExecutorRunner and DriverRunner
    • [BUILD] Add the ability to launch spark-shell from SBT. · e9a4fe12
      Michael Armbrust authored
      Now you can quickly launch the spark-shell without building an assembly. For quick development iteration, run `build/sbt ~sparkShell`; calling exit will relaunch the shell with any changes.
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #4438 from marmbrus/sparkShellSbt and squashes the following commits:
      
      b4e44fe [Michael Armbrust] [BUILD] Add the ability to launch spark-shell from SBT.
  4. Feb 06, 2015
    • [SPARK-5388] Provide a stable application submission gateway for standalone cluster mode · 1390e56f
      Andrew Or authored
      The goal is to provide a stable, REST-based application submission gateway that is not inherently based on Akka, which is unstable across versions. This PR targets standalone cluster mode, but is implemented in a general enough manner that it can potentially be extended to other modes in the future. Client mode is currently not included in the changes here because many more Akka messages are exchanged there.
      
      As of the changes here, the Master will advertise two ports, 7077 and 6066. We need to keep around the old one (7077) for client mode and older versions of Spark submit. However, all new versions of Spark submit will use the REST gateway (6066).
      
      By the way this includes ~700 lines of tests and ~200 lines of license.
      
      Author: Andrew Or <andrew@databricks.com>
      
      Closes #4216 from andrewor14/rest and squashes the following commits:
      
      8d7ce07 [Andrew Or] Merge branch 'master' of github.com:apache/spark into rest
      6f0c597 [Andrew Or] Use nullable fields for integer and boolean values
      dfe4bd7 [Andrew Or] Merge branch 'master' of github.com:apache/spark into rest
      b9e2a08 [Andrew Or] Minor comments
      02b5cea [Andrew Or] Fix tests
      d2b1ef8 [Andrew Or] Comment changes + minor code refactoring across the board
      9c82a36 [Andrew Or] Minor comment and wording updates
      b4695e7 [Andrew Or] Merge branch 'master' of github.com:apache/spark into rest
      c9a8ad7 [Andrew Or] Do not include appResource and mainClass as properties
      6fc7670 [Andrew Or] Report REST server response back to the user
      40e6095 [Andrew Or] Pass submit parameters through system properties
      cbd670b [Andrew Or] Include unknown fields, if any, in server response
      9fee16f [Andrew Or] Include server protocol version on mismatch
      09f873a [Andrew Or] Fix style
      8188e61 [Andrew Or] Upgrade Jackson from 2.3.0 to 2.4.4
      37538e0 [Andrew Or] Merge branch 'master' of github.com:apache/spark into rest
      9165ae8 [Andrew Or] Fall back to Akka if endpoint was not REST
      252d53c [Andrew Or] Clean up server error handling behavior further
      c643f64 [Andrew Or] Fix style
      bbbd329 [Andrew Or] Merge branch 'master' of github.com:apache/spark into rest
      792e112 [Andrew Or] Use specific HTTP response codes on error
      f98660b [Andrew Or] Version the protocol and include it in REST URL
      721819f [Andrew Or] Provide more REST-like interface for submit/kill/status
      581f7bf [Andrew Or] Merge branch 'master' of github.com:apache/spark into rest
      9e0d1af [Andrew Or] Move some classes around to reduce number of files (minor)
      42e5de4 [Andrew Or] Merge branch 'master' of github.com:apache/spark into rest
      1f1c03f [Andrew Or] Use Jackson's DefaultScalaModule to simplify messages
      9229433 [Andrew Or] Reduce duplicate naming in REST field
      ade28fd [Andrew Or] Clean up REST response output in Spark submit
      b2fef8b [Andrew Or] Abstract the success field to the general response
      6c57b4b [Andrew Or] Increase timeout in end-to-end tests
      bf696ff [Andrew Or] Add checks for enabling REST when using kill/status
      7ee6737 [Andrew Or] Merge branch 'master' of github.com:apache/spark into rest
      e2f7f5f [Andrew Or] Provide more safeguard against missing fields
      9581df7 [Andrew Or] Clean up uses of exceptions
      914fdff [Andrew Or] Merge branch 'master' of github.com:apache/spark into rest
      e2104e6 [Andrew Or] stable -> rest
      3db7379 [Andrew Or] Fix comments and name fields for better error messages
      8d43486 [Andrew Or] Replace SubmitRestProtocolAction with class name
      df90e8b [Andrew Or] Use Jackson for JSON de/serialization
      d7a1f9f [Andrew Or] Fix local cluster tests
      efa5e18 [Andrew Or] Merge branch 'master' of github.com:apache/spark into rest
      e42c131 [Andrew Or] Add end-to-end tests for standalone REST protocol
      837475b [Andrew Or] Show the REST port on the Master UI
      d8d3717 [Andrew Or] Use a daemon thread pool for REST server
      6568ca5 [Andrew Or] Merge branch 'master' of github.com:apache/spark into rest
      77774ba [Andrew Or] Minor fixes
      206cae4 [Andrew Or] Refactor and add tests for the REST protocol
      63c05b3 [Andrew Or] Remove MASTER as a field (minor)
      9e21b72 [Andrew Or] Action -> SparkSubmitAction (minor)
      51c5ca6 [Andrew Or] Distinguish client and server side Spark versions
      b44e103 [Andrew Or] Implement status requests + fix validation behavior
      120ab9d [Andrew Or] Support kill and request driver status through SparkSubmit
      544de1d [Andrew Or] Major clean ups in code and comments
      e958cae [Andrew Or] Supported nested values in messages
      484bd21 [Andrew Or] Specify an ordering for fields in SubmitDriverRequestMessage
      6ff088d [Andrew Or] Rename classes to generalize REST protocol
      af9d9cb [Andrew Or] Integrate REST protocol in standalone mode
      53e7c0e [Andrew Or] Initial client, server, and all the messages
    • SPARK-5403: Ignore UserKnownHostsFile in SSH calls · e772b4e4
      Grzegorz Dubicki authored
      See https://issues.apache.org/jira/browse/SPARK-5403
      
      Author: Grzegorz Dubicki <grzegorz.dubicki@gmail.com>
      
      Closes #4196 from grzegorz-dubicki/SPARK-5403 and squashes the following commits:
      
      a7d863f [Grzegorz Dubicki] Resolve start command hanging issue
    • [SPARK-5601][MLLIB] make streaming linear algorithms Java-friendly · 0e23ca9f
      Xiangrui Meng authored
      Overload `trainOn`, `predictOn`, and `predictOnValues`.
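
      On the Scala side the pattern looks like this (a sketch; the stream setup is assumed):

      ```scala
      import org.apache.spark.mllib.linalg.Vectors
      import org.apache.spark.mllib.regression.{LabeledPoint, StreamingLinearRegressionWithSGD}
      import org.apache.spark.streaming.dstream.DStream

      // Continuously train on one labeled stream and score another.
      def run(training: DStream[LabeledPoint], test: DStream[LabeledPoint]): Unit = {
        val model = new StreamingLinearRegressionWithSGD()
          .setInitialWeights(Vectors.zeros(3))
        model.trainOn(training)
        model.predictOnValues(test.map(lp => (lp.label, lp.features))).print()
      }
      ```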
      
      CC freeman-lab
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #4432 from mengxr/streaming-java and squashes the following commits:
      
      6a79b85 [Xiangrui Meng] add java test for streaming logistic regression
      2d7b357 [Xiangrui Meng] organize imports
      1f662b3 [Xiangrui Meng] make streaming linear algorithms Java-friendly
    • [SQL] [Minor] HiveParquetSuite was disabled by mistake, re-enable them · c4021401
      Cheng Lian authored
      
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #4440 from liancheng/parquet-oops and squashes the following commits:
      
      f21ede4 [Cheng Lian] HiveParquetSuite was disabled by mistake, re-enable them.
    • [SQL] Use TestSQLContext in Java tests · 76c4bf59
      Michael Armbrust authored
      Sometimes tests were failing due to the creation of multiple `SparkContext`s in a single JVM.
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #4441 from marmbrus/javaTests and squashes the following commits:
      
      657b1e0 [Michael Armbrust] [SQL] Use TestSQLContext in Java tests
    • [SPARK-4994][network]Cleanup removed executors' ShuffleInfo in yarn shuffle service · 61073f83
      lianhuiwang authored
      When an application completes, YARN's NodeManager can remove the application's local dirs, but the metadata for the completed application's executors is not removed. This forces the YARN shuffle service to use ever more memory to store executors' ShuffleInfo, so this metadata needs to be removed.
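
      Conceptually the cleanup looks like this (hypothetical names; a sketch, not the patch itself):

      ```scala
      import scala.collection.concurrent.TrieMap

      object ShuffleMetadataCleanup {
        case class AppExecId(appId: String, execId: String)

        // Shuffle metadata keyed by (application, executor).
        private val executors = TrieMap.empty[AppExecId, String]

        // On an application-removed event, drop every entry for that application.
        def applicationRemoved(appId: String): Unit =
          executors.keys.filter(_.appId == appId).foreach(executors.remove)
      }
      ```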
      
      Author: lianhuiwang <lianhuiwang09@gmail.com>
      
      Closes #3828 from lianhuiwang/SPARK-4994 and squashes the following commits:
      
      f3ba1d2 [lianhuiwang] Cleanup removed executors' ShuffleInfo
    • [SPARK-5444][Network]Add a retry to deal with the conflict port in netty server. · 2bda1c1d
      huangzhaowei authored
      If `spark.blockManager.port` conflicts with a port that is already in use, Spark throws an exception and exits.
      So add a retry to avoid this situation.
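
      A generic sketch of the retry shape (assuming a failed bind surfaces as a BindException; names are hypothetical):

      ```scala
      import java.net.BindException

      // Try successive ports until one binds or the retry budget runs out.
      def startWithRetry[T](startPort: Int, maxRetries: Int)(bind: Int => T): T = {
        var attempt = 0
        while (true) {
          try {
            return bind(startPort + attempt)
          } catch {
            case _: BindException if attempt < maxRetries =>
              attempt += 1 // port in use; try the next candidate
          }
        }
        throw new IllegalStateException("unreachable")
      }
      ```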
      
      Author: huangzhaowei <carlmartinmax@gmail.com>
      
      Closes #4240 from SaintBacchus/NettyPortConflict and squashes the following commits:
      
      cc926d2 [huangzhaowei] Add a retry to deal with the conflict port in netty server.
    • [SPARK-4874] [CORE] Collect record count metrics · dcd1e42d
      Kostas Sakellis authored
      Collects record counts for both Input/Output and Shuffle Metrics. For the input/output metrics, it increments the counter every time the iterators are accessed.
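
      The iterator hook can be pictured like this (hypothetical class, not the patch's exact code):

      ```scala
      // Wrap an iterator and fire a callback per record, so record counts are
      // taken at the same point where byte counts are already measured.
      class CountingIterator[T](underlying: Iterator[T], onRecord: () => Unit)
        extends Iterator[T] {
        override def hasNext: Boolean = underlying.hasNext
        override def next(): T = { onRecord(); underlying.next() }
      }
      ```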
      
      For shuffle on the write side, we count the metrics post aggregation (after a map side combine) and on the read side we count the metrics pre aggregation. This allows both the bytes read/written metrics and the records read/written to line up.
      
      For backwards compatibility, if we deserialize an older event that doesn't have record metrics, we set the metric to -1.
      
      Author: Kostas Sakellis <kostas@cloudera.com>
      
      Closes #4067 from ksakellis/kostas-spark-4874 and squashes the following commits:
      
      bd919be [Kostas Sakellis] Changed 'Records Read' in shuffleReadMetrics json output to 'Total Records Read'
      dad4d57 [Kostas Sakellis] Add a comment and check to BlockObjectWriter so that it cannot be reopend.
      6f236a1 [Kostas Sakellis] Renamed _recordsWritten in ShuffleWriteMetrics to be more consistent
      70620a0 [Kostas Sakellis] CR Feedback
      17faa3a [Kostas Sakellis] Removed AtomicLong in favour of using Long
      b6f9923 [Kostas Sakellis] Merge AfterNextInterceptingIterator with InterruptableIterator to save a function call
      46c8186 [Kostas Sakellis] Combined Bytes and # records into one column
      57551c1 [Kostas Sakellis] Conforms to SPARK-3288
      6cdb44e [Kostas Sakellis] Removed the generic InterceptingIterator and repalced it with specific implementation
      1aa273c [Kostas Sakellis] CR Feedback
      1bb78b1 [Kostas Sakellis] [SPARK-4874] [CORE] Collect record count metrics
    • [HOTFIX] Fix the maven build after adding sqlContext to spark-shell · 57961567
      Michael Armbrust authored
      Follow up to #4387 to fix the build break.
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #4443 from marmbrus/fixMaven and squashes the following commits:
      
      1eeba7d [Michael Armbrust] try again
      7f5fb15 [Michael Armbrust] [HOTFIX] Fix the maven build after adding sqlContext to spark-shell
    • [SPARK-5600] [core] Clean up FsHistoryProvider test, fix app sort order. · 5687bab8
      Marcelo Vanzin authored
      Clean up some test setup code to remove duplicate instantiation of the
      provider. Also make sure unfinished apps are sorted correctly.
      
      Author: Marcelo Vanzin <vanzin@cloudera.com>
      
      Closes #4370 from vanzin/SPARK-5600 and squashes the following commits:
      
      0d048d5 [Marcelo Vanzin] Cleanup test code a bit.
      2585119 [Marcelo Vanzin] Review feedback.
      8b97544 [Marcelo Vanzin] Merge branch 'master' into SPARK-5600
      be979e9 [Marcelo Vanzin] Merge branch 'master' into SPARK-5600
      298371c [Marcelo Vanzin] [SPARK-5600] [core] Clean up FsHistoryProvider test, fix app sort order.
    • SPARK-5613: Catch the ApplicationNotFoundException exception to avoid thread from getting killed on yarn restart · ca66159a
      Kashish Jain authored
      
      [SPARK-5613] Added a catch block for the ApplicationNotFoundException. Without this catch block, the thread gets killed when the exception occurs. The exception occurs when YARN restarts and tries to find the application id of a Spark job that was interrupted because YARN was stopped.
      See the stacktrace in the bug for more details.
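
      A hedged sketch of the guard (the helper name is hypothetical):

      ```scala
      import org.apache.hadoop.yarn.api.records.{ApplicationId, ApplicationReport}
      import org.apache.hadoop.yarn.client.api.YarnClient
      import org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException

      // Report "not found" instead of letting the exception kill the monitoring
      // thread when YARN no longer knows the application after a restart.
      def safeReport(client: YarnClient, appId: ApplicationId): Option[ApplicationReport] =
        try Some(client.getApplicationReport(appId))
        catch { case _: ApplicationNotFoundException => None }
      ```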
      
      Author: Kashish Jain <kashish.jain@guavus.com>
      
      Closes #4392 from kasjain/branch-1.2 and squashes the following commits:
      
      4831000 [Kashish Jain] SPARK-5613: Catch the ApplicationNotFoundException exception to avoid thread from getting killed on yarn restart.
    • SPARK-5633 pyspark saveAsTextFile support for compression codec · b3872e00
      Vladimir Vladimirov authored
      See https://issues.apache.org/jira/browse/SPARK-5633 for details
      
      Author: Vladimir Vladimirov <vladimir.vladimirov@magnetic.com>
      
      Closes #4403 from smartkiwi/master and squashes the following commits:
      
      94c014e [Vladimir Vladimirov] SPARK-5633 pyspark saveAsTextFile support for compression codec
    • [HOTFIX][MLLIB] fix a compilation error with java 6 · 65181b75
      Xiangrui Meng authored
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #4442 from mengxr/java6-fix and squashes the following commits:
      
      2098500 [Xiangrui Meng] fix a compilation error with java 6
    • [SPARK-4983] Insert waiting time before tagging EC2 instances · 0f3a3607
      GenTang authored
      The boto API doesn't support tagging EC2 instances in the same call that launches them.
      We add a five-second wait so EC2 has enough time to propagate the information so that
      the tagging can succeed.
      
      Author: GenTang <gen.tang86@gmail.com>
      Author: Gen TANG <gen.tang86@gmail.com>
      
      Closes #3986 from GenTang/spark-4983 and squashes the following commits:
      
      13e257d [Gen TANG] modification of comments
      47f06755 [GenTang] print the information
      ab7a931 [GenTang] solve the issus spark-4983 by inserting waiting time
      3179737 [GenTang] Revert "handling exceptions about adding tags to ec2"
      6a8b53b [GenTang] Revert "the improvement of exception handling"
      13e97a6 [GenTang] Revert "typo"
      63fd360 [GenTang] typo
      692fc2b [GenTang] the improvement of exception handling
      6adcf6d [GenTang] handling exceptions about adding tags to ec2
    • [SPARK-5586][Spark Shell][SQL] Make `sqlContext` available in spark shell · 3d3ecd77
      OopsOutOfMemory authored
      The result looks like this:
      ```
      15/02/05 13:41:22 INFO SparkILoop: Created spark context..
      Spark context available as sc.
      15/02/05 13:41:22 INFO SparkILoop: Created sql context..
      SQLContext available as sqlContext.
      
      scala> sq
      sql          sqlContext   sqlParser    sqrt
      ```
      
      Author: OopsOutOfMemory <victorshengli@126.com>
      
      Closes #4387 from OopsOutOfMemory/sqlContextInShell and squashes the following commits:
      
      c7f5203 [OopsOutOfMemory] auto-import sql() function
      e160697 [OopsOutOfMemory] Merge branch 'sqlContextInShell' of https://github.com/OopsOutOfMemory/spark into sqlContextInShell
      37c0a16 [OopsOutOfMemory] auto detect hive support
      a9c59d9 [OopsOutOfMemory] rename and reduce range of imports
      6b9e309 [OopsOutOfMemory] Merge branch 'master' into sqlContextInShell
      cae652f [OopsOutOfMemory] make sqlContext available in spark shell
    • [SPARK-5278][SQL] Introduce UnresolvedGetField and complete the check of ambiguous reference to fields · 4793c840
      Wenchen Fan authored
      
      When the `GetField` chain (`a.b.c.d...`) is interrupted by `GetItem`, as in `a.b[0].c.d...`, the check of ambiguous reference to fields is broken.
      The reason is that for something like `a.b[0].c.d`, we first parse it to `GetField(GetField(GetItem(Unresolved("a.b"), 0), "c"), "d")`. Then in `LogicalPlan#resolve`, we resolve `"a.b"` and build a `GetField` chain from the bottom (the relation). But for the two outer `GetField`s, we have to resolve them in `Analyzer` or do it lazily in `GetField`: check the data type of the child, search for the needed field, etc., which is similar to what we have done in `LogicalPlan#resolve`.
      So in this PR, the fix just copies the same logic from `LogicalPlan#resolve` into `Analyzer`, which is simple and quick, but I do suggest introducing `UnresolvedGetField` as I explained in https://github.com/apache/spark/pull/2405.
      
      Author: Wenchen Fan <cloud0fan@outlook.com>
      
      Closes #4068 from cloud-fan/simple and squashes the following commits:
      
      a6857b5 [Wenchen Fan] fix import order
      8411c40 [Wenchen Fan] use UnresolvedGetField
    • [SQL][Minor] Remove cache keyword in SqlParser · bc363560
      wangfei authored
      Since the cache keyword is already defined in `SparkSQLParser`, and catalyst's `SqlParser` is a more general parser that should not cover keywords related to the underlying compute engine, remove the cache keyword from `SqlParser`.
      
      Author: wangfei <wangfei1@huawei.com>
      
      Closes #4393 from scwf/remove-cache-keyword and squashes the following commits:
      
      10ade16 [wangfei] remove cache keyword in sql parser
    • [SQL][HiveConsole][DOC] HiveConsole `correct hiveconsole imports` · b62c3524
      OopsOutOfMemory authored
      Sorry that PR #4330 had some mistakes.

      I've corrected them, so it works correctly now.
      
      Author: OopsOutOfMemory <victorshengli@126.com>
      
      Closes #4389 from OopsOutOfMemory/doc and squashes the following commits:
      
      843eed9 [OopsOutOfMemory] correct hiveconsole imports
    • [SPARK-5595][SPARK-5603][SQL] Add a rule to do PreInsert type casting and field renaming and invalidating in memory cache after INSERT · 3eccf29c
      Yin Huai authored
      
      This PR adds a rule to Analyzer that will add preinsert data type casting and field renaming to the select clause in an `INSERT INTO/OVERWRITE` statement. Also, with this change, we always invalidate our in-memory data cache after inserting into a BaseRelation.
      
      cc marmbrus liancheng
      
      Author: Yin Huai <yhuai@databricks.com>
      
      Closes #4373 from yhuai/insertFollowUp and squashes the following commits:
      
      08237a7 [Yin Huai] Merge remote-tracking branch 'upstream/master' into insertFollowUp
      316542e [Yin Huai] Doc update.
      c9ccfeb [Yin Huai] Revert a unnecessary change.
      84aecc4 [Yin Huai] Address comments.
      1951fe1 [Yin Huai] Merge remote-tracking branch 'upstream/master'
      c18da34 [Yin Huai] Invalidate cache after insert.
      727f21a [Yin Huai] Preinsert casting and renaming.
    • [SPARK-5324][SQL] Results of describe can't be queried · 0b7eb3f3
      OopsOutOfMemory authored
      Make the code below work.
      ```
      sql("DESCRIBE test").registerTempTable("describeTest")
      sql("SELECT * FROM describeTest").collect()
      ```
      
      Author: OopsOutOfMemory <victorshengli@126.com>
      Author: Sheng, Li <OopsOutOfMemory@users.noreply.github.com>
      
      Closes #4249 from OopsOutOfMemory/desc_query and squashes the following commits:
      
      6fee13d [OopsOutOfMemory] up-to-date
      e71430a [Sheng, Li] Update HiveOperatorQueryableSuite.scala
      3ba1058 [OopsOutOfMemory] change to default argument
      aac7226 [OopsOutOfMemory] Merge branch 'master' into desc_query
      68eb6dd [OopsOutOfMemory] Merge branch 'desc_query' of github.com:OopsOutOfMemory/spark into desc_query
      354ad71 [OopsOutOfMemory] query describe command
      d541a35 [OopsOutOfMemory] refine test suite
      e1da481 [OopsOutOfMemory] refine test suite
      a780539 [OopsOutOfMemory] Merge branch 'desc_query' of github.com:OopsOutOfMemory/spark into desc_query
      0015f82 [OopsOutOfMemory] code style
      dd0aaef [OopsOutOfMemory] code style
      c7d606d [OopsOutOfMemory] rename test suite
      75f2342 [OopsOutOfMemory] refine code and test suite
      f942c9b [OopsOutOfMemory] initial
      11559ae [OopsOutOfMemory] code style
      c5fdecf [OopsOutOfMemory] code style
      aeaea5f [OopsOutOfMemory] rename test suite
      ac2c3bb [OopsOutOfMemory] refine code and test suite
      544573e [OopsOutOfMemory] initial
    • [SPARK-5619][SQL] Support 'show roles' in HiveContext · a958d609
      q00251598 authored
      Author: q00251598 <qiyadong@huawei.com>
      
      Closes #4397 from watermen/SPARK-5619 and squashes the following commits:
      
      f819b6c [q00251598] Support show roles in HiveContext.