  1. Aug 27, 2014
    • Patrick Wendell
      8712653f
    • [SPARK-3235][SQL] Ensure in-memory tables don't always broadcast. · 7d2a7a91
      Michael Armbrust authored
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #2147 from marmbrus/inMemDefaultSize and squashes the following commits:
      
      5390360 [Michael Armbrust] Merge remote-tracking branch 'origin/master' into inMemDefaultSize
      14204d3 [Michael Armbrust] Set the context before creating SparkLogicalPlans.
      8da4414 [Michael Armbrust] Make sure we throw errors when leaf nodes fail to provide statistcs
      18ce029 [Michael Armbrust] Ensure in-memory tables don't always broadcast.
      7d2a7a91
    • [SPARK-3065][SQL] Add locale setting to fix results do not match for... · 65253502
      luogankun authored
      [SPARK-3065][SQL] Add locale setting to fix results do not match for udf_unix_timestamp format "yyyy MMM dd h:mm:ss a" run with not "America/Los_Angeles" TimeZone in HiveCompatibilitySuite
      
      Running the udf_unix_timestamp test case of org.apache.spark.sql.hive.execution.HiveCompatibilitySuite
      in a TimeZone other than "America/Los_Angeles" throws an error. [https://issues.apache.org/jira/browse/SPARK-3065]
      This adds a locale setting in the beforeAll and afterAll methods to fix the HiveCompatibilitySuite test case.
      
      Author: luogankun <luogankun@gmail.com>
      
      Closes #1968 from luogankun/SPARK-3065 and squashes the following commits:
      
      c167832 [luogankun] [SPARK-3065][SQL] Add Locale setting to HiveCompatibilitySuite
      0a25e3a [luogankun] [SPARK-3065][SQL] Add Locale setting to HiveCompatibilitySuite
      65253502
    • [SQL] [SPARK-3236] Reading Parquet tables from Metastore mangles location · cc275f4b
      Aaron Davidson authored
      Currently we do `relation.hiveQlTable.getDataLocation.getPath`, which returns the path-part of the URI (e.g., "s3n://my-bucket/my-path" => "/my-path"). We should do `relation.hiveQlTable.getDataLocation.toString` instead, as a URI's toString returns a faithful representation of the full URI, which can later be passed into a Hadoop Path.
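
      The difference is easy to reproduce with a plain `java.net.URI`; the snippet below only illustrates the behaviour described above (the bucket and path are made up), it is not the Spark code itself.

      ```
      import java.net.URI

      object UriPathDemo {
        def main(args: Array[String]): Unit = {
          val location = new URI("s3n://my-bucket/my-path")
          // getPath drops the scheme and authority, losing the bucket entirely.
          println(location.getPath)   // /my-path
          // toString keeps the full URI, which can later be handed to a Hadoop Path.
          println(location.toString)  // s3n://my-bucket/my-path
        }
      }
      ```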
      
      Author: Aaron Davidson <aaron@databricks.com>
      
      Closes #2150 from aarondav/parquet-location and squashes the following commits:
      
      459f72c [Aaron Davidson] [SQL] [SPARK-3236] Reading Parquet tables from Metastore mangles location
      cc275f4b
    • [SPARK-3252][SQL] Add missing condition for test · 28d41d62
      viirya authored
      According to the test message, both relations should be tested, so this adds the missing condition.
      
      Author: viirya <viirya@gmail.com>
      
      Closes #2159 from viirya/fix_test and squashes the following commits:
      
      b1c0f52 [viirya] add missing condition.
      28d41d62
    • [SPARK-3243] Don't use stale spark-driver.* system properties · 63a053ab
      Andrew Or authored
      If we set both `spark.driver.extraClassPath` and `--driver-class-path`, then the latter correctly overrides the former. However, the value of the system property `spark.driver.extraClassPath` still uses the former, which is actually not added to the class path. This may cause some confusion...
      
      Of course, this also affects other options (e.g. Java options, library path, memory...).
      
      Author: Andrew Or <andrewor14@gmail.com>
      
      Closes #2154 from andrewor14/driver-submit-configs-fix and squashes the following commits:
      
      17ec6fc [Andrew Or] Fix tests
      0140836 [Andrew Or] Don't forget spark.driver.memory
      e39d20f [Andrew Or] Also set spark.driver.extra* configs in client mode
      63a053ab
    • Spark-3213 Fixes issue with spark-ec2 not detecting slaves created with "Launch More like this" · 7faf755a
      Vida Ha authored
      ... copy the spark_cluster_tag from spot instance requests over to the instances.
      
      Author: Vida Ha <vida@databricks.com>
      
      Closes #2163 from vidaha/vida/spark-3213 and squashes the following commits:
      
      5070a70 [Vida Ha] Spark-3214 Fix issue with spark-ec2 not detecting slaves created with 'Launch More Like This' and using Spot Requests
      7faf755a
    • [SPARK-2871] [PySpark] add RDD.lookup(key) · 4fa2fda8
      Davies Liu authored
      RDD.lookup(key)
      
              Return the list of values in the RDD for key `key`. This operation
              is done efficiently if the RDD has a known partitioner by only
              searching the partition that the key maps to.
      
              >>> l = range(1000)
              >>> rdd = sc.parallelize(zip(l, l), 10)
              >>> rdd.lookup(42)  # slow
              [42]
              >>> sorted = rdd.sortByKey()
              >>> sorted.lookup(42)  # fast
              [42]
      
      It also cleans up the code in RDD.py and fixes several bugs (related to preservesPartitioning).
      
      Author: Davies Liu <davies.liu@gmail.com>
      
      Closes #2093 from davies/lookup and squashes the following commits:
      
      1789cd4 [Davies Liu] `f` in foreach could be generator or not.
      2871b80 [Davies Liu] Merge branch 'master' into lookup
      c6390ea [Davies Liu] address all comments
      0f1bce8 [Davies Liu] add test case for lookup()
      be0e8ba [Davies Liu] fix preservesPartitioning
      eb1305d [Davies Liu] add RDD.lookup(key)
      4fa2fda8
    • [SPARK-3138][SQL] sqlContext.parquetFile should be able to take a single file as parameter · 48f42781
      chutium authored
      ```if (!fs.getFileStatus(path).isDir) throw Exception``` makes no sense after commit #1370
      
      Be careful if someone is working on SPARK-2551: make sure the new change passes the test case ```test("Read a parquet file instead of a directory")```
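
      For illustration, a minimal usage sketch of what this change enables (the file and directory paths are hypothetical):

      ```
      import org.apache.spark.{SparkConf, SparkContext}
      import org.apache.spark.sql.SQLContext

      object ParquetSingleFileExample {
        def main(args: Array[String]): Unit = {
          val sc = new SparkContext(new SparkConf().setAppName("parquet-single-file").setMaster("local"))
          val sqlContext = new SQLContext(sc)

          // Previously only a directory of part files was accepted; with this change
          // a single Parquet file can be passed as well.
          val byDirectory  = sqlContext.parquetFile("/data/events")
          val bySingleFile = sqlContext.parquetFile("/data/events/part-r-00001.parquet")

          println(bySingleFile.count())
          sc.stop()
        }
      }
      ```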
      
      Author: chutium <teng.qiu@gmail.com>
      
      Closes #2044 from chutium/parquet-singlefile and squashes the following commits:
      
      4ae477f [chutium] [SPARK-3138][SQL] sqlContext.parquetFile should be able to take a single file as parameter
      48f42781
    • [SPARK-3256] Added support for :cp <jar> that was broken in Scala 2.10.x for REPL · 191d7cf2
      Chip Senkbeil authored
      As seen with [SI-6502](https://issues.scala-lang.org/browse/SI-6502) of Scala, the _:cp_ command was broken in Scala 2.10.x. As the Spark shell is a friendly wrapper on top of the Scala REPL, it is also affected by this problem.
      
      My solution was to alter the internal classpath and invalidate any new entries. I also had to add the ability to add new entries to the parent classloader of the interpreter (SparkIMain's global).
      
      The advantage of this versus wiping the interpreter and replaying all of the commands is that you don't have to worry about rerunning heavy Spark-related commands (going to the cluster) or potentially reloading data that might have changed. Instead, you get to work from where you left off.
      
      Until this is fixed upstream for 2.10.x, I had to use reflection to alter the internal compiler classpath.
      
      The solution now looks like this:
      ![screen shot 2014-08-13 at 3 46 02 pm](https://cloud.githubusercontent.com/assets/2481802/3912625/f02b1440-232c-11e4-9bf6-bafb3e352d14.png)
      
      Author: Chip Senkbeil <rcsenkbe@us.ibm.com>
      
      Closes #1929 from rcsenkbeil/FixReplClasspathSupport and squashes the following commits:
      
      f420cbf [Chip Senkbeil] Added SparkContext.addJar calls to support executing code on remote clusters
      a826795 [Chip Senkbeil] Updated AddUrlsToClasspath to use 'new Run' suggestion over hackish compiler error
      2ff1d86 [Chip Senkbeil] Added compilation failure on symbols hack to get Scala classes to load correctly
      a220639 [Chip Senkbeil] Added support for :cp <jar> that was broken in Scala 2.10.x for REPL
      191d7cf2
    • [SPARK-3197] [SQL] Reduce the Expression tree object creations for aggregation function (min/max) · 4238c17d
      Cheng Hao authored
      The aggregation functions min/max in Catalyst create an expression tree for each single row; however, expression tree creation is currently quite expensive in a multithreaded environment, so the performance of min/max is very bad.
      Here is the benchmark I ran locally.
      
      Master | Previous Result (ms) | Current Result (ms)
      ------------ | ------------- | -------------
      local | 3645 | 3416
      local[6] | 3602 | 1002
      
      The Benchmark source code.
      ```
      case class Record(key: Int, value: Int)
      
      object TestHive2 extends HiveContext(new SparkContext("local[6]", "TestSQLContext", new SparkConf()))
      
      object DataPrepare extends App {
        import TestHive2._
      
        val rdd = sparkContext.parallelize((1 to 10000000).map(i => Record(i % 3000, i)), 12)
      
        runSqlHive("SHOW TABLES")
        runSqlHive("DROP TABLE if exists a")
        runSqlHive("DROP TABLE if exists result")
        rdd.registerAsTable("records")
      
        runSqlHive("""CREATE TABLE a (key INT, value INT)
                       | ROW FORMAT SERDE
                       | 'org.apache.hadoop.hive.serde2.columnar.LazyBinaryColumnarSerDe'
                       | STORED AS RCFILE
                     """.stripMargin)
        runSqlHive("""CREATE TABLE result (key INT, value INT)
                       | ROW FORMAT SERDE
                       | 'org.apache.hadoop.hive.serde2.columnar.LazyBinaryColumnarSerDe'
                       | STORED AS RCFILE
                     """.stripMargin)
      
        hql(s"""from records
                   | insert into table a
                   | select key, value
                 """.stripMargin)
      }
      
      object PerformanceTest extends App {
        import TestHive2._
      
        hql("SHOW TABLES")
        hql("set spark.sql.shuffle.partitions=12")
      
        val cmd = "select min(value), max(value) from a group by key"
      
        val results = ("Result1", benchmark(cmd)) ::
                      ("Result2", benchmark(cmd)) ::
                      ("Result3", benchmark(cmd)) :: Nil
        results.foreach { case (prompt, result) => {
            println(s"$prompt: took ${result._1} ms (${result._2} records)")
          }
        }
      
        def benchmark(cmd: String) = {
          val begin = System.currentTimeMillis()
          val count = hql(cmd).count
          val end = System.currentTimeMillis()
          ((end - begin), count)
        }
      }
      ```
      
      Author: Cheng Hao <hao.cheng@intel.com>
      
      Closes #2113 from chenghao-intel/aggregation_expression_optimization and squashes the following commits:
      
      db40395 [Cheng Hao] remove the transient and add val for the expression property
      d56167d [Cheng Hao] Reduce the Expressions creation
      4238c17d
    • [SPARK-3118][SQL]add "SHOW TBLPROPERTIES tblname;" and "SHOW COLUMNS (FROM|IN)... · 3b5eb708
      u0jing authored
      [SPARK-3118][SQL]add "SHOW TBLPROPERTIES tblname;" and "SHOW COLUMNS (FROM|IN) table_name [(FROM|IN) db_name]" support
      
      JIRA issue: [SPARK-3118] https://issues.apache.org/jira/browse/SPARK-3118
      
      e.g.:
      > SHOW TBLPROPERTIES test;
      numPartitions	0
      numFiles	1
      transient_lastDdlTime	1407923642
      numRows	0
      totalSize	82
      rawDataSize	0
      
      e.g.:
      > SHOW COLUMNS in test;
      OK
      Time taken: 0.304 seconds
      id
      stid
      bo
      
      Author: u0jing <u9jing@gmail.com>
      
      Closes #2034 from u0jing/spark-3118 and squashes the following commits:
      
      b231d87 [u0jing] add golden answer files
      35f4885 [u0jing] add 'show columns' and 'show tblproperties' support
      3b5eb708
    • SPARK-3259 - User data should be given to the master · 5ac4093c
      Allan Douglas R. de Oliveira authored
      Author: Allan Douglas R. de Oliveira <allan@chaordicsystems.com>
      
      Closes #2162 from douglaz/user_data_master and squashes the following commits:
      
      10d15f6 [Allan Douglas R. de Oliveira] Give user data also to the master
      5ac4093c
    • [SPARK-3170][CORE][BUG]:RDD info loss in "StorageTab" and "ExecutorTab" · d8298c46
      uncleGen authored
      A completed stage only needs to remove its own partitions that are no longer cached. However, "StorageTab" may lose some RDDs which are actually cached. Not only "StorageTab": "ExecutorTab" may also lose some RDD info which has been overwritten by the last RDD in the same task.
      1. "StorageTab": when multiple stages run simultaneously, a completed stage will remove RDD info belonging to other stages that are still running.
      2. "ExecutorTab": the TaskContext may lose some "updatedBlocks" info of RDDs in a dependency chain. Take the following example:
               val r1 = sc.parallelize(..).cache()
               val r2 = r1.map(...).cache()
               val n = r2.count()
      
      When r2 is counted, both r1 and r2 end up cached. So in CacheManager.getOrCompute, the TaskContext should contain the "updatedBlocks" of r1 and r2. Currently, "updatedBlocks" only contains the info of r2.
      
      Author: uncleGen <hustyugm@gmail.com>
      
      Closes #2131 from uncleGen/master_ui_fix and squashes the following commits:
      
      a6a8a0b [uncleGen] fix some coding style
      3a1bc15 [uncleGen] fix some error in unit test
      56ea488 [uncleGen] there's some line too long
      c82ba82 [uncleGen] Bug Fix: RDD info loss in "StorageTab" and "ExecutorTab"
      d8298c46
    • [SPARK-2933] [yarn] Refactor and cleanup Yarn AM code. · b92d823a
      Marcelo Vanzin authored
      This change modifies the Yarn module so that all the logic related
      to running the ApplicationMaster is localized. Instead of, previously,
      4 different classes with mostly identical code, now we have:
      
      - A single, shared ApplicationMaster class, which can operate both in
        client and cluster mode, and substitutes the old ApplicationMaster
        (for cluster mode) and ExecutorLauncher (for client mode).
      
      The benefit here is that all different execution modes for all supported
      yarn versions use the same shared code for monitoring executor allocation,
      setting up configuration, and monitoring the process's lifecycle.
      
      - A new YarnRMClient interface, which defines basic RM functionality needed
        by the ApplicationMaster. This interface has concrete implementations for
        each supported Yarn version.
      
      - A new YarnAllocator interface, which just abstracts the existing interface
        of the YarnAllocationHandler class. This is to avoid having to touch the
        allocator code too much in this change, although it might benefit from a
        similar effort in the future.
      
      The end result is much easier to understand code, with much less duplication,
      making it much easier to fix bugs, add features, and test everything knowing
      that all supported versions will behave the same.
      
      Author: Marcelo Vanzin <vanzin@cloudera.com>
      
      Closes #2020 from vanzin/SPARK-2933 and squashes the following commits:
      
      3bbf3e7 [Marcelo Vanzin] Merge branch 'master' into SPARK-2933
      ff389ed [Marcelo Vanzin] Do not interrupt reporter thread from within itself.
      3a8ed37 [Marcelo Vanzin] Remote stale comment.
      0f5142c [Marcelo Vanzin] Review feedback.
      41f8c8a [Marcelo Vanzin] Fix app status reporting.
      c0794be [Marcelo Vanzin] Correctly clean up staging directory.
      92770cc [Marcelo Vanzin] Merge branch 'master' into SPARK-2933
      ecaf332 [Marcelo Vanzin] Small fix to shutdown code.
      f02d3f8 [Marcelo Vanzin] Merge branch 'master' into SPARK-2933
      f581122 [Marcelo Vanzin] Review feedback.
      557fdeb [Marcelo Vanzin] Cleanup a couple more constants.
      be6068d [Marcelo Vanzin] Restore shutdown hook to clean up staging dir.
      5150993 [Marcelo Vanzin] Some more cleanup.
      b6289ab [Marcelo Vanzin] Move cluster/client code to separate methods.
      ecb23cd [Marcelo Vanzin] More trivial cleanup.
      34f1e63 [Marcelo Vanzin] Fix some questionable error handling.
      5657c7d [Marcelo Vanzin] Finish app if SparkContext initialization times out.
      0e4be3d [Marcelo Vanzin] Keep "ExecutorLauncher" as the main class for client-mode AM.
      91beabb [Marcelo Vanzin] Fix UI filter registration.
      8c72239 [Marcelo Vanzin] Trivial cleanups.
      99a52d5 [Marcelo Vanzin] Changes to the yarn-alpha project to use common AM code.
      848ca6d [Marcelo Vanzin] [SPARK-2933] [yarn] Refactor and cleanup Yarn AM code.
      b92d823a
    • [SPARK-3154][STREAMING] Make FlumePollingInputDStream shutdown cleaner. · 6f671d04
      Hari Shreedharan authored
      Currently a lot of errors get thrown from the Avro IPC layer when the dstream
      or sink is shut down. This PR cleans that up. Some refactoring is done in the
      receiver code to put all of the RPC code into a single Try and just recover
      from that. The sink code has also been cleaned up.
      
      Author: Hari Shreedharan <hshreedharan@apache.org>
      
      Closes #2065 from harishreedharan/clean-flume-shutdown and squashes the following commits:
      
      f93a07c [Hari Shreedharan] Formatting fixes.
      d7427cc [Hari Shreedharan] More fixes!
      a0a8852 [Hari Shreedharan] Fix race condition, hopefully! Minor other changes.
      4c9ed02 [Hari Shreedharan] Remove unneeded list in Callback handler. Other misc changes.
      8fee36f [Hari Shreedharan] Scala-library is required, else maven build fails. Also catch InterruptedException in TxnProcessor.
      445e700 [Hari Shreedharan] Merge remote-tracking branch 'asf/master' into clean-flume-shutdown
      87232e0 [Hari Shreedharan] Refactor Flume Input Stream. Clean up code, better error handling.
      9001d26 [Hari Shreedharan] Change log level to debug in TransactionProcessor#shutdown method
      e7b8d82 [Hari Shreedharan] Incorporate review feedback
      598efa7 [Hari Shreedharan] Clean up some exception handling code
      e1027c6 [Hari Shreedharan] Merge remote-tracking branch 'asf/master' into clean-flume-shutdown
      ed608c8 [Hari Shreedharan] [SPARK-3154][STREAMING] Make FlumePollingInputDStream shutdown cleaner.
      6f671d04
    • [SPARK-3227] [mllib] Added migration guide for v1.0 to v1.1 · 171a41cb
      Joseph K. Bradley authored
      The only updates are in DecisionTree.
      
      CC: mengxr
      
      Author: Joseph K. Bradley <joseph.kurata.bradley@gmail.com>
      
      Closes #2146 from jkbradley/mllib-migration and squashes the following commits:
      
      5a1f487 [Joseph K. Bradley] small edit to doc
      411d6d9 [Joseph K. Bradley] Added migration guide for v1.0 to v1.1.  The only updates are in DecisionTree.
      171a41cb
    • [SPARK-2830][MLLIB] doc update for 1.1 · 43dfc84f
      Xiangrui Meng authored
      1. renamed mllib-basics to mllib-data-types
      1. renamed mllib-stats to mllib-statistics
      1. moved random data generation to the bottom of mllib-stats
      1. updated toc accordingly
      
      atalwalkar
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #2151 from mengxr/mllib-doc-1.1 and squashes the following commits:
      
      0bd79f3 [Xiangrui Meng] add mllib-data-types
      b64a5d7 [Xiangrui Meng] update the content list of basis statistics in mllib-guide
      f625cc2 [Xiangrui Meng] move mllib-basics to mllib-data-types
      4d69250 [Xiangrui Meng] move random data generation to the bottom of statistics
      e64f3ce [Xiangrui Meng] move mllib-stats.md to mllib-statistics.md
      43dfc84f
    • [SPARK-3237][SQL] Fix parquet filters with UDFs · e1139dd6
      Michael Armbrust authored
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #2153 from marmbrus/parquetFilters and squashes the following commits:
      
      712731a [Michael Armbrust] Use closure serializer for sending filters.
      1e83f80 [Michael Armbrust] Clean udf functions.
      e1139dd6
    • [SPARK-3139] Made ContextCleaner to not block on shuffles · 3e2864e4
      Tathagata Das authored
      As a workaround for SPARK-3015, the ContextCleaner was made "blocking", that is, it cleaned items one-by-one. But shuffles can take a long time to be deleted. Given that the RC for 1.1 is imminent, this PR makes a narrow change in the context cleaner: it no longer waits for shuffle cleanups to complete. It also softens the error messages on failure to delete into milder warnings, as exceptions in the delete code path for one item do not really stop the actual functioning of the system.
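
      A minimal configuration sketch, assuming only what is stated here and in the commit notes below (the `spark.cleaner.referenceTracking.blocking` flag keeps controlling non-shuffle cleanup but no longer applies to shuffles):

      ```
      import org.apache.spark.{SparkConf, SparkContext}

      object CleanerConfigSketch {
        def main(args: Array[String]): Unit = {
          val conf = new SparkConf()
            .setAppName("cleaner-config")
            .setMaster("local[2]")
            // Controls whether reference-tracking cleanup blocks for non-shuffle items;
            // after this change, shuffle deletions are non-blocking regardless of this flag.
            .set("spark.cleaner.referenceTracking.blocking", "true")

          val sc = new SparkContext(conf)
          // ... run jobs; shuffle files are cleaned up without blocking the cleaner thread.
          sc.stop()
        }
      }
      ```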
      
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      
      Closes #2143 from tdas/cleaner-shuffle-fix and squashes the following commits:
      
      9c84202 [Tathagata Das] Restoring default blocking behavior in ContextCleanerSuite, and added docs to identify that spark.cleaner.referenceTracking.blocking does not control shuffle.
      2181329 [Tathagata Das] Mark shuffle cleanup as non-blocking.
      e337cc2 [Tathagata Das] Changed semantics based on PR comments.
      387b578 [Tathagata Das] Made ContextCleaner to not block on shuffles
      3e2864e4
    • HOTFIX: Minor typo in conf template · 9d65f271
      Patrick Wendell authored
      9d65f271
    • [SPARK-3167] Handle special driver configs in Windows · 7557c4cf
      Andrew Or authored
      This is an effort to bring the Windows scripts up to speed after recent splashing changes in #1845.
      
      Author: Andrew Or <andrewor14@gmail.com>
      
      Closes #2129 from andrewor14/windows-config and squashes the following commits:
      
      881a8f0 [Andrew Or] Add reference to Windows taskkill
      92e6047 [Andrew Or] Update a few comments (minor)
      22b1acd [Andrew Or] Fix style again (minor)
      afcffea [Andrew Or] Fix style (minor)
      72004c2 [Andrew Or] Actually respect --driver-java-options
      803218b [Andrew Or] Actually respect SPARK_*_CLASSPATH
      eeb34a0 [Andrew Or] Update outdated comment (minor)
      35caecc [Andrew Or] In Windows, actually kill Java processes on exit
      f97daa2 [Andrew Or] Fix Windows spark shell stdin issue
      83ebe60 [Andrew Or] Parse special driver configs in Windows (broken)
      7557c4cf
    • [SPARK-3224] FetchFailed reduce stages should only show up once in failed stages (in UI) · bf719056
      Reynold Xin authored
      This is a HOTFIX for 1.1.
      
      Author: Reynold Xin <rxin@apache.org>
      Author: Kay Ousterhout <kayousterhout@gmail.com>
      
      Closes #2127 from rxin/SPARK-3224 and squashes the following commits:
      
      effb1ce [Reynold Xin] Move log message.
      49282b3 [Reynold Xin] Kay's feedback.
      3f01847 [Reynold Xin] Merge pull request #2 from kayousterhout/SPARK-3224
      796d282 [Kay Ousterhout] Added unit test for SPARK-3224
      3d3d356 [Reynold Xin] Remove map output loc even for repeated FetchFaileds.
      1dd3eb5 [Reynold Xin] [SPARK-3224] FetchFailed reduce stages should only show up once in the failed stages UI.
      bf719056
  2. Aug 26, 2014
    • Manually close old pull requests · e70aff6c
      Matei Zaharia authored
      Closes #671, Closes #515
      e70aff6c
    • Manually close some old pull requests · ee91eb8c
      Matei Zaharia authored
      Closes #530, Closes #223, Closes #738, Closes #546
      ee91eb8c
    • Fix unclosed HTML tag in Yarn docs. · d8345471
      Josh Rosen authored
      d8345471
    • [SPARK-3240] Adding known issue for MESOS-1688 · be043e3f
      Martin Weindel authored
      When using Mesos with the fine-grained mode, a Spark job can run into a deadlock when allocatable memory on Mesos slaves is low. As a work-around, 32 MB (= Mesos MIN_MEM) are allocated for each task, to ensure Mesos makes new offers after task completion.
      From my perspective, it would be better to fix this problem in Mesos by dropping the constraint on memory for offers, but as a temporary solution this patch helps to avoid the deadlock on current Mesos versions.
      See [[MESOS-1688] No offers if no memory is allocatable](https://issues.apache.org/jira/browse/MESOS-1688) for details on this problem.
      
      Author: Martin Weindel <martin.weindel@gmail.com>
      
      Closes #1860 from MartinWeindel/master and squashes the following commits:
      
      5762030 [Martin Weindel] reverting work-around
      a6bf837 [Martin Weindel] added known issue for issue MESOS-1688
      d9d2ca6 [Martin Weindel] work around for problem with Mesos offering semantic (see [https://issues.apache.org/jira/browse/MESOS-1688])
      be043e3f
    • [SPARK-3036][SPARK-3037][SQL] Add MapType/ArrayType containing null value support to Parquet. · 727cb25b
      Takuya UESHIN authored
      JIRA:
      - https://issues.apache.org/jira/browse/SPARK-3036
      - https://issues.apache.org/jira/browse/SPARK-3037
      
      Currently this uses the following Parquet schema for `MapType` when `valueContainsNull` is `true`:
      
      ```
      message root {
        optional group a (MAP) {
          repeated group map (MAP_KEY_VALUE) {
            required int32 key;
            optional int32 value;
          }
        }
      }
      ```
      
      for `ArrayType` when `containsNull` is `true`:
      
      ```
      message root {
        optional group a (LIST) {
          repeated group bag {
            optional int32 array;
          }
        }
      }
      ```
      
      We have to think about compatibility with older versions of Spark, Hive, and the others I mentioned in the JIRA issues.
      
      Notice:
      This PR is based on #1963 and #1889.
      Please check them first.
      
      /cc marmbrus, yhuai
      
      Author: Takuya UESHIN <ueshin@happy-camper.st>
      
      Closes #2032 from ueshin/issues/SPARK-3036_3037 and squashes the following commits:
      
      4e8e9e7 [Takuya UESHIN] Add ArrayType containing null value support to Parquet.
      013c2ca [Takuya UESHIN] Add MapType containing null value support to Parquet.
      62989de [Takuya UESHIN] Merge branch 'issues/SPARK-2969' into issues/SPARK-3036_3037
      8e38b53 [Takuya UESHIN] Merge branch 'issues/SPARK-3063' into issues/SPARK-3036_3037
      727cb25b
    • [Docs] Run tests like in contributing guide · 73b3089b
      nchammas authored
      The Contributing to Spark guide [recommends](https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark#ContributingtoSpark-AutomatedTesting) running tests by calling `./dev/run-tests`. The README should, too.
      
      `./sbt/sbt test` does not cover Python tests or style tests.
      
      Author: nchammas <nicholas.chammas@gmail.com>
      
      Closes #2149 from nchammas/patch-2 and squashes the following commits:
      
      2b3b132 [nchammas] [Docs] Run tests like in contributing guide
      73b3089b
    • [SPARK-2964] [SQL] Remove duplicated code from spark-sql and start-thriftserver.sh · faeb9c0e
      Cheng Lian authored
      Author: Cheng Lian <lian.cs.zju@gmail.com>
      Author: Kousuke Saruta <sarutak@oss.nttdata.co.jp>
      
      Closes #1886 from sarutak/SPARK-2964 and squashes the following commits:
      
      8ef8751 [Kousuke Saruta] Merge branch 'master' of git://git.apache.org/spark into SPARK-2964
      26e7c95 [Kousuke Saruta] Revert "Shorten timeout to more reasonable value"
      ffb68fa [Kousuke Saruta] Modified spark-sql and start-thriftserver.sh to use bin/utils.sh
      8c6f658 [Kousuke Saruta] Merge branch 'spark-3026' of https://github.com/liancheng/spark into SPARK-2964
      81b43a8 [Cheng Lian] Shorten timeout to more reasonable value
      a89e66d [Cheng Lian] Fixed command line options quotation in scripts
      9c894d3 [Cheng Lian] Fixed bin/spark-sql -S option typo
      be4736b [Cheng Lian] Report better error message when running JDBC/CLI without hive-thriftserver profile enabled
      faeb9c0e
    • [SPARK-3225]Typo in script · 2ffd3290
      WangTao authored
      use_conf_dir => user_conf_dir in load-spark-env.sh.
      
      Author: WangTao <barneystinson@aliyun.com>
      
      Closes #1926 from WangTaoTheTonic/TypoInScript and squashes the following commits:
      
      0c104ad [WangTao] Typo in script
      2ffd3290
    • [SPARK-3073] [PySpark] use external sort in sortBy() and sortByKey() · f1e71d4c
      Davies Liu authored
      Use external sort to support sorting large datasets in the reduce stage.
      
      Author: Davies Liu <davies.liu@gmail.com>
      
      Closes #1978 from davies/sort and squashes the following commits:
      
      bbcd9ba [Davies Liu] check spilled bytes in tests
      b125d2f [Davies Liu] add test for external sort in rdd
      eae0176 [Davies Liu] choose different disks from different processes and instances
      1f075ed [Davies Liu] Merge branch 'master' into sort
      eb53ca6 [Davies Liu] Merge branch 'master' into sort
      644abaf [Davies Liu] add license in LICENSE
      19f7873 [Davies Liu] improve tests
      55602ee [Davies Liu] use external sort in sortBy() and sortByKey()
      f1e71d4c
    • [SPARK-3194][SQL] Add AttributeSet to fix bugs with invalid comparisons of AttributeReferences · c4787a36
      Michael Armbrust authored
      It is common to want to describe sets of attributes that are in various parts of a query plan.  However, the semantics of putting `AttributeReference` objects into a standard Scala `Set` result in subtle bugs when references differ cosmetically.  For example, with case insensitive resolution it is possible to have two references to the same attribute whose names are not equal.
      
      In this PR I introduce a new abstraction, an `AttributeSet`, which performs all comparisons using the globally unique `ExpressionId` instead of case class equality.  (There is already a related class, [`AttributeMap`](https://github.com/marmbrus/spark/blob/inMemStats/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/AttributeMap.scala#L32))  This new type of set is used to fix a bug in the optimizer where needed attributes were getting projected away underneath join operators.
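
      To illustrate the problem with simplified, made-up stand-ins (not Catalyst's actual classes): two references to the same attribute can differ only in name casing, so a standard `Set` based on case class equality treats them as distinct, while an id-based set does not.

      ```
      // Toy stand-ins for AttributeReference/AttributeSet, for illustration only.
      case class Attr(name: String, exprId: Long)

      class AttrSet(attrs: Seq[Attr]) {
        private val byId = attrs.map(a => a.exprId -> a).toMap
        def contains(a: Attr): Boolean = byId.contains(a.exprId)
      }

      object AttrSetDemo extends App {
        // With case-insensitive resolution, the same column can be referenced
        // under cosmetically different names but with the same expression id.
        val a1 = Attr("userId", exprId = 1L)
        val a2 = Attr("USERID", exprId = 1L)

        println(Set(a1).contains(a2))              // false: case class equality compares names
        println(new AttrSet(Seq(a1)).contains(a2)) // true: comparison by unique id
      }
      ```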
      
      I also took this opportunity to refactor the expression and query plan base classes.  In all but one instance the logic for computing the `references` of an `Expression` were the same.  Thus, I moved this logic into the base class.
      
      For query plans the semantics of the `references` method were ill defined (is it the references of the output? or those used by expression evaluation? or what?). As a result, this method wasn't really used very much. So, I removed it.
      
      TODO:
       - [x] Finish scala doc for `AttributeSet`
       - [x] Scan the code for other instances of `Set[Attribute]` and refactor them.
       - [x] Finish removing `references` from `QueryPlan`
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #2109 from marmbrus/attributeSets and squashes the following commits:
      
      1c0dae5 [Michael Armbrust] work on serialization bug.
      9ba868d [Michael Armbrust] Merge remote-tracking branch 'origin/master' into attributeSets
      3ae5288 [Michael Armbrust] review comments
      40ce7f6 [Michael Armbrust] style
      d577cc7 [Michael Armbrust] Scaladoc
      cae5d22 [Michael Armbrust] remove more references implementations
      d6e16be [Michael Armbrust] Remove more instances of "def references" and normal sets of attributes.
      fc26b49 [Michael Armbrust] Add AttributeSet class, remove references from Expression.
      c4787a36
    • [SPARK-2839][MLlib] Stats Toolkit documentation updated · 1208f72a
      Burak authored
      Documentation updated for the Statistics Toolkit of MLlib. mengxr atalwalkar
      
      https://issues.apache.org/jira/browse/SPARK-2839
      
      P.S. Accidentally closed #2123. New commits didn't show up after I reopened the PR. I've opened this instead and closed the old one.
      
      Author: Burak <brkyvz@gmail.com>
      
      Closes #2130 from brkyvz/StatsLib-Docs and squashes the following commits:
      
      a54a855 [Burak] [SPARK-2839][MLlib] Addressed comments
      bfc6896 [Burak] [SPARK-2839][MLlib] Added a more specific link to colStats() for pyspark
      213fe3f [Burak] [SPARK-2839][MLlib] Modifications made according to review
      fec4d9d [Burak] [SPARK-2830][MLlib] Stats Toolkit documentation updated
      1208f72a
    • [SPARK-3226][MLLIB] doc update for native libraries · adbd5c16
      Xiangrui Meng authored
      to mention `-Pnetlib-lgpl` option. atalwalkar
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #2128 from mengxr/mllib-native and squashes the following commits:
      
      4cbba57 [Xiangrui Meng] update mllib dependencies
      adbd5c16
    • [SPARK-3063][SQL] ExistingRdd should convert Map to catalyst Map. · 6b5584ef
      Takuya UESHIN authored
      Currently `ExistingRdd.convertToCatalyst` doesn't convert `Map` value.
      
      Author: Takuya UESHIN <ueshin@happy-camper.st>
      
      Closes #1963 from ueshin/issues/SPARK-3063 and squashes the following commits:
      
      3ba41f2 [Takuya UESHIN] Merge branch 'master' into issues/SPARK-3063
      4d7bae2 [Takuya UESHIN] Merge branch 'master' into issues/SPARK-3063
      9321379 [Takuya UESHIN] Merge branch 'master' into issues/SPARK-3063
      d8a900a [Takuya UESHIN] Make ExistingRdd.convertToCatalyst be able to convert Map value.
      6b5584ef
    • [SPARK-2969][SQL] Make ScalaReflection be able to handle... · 98c2bb0b
      Takuya UESHIN authored
      [SPARK-2969][SQL] Make ScalaReflection be able to handle ArrayType.containsNull and MapType.valueContainsNull.
      
      Make `ScalaReflection` be able to handle like:
      
      - `Seq[Int]` as `ArrayType(IntegerType, containsNull = false)`
      - `Seq[java.lang.Integer]` as `ArrayType(IntegerType, containsNull = true)`
      - `Map[Int, Long]` as `MapType(IntegerType, LongType, valueContainsNull = false)`
      - `Map[Int, java.lang.Long]` as `MapType(IntegerType, LongType, valueContainsNull = true)`
      
      Author: Takuya UESHIN <ueshin@happy-camper.st>
      
      Closes #1889 from ueshin/issues/SPARK-2969 and squashes the following commits:
      
      24f1c5c [Takuya UESHIN] Change the default value of ArrayType.containsNull to true in Python API.
      79f5b65 [Takuya UESHIN] Change the default value of ArrayType.containsNull to true in Java API.
      7cd1a7a [Takuya UESHIN] Fix json test failures.
      2cfb862 [Takuya UESHIN] Change the default value of ArrayType.containsNull to true.
      2f38e61 [Takuya UESHIN] Revert the default value of MapTypes.valueContainsNull.
      9fa02f5 [Takuya UESHIN] Fix a test failure.
      1a9a96b [Takuya UESHIN] Modify ScalaReflection to handle ArrayType.containsNull and MapType.valueContainsNull.
      98c2bb0b
    • [SPARK-2871] [PySpark] add histgram() API · 3cedc4f4
      Davies Liu authored
      RDD.histogram(buckets)
      
              Compute a histogram using the provided buckets. The buckets
              are all open to the right except for the last, which is closed.
              e.g. [1,10,20,50] means the buckets are [1,10) [10,20) [20,50],
              which means 1<=x<10, 10<=x<20, 20<=x<=50. On the input of 1
              and 50 we would have a histogram of 1,0,1.
      
              If your histogram is evenly spaced (e.g. [0, 10, 20, 30]),
              this can be switched from an O(log n) insertion to O(1) per
              element (where n = # buckets).
      
              Buckets must be sorted, must not contain any duplicates, and
              must have at least two elements.
      
              If `buckets` is a number, it will generate buckets which are
              evenly spaced between the minimum and maximum of the RDD. For
              example, if the min value is 0 and the max is 100, given buckets
              as 2, the resulting buckets will be [0,50) [50,100]. buckets must
              be at least 1. If the RDD contains infinity or NaN, it throws an
              exception. If the elements in the RDD do not vary (max == min),
              it always returns a single bucket.
      
              It returns a tuple of buckets and histogram.
      
              >>> rdd = sc.parallelize(range(51))
              >>> rdd.histogram(2)
              ([0, 25, 50], [25, 26])
              >>> rdd.histogram([0, 5, 25, 50])
              ([0, 5, 25, 50], [5, 20, 26])
              >>> rdd.histogram([0, 15, 30, 45, 60], True)
              ([0, 15, 30, 45, 60], [15, 15, 15, 6])
              >>> rdd = sc.parallelize(["ab", "ac", "b", "bd", "ef"])
              >>> rdd.histogram(("a", "b", "c"))
              (('a', 'b', 'c'), [2, 2])
      
      Closes #122, which is a duplicate.
      
      Author: Davies Liu <davies.liu@gmail.com>
      
      Closes #2091 from davies/histgram and squashes the following commits:
      
      a322f8a [Davies Liu] fix deprecation of e.message
      84e85fa [Davies Liu] remove evenBuckets, add more tests (including str)
      d9a0722 [Davies Liu] address comments
      0e18a2d [Davies Liu] add histgram() API
      3cedc4f4
    • [SPARK-3131][SQL] Allow user to set parquet compression codec for writing ParquetFile in SQLContext · 8856c3d8
      chutium authored
      There are 4 different compression codecs available for ```ParquetOutputFormat```.
      
      In Spark SQL, the codec was set as a hard-coded value in ```ParquetRelation.defaultCompression```.
      
      Original discussion:
      https://github.com/apache/spark/pull/195#discussion-diff-11002083
      
      I added a new config property in SQLConf to allow users to change this compression codec, and I used a similar short-name syntax as described in SPARK-2953 #1873 (https://github.com/apache/spark/pull/1873/files#diff-0)
      
      By the way, which codec should we use as the default? It was set to GZIP (https://github.com/apache/spark/pull/195/files#diff-4), but I think maybe we should change this to SNAPPY, since SNAPPY is already the default codec for shuffling in spark-core (SPARK-2469, #1415), and parquet-mr supports the Snappy codec natively (https://github.com/Parquet/parquet-mr/commit/e440108de57199c12d66801ca93804086e7f7632).
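
      A small usage sketch of the new setting; the property name and file path here are assumptions drawn from this description, so adjust them if the merged change uses different names.

      ```
      import org.apache.spark.{SparkConf, SparkContext}
      import org.apache.spark.sql.SQLContext

      object ParquetCompressionExample {
        def main(args: Array[String]): Unit = {
          val sc = new SparkContext(new SparkConf().setAppName("parquet-compression").setMaster("local"))
          val sqlContext = new SQLContext(sc)

          // Assumed property name: choose the codec by its short name before writing.
          sqlContext.setConf("spark.sql.parquet.compression.codec", "gzip")

          val json = sc.parallelize(Seq("""{"key": 1, "value": "a"}"""))
          sqlContext.jsonRDD(json).saveAsParquetFile("/tmp/compressed-output.parquet")
          sc.stop()
        }
      }
      ```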
      
      Author: chutium <teng.qiu@gmail.com>
      
      Closes #2039 from chutium/parquet-compression and squashes the following commits:
      
      2f44964 [chutium] [SPARK-3131][SQL] parquet compression default codec set to snappy, also in test suite
      e578e21 [chutium] [SPARK-3131][SQL] compression codec config property name and default codec set to snappy
      21235dc [chutium] [SPARK-3131][SQL] Allow user to set parquet compression codec for writing ParquetFile in SQLContext
      8856c3d8
    • [SPARK-2886] Use more specific actor system name than "spark" · b21ae5bb
      Andrew Or authored
      As of #1777 we log the name of the actor system when it binds to a port. The current name "spark" is super general and does not convey any meaning. For instance, the following line is taken from my driver log after setting `spark.driver.port` to 5001.
      ```
      14/08/13 19:33:29 INFO Remoting: Remoting started; listening on addresses:
      [akka.tcp://sparkandrews-mbp:5001]
      14/08/13 19:33:29 INFO Remoting: Remoting now listens on addresses:
      [akka.tcp://sparkandrews-mbp:5001]
      14/08/06 13:40:05 INFO Utils: Successfully started service 'spark' on port 5001.
      ```
      This commit renames this to "sparkDriver" and "sparkExecutor". The goal of this unambitious PR is simply to make the logged information more explicit without introducing any change in functionality.
      
      Author: Andrew Or <andrewor14@gmail.com>
      
      Closes #1810 from andrewor14/service-name and squashes the following commits:
      
      8c459ed [Andrew Or] Use a common variable for driver/executor actor system names
      3a92843 [Andrew Or] Change actor name to sparkDriver and sparkExecutor
      921363e [Andrew Or] Merge branch 'master' of github.com:apache/spark into service-name
      c8c6a62 [Andrew Or] Do not include hyphens in actor name
      1c1b42e [Andrew Or] Avoid spaces in akka system name
      f644b55 [Andrew Or] Use more specific service name
      b21ae5bb