  1. Aug 19, 2014
    • Vida Ha's avatar
      SPARK-2333 - spark_ec2 script should allow option for existing security group · 94053a7b
      Vida Ha authored
          - Uses the name tag to identify machines in a cluster.
          - Allows overriding the security group name so it doesn't need to coincide with the cluster name.
          - Outputs the request IDs of up to 10 pending spot instance requests.
      
      Author: Vida Ha <vida@databricks.com>
      
      Closes #1899 from vidaha/vida/ec2-reuse-security-group and squashes the following commits:
      
      c80d5c3 [Vida Ha] wrap retries in a try catch block
      b2989d5 [Vida Ha] SPARK-2333: spark_ec2 script should allow option for existing security group
      94053a7b
    • freeman's avatar
      [SPARK-3128][MLLIB] Use streaming test suite for StreamingLR · 31f0b071
      freeman authored
      Refactored tests for streaming linear regression to use existing  streaming test utilities. Summary of changes:
      - Made ``mllib`` depend on tests from ``streaming``
      - Rewrote accuracy and convergence tests to use ``setupStreams`` and ``runStreams``
      - Added new test for the accuracy of predictions generated by ``predictOnValue``
      
      These tests should run faster, be easier to extend/maintain, and provide a reference for new tests.
      
      mengxr tdas
      
      Author: freeman <the.freeman.lab@gmail.com>
      
      Closes #2037 from freeman-lab/streamingLR-predict-tests and squashes the following commits:
      
      e851ca7 [freeman] Fixed long lines
      50eb0bf [freeman] Refactored tests to use streaming test tools
      32c43c2 [freeman] Added test for prediction
      31f0b071
    • Kousuke Saruta's avatar
      [SPARK-3089] Fix meaningless error message in ConnectionManager · cbfc26ba
      Kousuke Saruta authored
      Author: Kousuke Saruta <sarutak@oss.nttdata.co.jp>
      
      Closes #2000 from sarutak/SPARK-3089 and squashes the following commits:
      
      02dfdea [Kousuke Saruta] Merge branch 'master' of git://git.apache.org/spark into SPARK-3089
      e759ce7 [Kousuke Saruta] Improved error message when closing SendingConnection
      cbfc26ba
    • Thomas Graves's avatar
      [SPARK-3072] YARN - Exit when reach max number failed executors · 7eb9cbc2
      Thomas Graves authored
      In some cases on Hadoop 2.x the Spark application master doesn't properly exit and hangs around for 10 minutes after it's really done. We should make sure it exits properly and stops the driver.
      
      Author: Thomas Graves <tgraves@apache.org>
      
      Closes #2022 from tgravescs/SPARK-3072 and squashes the following commits:
      
      665701d [Thomas Graves] Exit when reach max number failed executors
      7eb9cbc2
  2. Aug 18, 2014
    • Matt Forbes's avatar
      Fix typo in decision tree docs · cd0720ca
      Matt Forbes authored
      Candidate splits were inconsistent with the example.
      
      Author: Matt Forbes <matt@tellapart.com>
      
      Closes #1837 from emef/tree-doc and squashes the following commits:
      
      3be14a1 [Matt Forbes] Fix typo in decision tree docs
      cd0720ca
    • Reynold Xin's avatar
      [SPARK-3116] Remove the excessive lockings in TorrentBroadcast · 82577339
      Reynold Xin authored
      Author: Reynold Xin <rxin@apache.org>
      
      Closes #2028 from rxin/torrentBroadcast and squashes the following commits:
      
      92c62a5 [Reynold Xin] Revert the MEMORY_AND_DISK_SER changes.
      03a5221 [Reynold Xin] [SPARK-3116] Remove the excessive lockings in TorrentBroadcast
      82577339
    • Josh Rosen's avatar
      [SPARK-3114] [PySpark] Fix Python UDFs in Spark SQL. · 1f1819b2
      Josh Rosen authored
      This fixes SPARK-3114, an issue where we inadvertently broke Python UDFs in Spark SQL.
      
      This PR modifies the test runner script to always run the PySpark SQL tests, irrespective of whether Spark SQL itself has been modified. It also includes Davies' fix for the bug.
      
      Closes #2026.
      
      Author: Josh Rosen <joshrosen@apache.org>
      Author: Davies Liu <davies.liu@gmail.com>
      
      Closes #2027 from JoshRosen/pyspark-sql-fix and squashes the following commits:
      
      9af2708 [Davies Liu] bugfix: disable compression of command
      0d8d3a4 [Josh Rosen] Always run Python Spark SQL tests.
      1f1819b2
    • Xiangrui Meng's avatar
      [SPARK-3108][MLLIB] add predictOnValues to StreamingLR and fix predictOn · 217b5e91
      Xiangrui Meng authored
      In streaming it is useful to let users carry extra data along with each prediction, for example to monitor the prediction error. freeman-lab
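
      A minimal usage sketch of the new method (not this PR's code; assumes an existing SparkContext `sc`, and the socket sources and feature dimension are placeholders):

      ```scala
      import org.apache.spark.mllib.linalg.Vectors
      import org.apache.spark.mllib.regression.{LabeledPoint, StreamingLinearRegressionWithSGD}
      import org.apache.spark.streaming.{Seconds, StreamingContext}

      val ssc = new StreamingContext(sc, Seconds(1))
      val trainingData = ssc.socketTextStream("localhost", 9999).map(LabeledPoint.parse)
      val testData = ssc.socketTextStream("localhost", 9998).map(LabeledPoint.parse)

      val model = new StreamingLinearRegressionWithSGD().setInitialWeights(Vectors.zeros(3))
      model.trainOn(trainingData)

      // predictOnValues keeps the key (here the true label) next to each prediction,
      // which makes it easy to monitor prediction error over time.
      model.predictOnValues(testData.map(lp => (lp.label, lp.features))).print()

      ssc.start()
      ssc.awaitTermination()
      ```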
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #2023 from mengxr/predict-on-values and squashes the following commits:
      
      cac47b8 [Xiangrui Meng] add classtag
      2821b3b [Xiangrui Meng] use mapValues
      0925efa [Xiangrui Meng] add predictOnValues to StreamingLR and fix predictOn
      217b5e91
    • Joseph K. Bradley's avatar
      [SPARK-2850] [SPARK-2626] [mllib] MLlib stats examples + small fixes · c8b16ca0
      Joseph K. Bradley authored
      Added examples for statistical summarization:
      * Scala: StatisticalSummary.scala
      ** Tests: correlation, MultivariateOnlineSummarizer
      * python: statistical_summary.py
      ** Tests: correlation (since MultivariateOnlineSummarizer has no Python API)
      
      Added examples for random and sampled RDDs:
      * Scala: RandomAndSampledRDDs.scala
      * python: random_and_sampled_rdds.py
      * Both test:
      ** RandomRDDGenerators.normalRDD, normalVectorRDD
      ** RDD.sample, takeSample, sampleByKey
      
      Added sc.stop() to all examples.
      
      CorrelationSuite.scala
      * Added 1 test for RDDs with only 1 value
      
      RowMatrix.scala
      * numCols(): Added check for numRows = 0, with error message.
      * computeCovariance(): Added check for numRows <= 1, with error message.
      
      Python SparseVector (pyspark/mllib/linalg.py)
      * Added toDense() function
      
      python/run-tests script
      * Added stat.py (doc test)
      
      CC: mengxr dorx. The main changes were examples to show usage across APIs; a minimal sketch follows below.
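
      A minimal sketch of the kind of usage the statistics examples above cover (assumes an existing SparkContext `sc`; toy data, not the example files themselves):

      ```scala
      import org.apache.spark.mllib.linalg.Vectors
      import org.apache.spark.mllib.stat.{MultivariateOnlineSummarizer, Statistics}

      val data = sc.parallelize(Seq(
        Vectors.dense(1.0, 10.0),
        Vectors.dense(2.0, 21.0),
        Vectors.dense(3.0, 33.0)))

      // Column-wise Pearson correlation matrix.
      val corrMatrix = Statistics.corr(data, "pearson")

      // Online summary statistics (Scala-only, as noted above).
      val summary = data.aggregate(new MultivariateOnlineSummarizer)(
        (s, v) => s.add(v),
        (a, b) => a.merge(b))
      println(summary.mean)
      println(summary.variance)
      ```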
      
      Author: Joseph K. Bradley <joseph.kurata.bradley@gmail.com>
      
      Closes #1878 from jkbradley/mllib-stats-api-check and squashes the following commits:
      
      ea5c047 [Joseph K. Bradley] Merge remote-tracking branch 'upstream/master' into mllib-stats-api-check
      dafebe2 [Joseph K. Bradley] Bug fixes for examples SampledRDDs.scala and sampled_rdds.py: Check for division by 0 and for missing key in maps.
      8d1e555 [Joseph K. Bradley] Merge remote-tracking branch 'upstream/master' into mllib-stats-api-check
      60c72d9 [Joseph K. Bradley] Fixed stat.py doc test to work for Python versions printing nan or NaN.
      b20d90a [Joseph K. Bradley] Merge remote-tracking branch 'upstream/master' into mllib-stats-api-check
      4e5d15e [Joseph K. Bradley] Changed pyspark/mllib/stat.py doc tests to use NaN instead of nan.
      32173b7 [Joseph K. Bradley] Stats examples update.
      c8c20dc [Joseph K. Bradley] Merge remote-tracking branch 'upstream/master' into mllib-stats-api-check
      cf70b07 [Joseph K. Bradley] Merge remote-tracking branch 'upstream/master' into mllib-stats-api-check
      0b7cec3 [Joseph K. Bradley] Small updates based on code review.  Renamed statistical_summary.py to correlations.py
      ab48f6e [Joseph K. Bradley] RowMatrix.scala * numCols(): Added check for numRows = 0, with error message. * computeCovariance(): Added check for numRows <= 1, with error message.
      65e4ebc [Joseph K. Bradley] Merge remote-tracking branch 'upstream/master' into mllib-stats-api-check
      8195c78 [Joseph K. Bradley] Added examples for random and sampled RDDs: * Scala: RandomAndSampledRDDs.scala * python: random_and_sampled_rdds.py * Both test: ** RandomRDDGenerators.normalRDD, normalVectorRDD ** RDD.sample, takeSample, sampleByKey
      064985b [Joseph K. Bradley] Merge remote-tracking branch 'upstream/master' into mllib-stats-api-check
      ee918e9 [Joseph K. Bradley] Added examples for statistical summarization: * Scala: StatisticalSummary.scala ** Tests: correlation, MultivariateOnlineSummarizer * python: statistical_summary.py ** Tests: correlation (since MultivariateOnlineSummarizer has no Python API)
      c8b16ca0
    • Joseph K. Bradley's avatar
      [mllib] DecisionTree: treeAggregate + Python example bug fix · 115eeb30
      Joseph K. Bradley authored
      Small DecisionTree updates:
      * Changed the main DecisionTree aggregate to treeAggregate (see the sketch after this list).
      * Fixed bug in python example decision_tree_runner.py with missing argument (since categoricalFeaturesInfo is no longer an optional argument for trainClassifier).
      * Fixed same bug in python doc tests, and added tree.py to doc tests.
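
      A rough sketch of the aggregate-to-treeAggregate switch on a plain RDD (not the DecisionTree code; around this release treeAggregate comes from MLlib's RDDFunctions implicits, and it moved to core RDDs later):

      ```scala
      import org.apache.spark.mllib.rdd.RDDFunctions._

      // Assumes an existing SparkContext `sc`. treeAggregate combines partition results
      // in a multi-level tree instead of sending them all straight to the driver, which
      // matters when the per-partition aggregates are large (as in DecisionTree).
      val data = sc.parallelize(1 to 1000000, 100)
      val (sum, count) = data.treeAggregate((0L, 0L))(
        (acc, x) => (acc._1 + x, acc._2 + 1),
        (a, b) => (a._1 + b._1, a._2 + b._2),
        depth = 2)
      ```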
      
      CC: mengxr
      
      Author: Joseph K. Bradley <joseph.kurata.bradley@gmail.com>
      
      Closes #2015 from jkbradley/dt-opt2 and squashes the following commits:
      
      b5114fa [Joseph K. Bradley] Fixed python tree.py doc test (extra newline)
      8e4665d [Joseph K. Bradley] Added tree.py to python doc tests.  Fixed bug from missing categoricalFeaturesInfo argument.
      b7b2922 [Joseph K. Bradley] Fixed bug in python example decision_tree_runner.py with missing argument.  Changed main DecisionTree aggregate to treeAggregate.
      85bbc1f [Joseph K. Bradley] Merge remote-tracking branch 'upstream/master' into dt-opt2
      66d076f [Joseph K. Bradley] Merge remote-tracking branch 'upstream/master' into dt-opt2
      a0ed0da [Joseph K. Bradley] Renamed DTMetadata to DecisionTreeMetadata.  Small doc updates.
      3726d20 [Joseph K. Bradley] Small code improvements based on code review.
      ac0b9f8 [Joseph K. Bradley] Small updates based on code review. Main change: Now using << instead of math.pow.
      db0d773 [Joseph K. Bradley] scala style fix
      6a38f48 [Joseph K. Bradley] Added DTMetadata class for cleaner code
      931a3a7 [Joseph K. Bradley] Merge remote-tracking branch 'upstream/master' into dt-opt2
      797f68a [Joseph K. Bradley] Fixed DecisionTreeSuite bug for training second level.  Needed to update treePointToNodeIndex with groupShift.
      f40381c [Joseph K. Bradley] Merge branch 'dt-opt1' into dt-opt2
      5f2dec2 [Joseph K. Bradley] Fixed scalastyle issue in TreePoint
      6b5651e [Joseph K. Bradley] Updates based on code review.  1 major change: persisting to memory + disk, not just memory.
      2d2aaaf [Joseph K. Bradley] Merge remote-tracking branch 'upstream/master' into dt-opt1
      26d10dd [Joseph K. Bradley] Removed tree/model/Filter.scala since no longer used.  Removed debugging println calls in DecisionTree.scala.
      356daba [Joseph K. Bradley] Merge branch 'dt-opt1' into dt-opt2
      430d782 [Joseph K. Bradley] Added more debug info on binning error.  Added some docs.
      d036089 [Joseph K. Bradley] Print timing info to logDebug.
      e66f1b1 [Joseph K. Bradley] TreePoint * Updated doc * Made some methods private
      8464a6e [Joseph K. Bradley] Moved TimeTracker to tree/impl/ in its own file, and cleaned it up.  Removed debugging println calls from DecisionTree.  Made TreePoint extend Serialiable
      a87e08f [Joseph K. Bradley] Merge remote-tracking branch 'upstream/master' into dt-opt1
      c1565a5 [Joseph K. Bradley] Small DecisionTree updates: * Simplification: Updated calculateGainForSplit to take aggregates for a single (feature, split) pair. * Internal doc: findAggForOrderedFeatureClassification
      b914f3b [Joseph K. Bradley] DecisionTree optimization: eliminated filters + small changes
      b2ed1f3 [Joseph K. Bradley] Merge remote-tracking branch 'upstream/master' into dt-opt
      0f676e2 [Joseph K. Bradley] Optimizations + Bug fix for DecisionTree
      3211f02 [Joseph K. Bradley] Optimizing DecisionTree * Added TreePoint representation to avoid calling findBin multiple times. * (not working yet, but debugging)
      f61e9d2 [Joseph K. Bradley] Merge remote-tracking branch 'upstream/master' into dt-timing
      bcf874a [Joseph K. Bradley] Merge remote-tracking branch 'upstream/master' into dt-timing
      511ec85 [Joseph K. Bradley] Merge remote-tracking branch 'upstream/master' into dt-timing
      a95bc22 [Joseph K. Bradley] timing for DecisionTree internals
      115eeb30
    • Marcelo Vanzin's avatar
      [SPARK-2718] [yarn] Handle quotes and other characters in user args. · 6201b276
      Marcelo Vanzin authored
      Due to the way Yarn runs things through bash, normal quoting doesn't
      work as expected. This change applies the necessary voodoo to the user
      args to avoid issues with bash and special characters.
      
      The change also uncovered an issue with the event logger app name
      sanitizing code; it wasn't cleaning up all "bad" characters, so
      sometimes it would fail to create the log dirs. I just added some
      more bad character replacements.
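
      Not the patch itself, but a generic illustration of the kind of escaping an argument needs to survive another pass through bash:

      ```scala
      // Not Spark's actual helper. A generic sketch: wrap the argument in double
      // quotes and backslash-escape the characters bash treats specially inside them.
      def quoteForBash(arg: String): String =
        "\"" + arg.flatMap {
          case c @ ('"' | '$' | '`' | '\\') => "\\" + c
          case c                            => c.toString
        } + "\""

      // quoteForBash("""say "hi" to $USER""") == "\"say \\\"hi\\\" to \\$USER\""
      ```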
      
      Author: Marcelo Vanzin <vanzin@cloudera.com>
      
      Closes #1724 from vanzin/SPARK-2718 and squashes the following commits:
      
      cc84b89 [Marcelo Vanzin] Review feedback.
      c1a257a [Marcelo Vanzin] Add test for backslashes.
      55571d4 [Marcelo Vanzin] Unbreak yarn-client.
      515613d [Marcelo Vanzin] [SPARK-2718] [yarn] Handle quotes and other characters in user args.
      6201b276
    • Davies Liu's avatar
      [SPARK-3103] [PySpark] fix saveAsTextFile() with utf-8 · d1d0ee41
      Davies Liu authored
      Bug fix: it raised an exception when it tried to encode non-ASCII strings into unicode. It should only encode unicode as "utf-8".
      
      Author: Davies Liu <davies.liu@gmail.com>
      
      Closes #2018 from davies/fix_utf8 and squashes the following commits:
      
      4db7967 [Davies Liu] fix saveAsTextFile() with utf-8
      d1d0ee41
    • Reynold Xin's avatar
      3a5962f0
    • Marcelo Vanzin's avatar
      [SPARK-2169] Don't copy appName / basePath everywhere. · 66ade00f
      Marcelo Vanzin authored
      Instead of keeping copies in all pages, just reference the values
      kept in the base SparkUI instance (by making them available via
      getters).
      
      Author: Marcelo Vanzin <vanzin@cloudera.com>
      
      Closes #1252 from vanzin/SPARK-2169 and squashes the following commits:
      
      4412fc6 [Marcelo Vanzin] Simplify UIUtils.headerSparkPage signature.
      4e5d35a [Marcelo Vanzin] [SPARK-2169] Don't copy appName / basePath everywhere.
      66ade00f
    • Michael Armbrust's avatar
      [SPARK-2406][SQL] Initial support for using ParquetTableScan to read HiveMetaStore tables. · 3abd0c1c
      Michael Armbrust authored
      This PR adds an experimental flag `spark.sql.hive.convertMetastoreParquet` that, when true, causes the planner to detect tables that use Hive's Parquet SerDe and instead plan them using Spark SQL's native `ParquetTableScan`.
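
      For example (a sketch; assumes an existing SparkContext `sc`, and the table name is a placeholder):

      ```scala
      import org.apache.spark.sql.hive.HiveContext

      val hiveContext = new HiveContext(sc)
      hiveContext.setConf("spark.sql.hive.convertMetastoreParquet", "true")
      hiveContext.sql("SELECT COUNT(*) FROM parquet_backed_hive_table").collect()
      ```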
      
      Author: Michael Armbrust <michael@databricks.com>
      Author: Yin Huai <huai@cse.ohio-state.edu>
      
      Closes #1819 from marmbrus/parquetMetastore and squashes the following commits:
      
      1620079 [Michael Armbrust] Revert "remove hive parquet bundle"
      cc30430 [Michael Armbrust] Merge remote-tracking branch 'origin/master' into parquetMetastore
      4f3d54f [Michael Armbrust] fix style
      41ebc5f [Michael Armbrust] remove hive parquet bundle
      a43e0da [Michael Armbrust] Merge remote-tracking branch 'origin/master' into parquetMetastore
      4c4dc19 [Michael Armbrust] Fix bug with tree splicing.
      ebb267e [Michael Armbrust] include parquet hive to tests pass (Remove this later).
      c0d9b72 [Michael Armbrust] Avoid creating a HadoopRDD per partition.  Add dirty hacks to retrieve partition values from the InputSplit.
      8cdc93c [Michael Armbrust] Merge pull request #8 from yhuai/parquetMetastore
      a0baec7 [Yin Huai] Partitioning columns can be resolved.
      1161338 [Michael Armbrust] Add a test to make sure conversion is actually happening
      212d5cd [Michael Armbrust] Initial support for using ParquetTableScan to read HiveMetaStore tables.
      3abd0c1c
    • Matei Zaharia's avatar
      [SPARK-3091] [SQL] Add support for caching metadata on Parquet files · 9eb74c7d
      Matei Zaharia authored
      For larger Parquet files, reading the file footers (which is done in parallel on up to 5 threads) and HDFS block locations (which is serial) can take multiple seconds. We can add an option to cache this data within FilteringParquetInputFormat. Unfortunately ParquetInputFormat only caches footers within each instance of ParquetInputFormat, not across them.
      
      Note: this PR leaves this turned off by default for 1.1, but I believe it's safe to turn it on after. The keys in the hash maps are FileStatus objects that include a modification time, so this will work fine if files are modified. The location cache could become invalid if files have moved within HDFS, but that's rare so I just made it invalidate entries every 15 minutes.
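
      A sketch of turning the option on (assumes an existing SQLContext `sqlContext`; the key name is recalled from the Spark SQL configuration docs rather than taken from this diff):

      ```scala
      // The path and table name are placeholders.
      sqlContext.setConf("spark.sql.parquet.cacheMetadata", "true")
      val parquetData = sqlContext.parquetFile("hdfs:///path/to/large-table.parquet")
      parquetData.registerTempTable("large_table")
      ```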
      
      Author: Matei Zaharia <matei@databricks.com>
      
      Closes #2005 from mateiz/parquet-cache and squashes the following commits:
      
      dae8efe [Matei Zaharia] Bug fix
      c71e9ed [Matei Zaharia] Handle empty statuses directly
      22072b0 [Matei Zaharia] Use Guava caches and add a config option for caching metadata
      8fb56ce [Matei Zaharia] Cache file block locations too
      453bd21 [Matei Zaharia] Bug fix
      4094df6 [Matei Zaharia] First attempt at caching Parquet footers
      9eb74c7d
    • Patrick Wendell's avatar
      SPARK-3025 [SQL]: Allow JDBC clients to set a fair scheduler pool · 6bca8898
      Patrick Wendell authored
      This definitely needs review as I am not familiar with this part of Spark.
      I tested this locally and it did seem to work.
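
      A sketch of a JDBC client pinning its session to a pool (the connection URL and pool name are placeholders, and the property name is recalled from the Thrift-server scheduling docs rather than from this patch):

      ```scala
      import java.sql.DriverManager

      val conn = DriverManager.getConnection("jdbc:hive2://localhost:10000/default", "user", "")
      val stmt = conn.createStatement()
      stmt.execute("SET spark.sql.thriftserver.scheduler.pool=accounting")
      val rs = stmt.executeQuery("SELECT COUNT(*) FROM some_table")
      ```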
      
      Author: Patrick Wendell <pwendell@gmail.com>
      
      Closes #1937 from pwendell/scheduler and squashes the following commits:
      
      b858e33 [Patrick Wendell] SPARK-3025: Allow JDBC clients to set a fair scheduler pool
      6bca8898
    • Matei Zaharia's avatar
      [SPARK-3085] [SQL] Use compact data structures in SQL joins · 4bf3de71
      Matei Zaharia authored
      This reuses the CompactBuffer from Spark Core to save memory and pointer
      dereferences. I also tried AppendOnlyMap instead of java.util.HashMap
      but unfortunately that slows things down because it seems to do more
      equals() calls and the equals on GenericRow, and especially JoinedRow,
      is pretty expensive.
      
      Author: Matei Zaharia <matei@databricks.com>
      
      Closes #1993 from mateiz/spark-3085 and squashes the following commits:
      
      188221e [Matei Zaharia] Remove unneeded import
      5f903ee [Matei Zaharia] [SPARK-3085] [SQL] Use compact data structures in SQL joins
      4bf3de71
    • Matei Zaharia's avatar
      [SPARK-3084] [SQL] Collect broadcasted tables in parallel in joins · 6a13dca1
      Matei Zaharia authored
      BroadcastHashJoin has a broadcastFuture variable that tries to collect
      the broadcasted table in a separate thread, but this doesn't help
      because it's a lazy val that only gets initialized when you attempt to
      build the RDD. Thus queries that broadcast multiple tables would collect
      and broadcast them sequentially. I changed this to a val to let it start
      collecting right when the operator is created.
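
      Not Spark's code, but the lazy-val-versus-val pattern the description refers to:

      ```scala
      import scala.concurrent.ExecutionContext.Implicits.global
      import scala.concurrent.duration._
      import scala.concurrent.{Await, Future}

      // With `lazy val`, collection would only start when the RDD is first built;
      // a plain `val` lets several broadcast joins in one query collect their
      // tables in parallel as soon as the operators are constructed.
      class BroadcastingOperator[T](collectTable: () => T) {
        val broadcastFuture: Future[T] = Future { collectTable() }
        def awaitTable(): T = Await.result(broadcastFuture, 5.minutes)
      }
      ```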
      
      Author: Matei Zaharia <matei@databricks.com>
      
      Closes #1990 from mateiz/spark-3084 and squashes the following commits:
      
      f468766 [Matei Zaharia] [SPARK-3084] Collect broadcasted tables in parallel in joins
      6a13dca1
    • Patrick Wendell's avatar
      SPARK-3096: Include parquet hive serde by default in build · 7ae28d12
      Patrick Wendell authored
      A small change - we should just add this dependency. It doesn't have any recursive deps and it's needed for reading Hive Parquet tables.
      
      Author: Patrick Wendell <pwendell@gmail.com>
      
      Closes #2009 from pwendell/parquet and squashes the following commits:
      
      e411f9f [Patrick Wendell] SPARk-309: Include parquet hive serde by default in build
      7ae28d12
    • Chandan Kumar's avatar
      [SPARK-2862] histogram method fails on some choices of bucketCount · f45efbb8
      Chandan Kumar authored
      Author: Chandan Kumar <chandan.kumar@imaginea.com>
      
      Closes #1787 from nrchandan/spark-2862 and squashes the following commits:
      
      a76bbf6 [Chandan Kumar] [SPARK-2862] Fix for a broken test case and add new test cases
      4211eea [Chandan Kumar] [SPARK-2862] Add Scala bug id
      13854f1 [Chandan Kumar] [SPARK-2862] Use shorthand range notation to avoid Scala bug
      f45efbb8
    • CrazyJvm's avatar
      SPARK-3093 : masterLock in Worker is no longer need · c0cbbdea
      CrazyJvm authored
      There's no need to use masterLock in Worker now, since all communication happens within the Akka actor.
      
      Author: CrazyJvm <crazyjvm@gmail.com>
      
      Closes #2008 from CrazyJvm/no-need-master-lock and squashes the following commits:
      
      dd39e20 [CrazyJvm] fix format
      58e7fa5 [CrazyJvm] there's no need to use masterLock now since all communications are within Akka actor
      c0cbbdea
    • Liquan Pei's avatar
      [MLlib] Remove transform(dataset: RDD[String]) from Word2Vec public API · 9306b8c6
      Liquan Pei authored
      mengxr
      Remove `transform(dataset: RDD[String])` from the public API.
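
      The remaining public surface, roughly (a sketch; assumes an existing SparkContext `sc`, a whitespace-tokenized corpus at a placeholder path, and that the queried word is in the vocabulary):

      ```scala
      import org.apache.spark.mllib.feature.Word2Vec

      val corpus = sc.textFile("hdfs:///path/to/corpus.txt").map(_.split(" ").toSeq)
      val model = new Word2Vec().fit(corpus)

      val vector = model.transform("spark")           // word -> Vector (still public)
      val synonyms = model.findSynonyms("spark", 5)   // Array of (word, similarity)
      ```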
      
      Author: Liquan Pei <liquanpei@gmail.com>
      
      Closes #2010 from Ishiihara/Word2Vec-api and squashes the following commits:
      
      17b1031 [Liquan Pei] remove transform(dataset: RDD[String]) from public API
      9306b8c6
    • Liquan Pei's avatar
      [SPARK-2842][MLlib]Word2Vec documentation · eef779b8
      Liquan Pei authored
      mengxr
      Documentation for Word2Vec
      
      Author: Liquan Pei <liquanpei@gmail.com>
      
      Closes #2003 from Ishiihara/Word2Vec-doc and squashes the following commits:
      
      4ff11d4 [Liquan Pei] minor fix
      8d7458f [Liquan Pei] code reformat
      6df0dcb [Liquan Pei] add Word2Vec documentation
      eef779b8
    • Liquan Pei's avatar
      [SPARK-3097][MLlib] Word2Vec performance improvement · 3c8fa505
      Liquan Pei authored
      mengxr Please review the code. Adding weights in reduceByKey soon.
      
      Only output model entries for words that appeared in the partition before merging, and use reduceByKey to combine models. In general, this implementation is about 30s faster than the implementation using a big array.
      
      Author: Liquan Pei <liquanpei@gmail.com>
      
      Closes #1932 from Ishiihara/Word2Vec-improve2 and squashes the following commits:
      
      d5377a9 [Liquan Pei] use syn0Global and syn1Global to represent model
      cad2011 [Liquan Pei] bug fix for synModify array out of bound
      083aa66 [Liquan Pei] update synGlobal in place and reduce synOut size
      9075e1c [Liquan Pei] combine syn0Global and syn1Global to synGlobal
      aa2ab36 [Liquan Pei] use reduceByKey to combine models
      3c8fa505
    • Sandy Ryza's avatar
      SPARK-2900. aggregate inputBytes per stage · df652ea0
      Sandy Ryza authored
      Author: Sandy Ryza <sandy@cloudera.com>
      
      Closes #1826 from sryza/sandy-spark-2900 and squashes the following commits:
      
      43f9091 [Sandy Ryza] SPARK-2900
      df652ea0
  3. Aug 17, 2014
    • Xiangrui Meng's avatar
      [SPARK-3087][MLLIB] fix col indexing bug in chi-square and add a check for... · c77f4066
      Xiangrui Meng authored
      [SPARK-3087][MLLIB] fix col indexing bug in chi-square and add a check for number of distinct values
      
      There is a bug determining the column index. dorx
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #1997 from mengxr/chisq-index and squashes the following commits:
      
      8fc2ab2 [Xiangrui Meng] fix col indexing bug and add a check for number of distinct values
      c77f4066
    • Hari Shreedharan's avatar
      [HOTFIX][STREAMING] Allow the JVM/Netty to decide which port to bind to in Flume Polling Tests. · 95470a03
      Hari Shreedharan authored
      Author: Hari Shreedharan <harishreedharan@gmail.com>
      
      Closes #1820 from harishreedharan/use-free-ports and squashes the following commits:
      
      b939067 [Hari Shreedharan] Remove unused import.
      67856a8 [Hari Shreedharan] Remove findFreePort.
      0ea51d1 [Hari Shreedharan] Make some changes to getPort to use map on the serverOpt.
      1fb0283 [Hari Shreedharan] Merge branch 'master' of https://github.com/apache/spark into use-free-ports
      b351651 [Hari Shreedharan] Allow Netty to choose port, and query it to decide the port to bind to. Leaving findFreePort as is, if other tests want to use it at some point.
      e6c9620 [Hari Shreedharan] Making sure the second sink uses the correct port.
      11c340d [Hari Shreedharan] Add info about race condition to scaladoc.
      e89d135 [Hari Shreedharan] Adding Scaladoc.
      6013bb0 [Hari Shreedharan] [STREAMING] Find free ports to use before attempting to create Flume Sink in Flume Polling Suite
      95470a03
    • Chris Fregly's avatar
      [SPARK-1981] updated streaming-kinesis.md · 99243288
      Chris Fregly authored
      Fixed markup, separated out sections more clearly, and added more thorough explanations.
      
      Author: Chris Fregly <chris@fregly.com>
      
      Closes #1757 from cfregly/master and squashes the following commits:
      
      9b1c71a [Chris Fregly] better explained why spark checkpoints are disabled in the example (due to no stateful operations being used)
      0f37061 [Chris Fregly] SPARK-1981:  (Kinesis streaming support) updated streaming-kinesis.md
      862df67 [Chris Fregly] Merge remote-tracking branch 'upstream/master'
      8e1ae2e [Chris Fregly] Merge remote-tracking branch 'upstream/master'
      4774581 [Chris Fregly] updated docs, renamed retry to retryRandom to be more clear, removed retries around store() method
      0393795 [Chris Fregly] moved Kinesis examples out of examples/ and back into extras/kinesis-asl
      691a6be [Chris Fregly] fixed tests and formatting, fixed a bug with JavaKinesisWordCount during union of streams
      0e1c67b [Chris Fregly] Merge remote-tracking branch 'upstream/master'
      74e5c7c [Chris Fregly] updated per TD's feedback.  simplified examples, updated docs
      e33cbeb [Chris Fregly] Merge remote-tracking branch 'upstream/master'
      bf614e9 [Chris Fregly] per matei's feedback:  moved the kinesis examples into the examples/ dir
      d17ca6d [Chris Fregly] per TD's feedback:  updated docs, simplified the KinesisUtils api
      912640c [Chris Fregly] changed the foundKinesis class to be a publically-avail class
      db3eefd [Chris Fregly] Merge remote-tracking branch 'upstream/master'
      21de67f [Chris Fregly] Merge remote-tracking branch 'upstream/master'
      6c39561 [Chris Fregly] parameterized the versions of the aws java sdk and kinesis client
      338997e [Chris Fregly] improve build docs for kinesis
      828f8ae [Chris Fregly] more cleanup
      e7c8978 [Chris Fregly] Merge remote-tracking branch 'upstream/master'
      cd68c0d [Chris Fregly] fixed typos and backward compatibility
      d18e680 [Chris Fregly] Merge remote-tracking branch 'upstream/master'
      b3b0ff1 [Chris Fregly] [SPARK-1981] Add AWS Kinesis streaming support
      99243288
    • Michael Armbrust's avatar
      [SQL] Improve debug logging and toStrings. · bfa09b01
      Michael Armbrust authored
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #2004 from marmbrus/codgenDebugging and squashes the following commits:
      
      b7a7e41 [Michael Armbrust] Improve debug logging and toStrings.
      bfa09b01
    • Michael Armbrust's avatar
      Revert "[SPARK-2970] [SQL] spark-sql script ends with IOException when EventLogging is enabled" · 5ecb08ea
      Michael Armbrust authored
      Revert #1891 due to issues with hadoop 1 compatibility.
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #2007 from marmbrus/revert1891 and squashes the following commits:
      
      68706c0 [Michael Armbrust] Revert "[SPARK-2970] [SQL] spark-sql script ends with IOException when EventLogging is enabled"
      5ecb08ea
    • Patrick Wendell's avatar
      SPARK-2881. Upgrade snappy-java to 1.1.1.3. · 318e28b5
      Patrick Wendell authored
      This upgrades snappy-java which fixes the issue reported in SPARK-2881.
      This is the master branch equivalent to #1994 which provides a different
      work-around for the 1.1 branch.
      
      Author: Patrick Wendell <pwendell@gmail.com>
      
      Closes #1995 from pwendell/snappy-1.1 and squashes the following commits:
      
      0c7c4c2 [Patrick Wendell] SPARK-2881. Upgrade snappy-java to 1.1.1.3.
      318e28b5
    • Joseph K. Bradley's avatar
      [SPARK-3042] [mllib] DecisionTree Filter top-down instead of bottom-up · 73ab7f14
      Joseph K. Bradley authored
      DecisionTree needs to match each example to a node at each iteration.  It currently does this with a set of filters very inefficiently: For each example, it examines each node at the current level and traces up to the root to see if that example should be handled by that node.
      
      Fix: Filter top-down using the partly built tree itself.
      
      Major changes:
      * Eliminated Filter class, findBinsForLevel() method.
      * Set up node parent links in main loop over levels in train().
      * Added predictNodeIndex() for filtering top-down (a rough sketch follows below).
      * Added DTMetadata class
      
      Other changes:
      * Pre-compute set of unorderedFeatures.
      
      Notes for following expected PR based on [https://issues.apache.org/jira/browse/SPARK-3043]:
      * The unorderedFeatures set will next be stored in a metadata structure to simplify function calls (to store other items such as the data in strategy).
      
      I've done initial tests indicating that this speeds things up, but am only now running large-scale ones.
      
      CC: mengxr manishamde chouqin  Any comments are welcome---thanks!
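
      Not the real DecisionTree code, but a rough sketch of the top-down routing idea behind predictNodeIndex():

      ```scala
      // A toy tree and a top-down routing function, replacing the per-example
      // ancestor-filter checks described above.
      case class SketchNode(
          id: Int,
          isLeaf: Boolean,
          splitFeature: Int = -1,
          threshold: Double = 0.0,
          left: Option[SketchNode] = None,
          right: Option[SketchNode] = None)

      def predictNodeIndex(node: SketchNode, features: Array[Double]): Int =
        if (node.isLeaf) node.id
        else if (features(node.splitFeature) <= node.threshold)
          node.left.map(predictNodeIndex(_, features)).getOrElse(node.id)
        else
          node.right.map(predictNodeIndex(_, features)).getOrElse(node.id)
      ```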
      
      Author: Joseph K. Bradley <joseph.kurata.bradley@gmail.com>
      
      Closes #1975 from jkbradley/dt-opt2 and squashes the following commits:
      
      a0ed0da [Joseph K. Bradley] Renamed DTMetadata to DecisionTreeMetadata.  Small doc updates.
      3726d20 [Joseph K. Bradley] Small code improvements based on code review.
      ac0b9f8 [Joseph K. Bradley] Small updates based on code review. Main change: Now using << instead of math.pow.
      db0d773 [Joseph K. Bradley] scala style fix
      6a38f48 [Joseph K. Bradley] Added DTMetadata class for cleaner code
      931a3a7 [Joseph K. Bradley] Merge remote-tracking branch 'upstream/master' into dt-opt2
      797f68a [Joseph K. Bradley] Fixed DecisionTreeSuite bug for training second level.  Needed to update treePointToNodeIndex with groupShift.
      f40381c [Joseph K. Bradley] Merge branch 'dt-opt1' into dt-opt2
      5f2dec2 [Joseph K. Bradley] Fixed scalastyle issue in TreePoint
      6b5651e [Joseph K. Bradley] Updates based on code review.  1 major change: persisting to memory + disk, not just memory.
      2d2aaaf [Joseph K. Bradley] Merge remote-tracking branch 'upstream/master' into dt-opt1
      26d10dd [Joseph K. Bradley] Removed tree/model/Filter.scala since no longer used.  Removed debugging println calls in DecisionTree.scala.
      356daba [Joseph K. Bradley] Merge branch 'dt-opt1' into dt-opt2
      430d782 [Joseph K. Bradley] Added more debug info on binning error.  Added some docs.
      d036089 [Joseph K. Bradley] Print timing info to logDebug.
      e66f1b1 [Joseph K. Bradley] TreePoint * Updated doc * Made some methods private
      8464a6e [Joseph K. Bradley] Moved TimeTracker to tree/impl/ in its own file, and cleaned it up.  Removed debugging println calls from DecisionTree.  Made TreePoint extend Serialiable
      a87e08f [Joseph K. Bradley] Merge remote-tracking branch 'upstream/master' into dt-opt1
      c1565a5 [Joseph K. Bradley] Small DecisionTree updates: * Simplification: Updated calculateGainForSplit to take aggregates for a single (feature, split) pair. * Internal doc: findAggForOrderedFeatureClassification
      b914f3b [Joseph K. Bradley] DecisionTree optimization: eliminated filters + small changes
      b2ed1f3 [Joseph K. Bradley] Merge remote-tracking branch 'upstream/master' into dt-opt
      0f676e2 [Joseph K. Bradley] Optimizations + Bug fix for DecisionTree
      3211f02 [Joseph K. Bradley] Optimizing DecisionTree * Added TreePoint representation to avoid calling findBin multiple times. * (not working yet, but debugging)
      f61e9d2 [Joseph K. Bradley] Merge remote-tracking branch 'upstream/master' into dt-timing
      bcf874a [Joseph K. Bradley] Merge remote-tracking branch 'upstream/master' into dt-timing
      511ec85 [Joseph K. Bradley] Merge remote-tracking branch 'upstream/master' into dt-timing
      a95bc22 [Joseph K. Bradley] timing for DecisionTree internals
      73ab7f14
  4. Aug 16, 2014
    • Xiangrui Meng's avatar
      [SPARK-3077][MLLIB] fix some chisq-test · fbad7228
      Xiangrui Meng authored
      - promote nullHypothesis field in ChiSqTestResult to TestResult, since every test should have a null hypothesis (see the usage sketch after this list)
      - correct null hypothesis statement for independence test
      - p-value: 0.01 -> 0.1
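
      A small usage sketch of the chi-squared test API these fixes touch (toy data):

      ```scala
      import org.apache.spark.mllib.linalg.{Matrices, Vectors}
      import org.apache.spark.mllib.stat.Statistics

      // Goodness-of-fit test against a uniform expected distribution.
      val observed = Vectors.dense(4.0, 6.0, 5.0)
      val gof = Statistics.chiSqTest(observed)

      // Independence test on a 2x2 contingency matrix (values are column-major).
      val table = Matrices.dense(2, 2, Array(10.0, 20.0, 30.0, 40.0))
      val indep = Statistics.chiSqTest(table)
      println(indep.nullHypothesis)
      println(indep.pValue)
      ```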
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #1982 from mengxr/fix-chisq and squashes the following commits:
      
      5f0de02 [Xiangrui Meng] make ChiSqTestResult constructor package private
      bc74ea1 [Xiangrui Meng] update chisq-test
      fbad7228
    • GuoQiang Li's avatar
      In the stop method of ConnectionManager to cancel the ackTimeoutMonitor · bc95fe08
      GuoQiang Li authored
      cc JoshRosen sarutak
      
      Author: GuoQiang Li <witgo@qq.com>
      
      Closes #1989 from witgo/cancel_ackTimeoutMonitor and squashes the following commits:
      
      4a700fa [GuoQiang Li] In the stop method of ConnectionManager to cancel the ackTimeoutMonitor
      bc95fe08
    • Davies Liu's avatar
      [SPARK-1065] [PySpark] improve supporting for large broadcast · 2fc8aca0
      Davies Liu authored
      Passing large objects through Py4J is very slow (and uses a lot of memory), so broadcast objects are passed via files instead (similar to parallelize()).
      
      Add an option to keep the object in the driver (False by default) to save memory in the driver.
      
      Author: Davies Liu <davies.liu@gmail.com>
      
      Closes #1912 from davies/broadcast and squashes the following commits:
      
      e06df4a [Davies Liu] load broadcast from disk in driver automatically
      db3f232 [Davies Liu] fix serialization of accumulator
      631a827 [Davies Liu] Merge branch 'master' into broadcast
      c7baa8c [Davies Liu] compress serrialized broadcast and command
      9a7161f [Davies Liu] fix doc tests
      e93cf4b [Davies Liu] address comments: add test
      6226189 [Davies Liu] improve large broadcast
      2fc8aca0
    • iAmGhost's avatar
      [SPARK-3035] Wrong example with SparkContext.addFile · 379e7585
      iAmGhost authored
      https://issues.apache.org/jira/browse/SPARK-3035
      
      Fix for the incorrect documentation example.
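
      The fix is to the PySpark docstring; the equivalent Scala pattern looks like this (assumes an existing SparkContext `sc`; the path is a placeholder):

      ```scala
      import org.apache.spark.SparkFiles

      // Note that SparkFiles.get takes the bare file name, not the original path.
      sc.addFile("hdfs:///path/to/lookup.txt")
      val firstLines = sc.parallelize(1 to 4).map { _ =>
        scala.io.Source.fromFile(SparkFiles.get("lookup.txt")).getLines().next()
      }.collect()
      ```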
      
      Author: iAmGhost <kdh7807@gmail.com>
      
      Closes #1942 from iAmGhost/master and squashes the following commits:
      
      487528a [iAmGhost] [SPARK-3035] Wrong example with SparkContext.addFile fix for wrong document.
      379e7585
    • Xiangrui Meng's avatar
      [SPARK-3081][MLLIB] rename RandomRDDGenerators to RandomRDDs · ac6411c6
      Xiangrui Meng authored
      `RandomRDDGenerators` suggests a factory for `RandomRDDGenerator`; however, its methods return RDDs, not RDD generators. A more accurate (and shorter) name is `RandomRDDs`.
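
      Usage after the rename, as a sketch (assumes an existing SparkContext `sc`):

      ```scala
      import org.apache.spark.mllib.random.RandomRDDs

      val scalars = RandomRDDs.normalRDD(sc, 100000L, 4)          // RDD[Double] ~ N(0, 1)
      val vectors = RandomRDDs.normalVectorRDD(sc, 10000L, 10, 4) // RDD[Vector] with 10 columns
      ```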
      
      dorx brkyvz
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #1979 from mengxr/randomrdds and squashes the following commits:
      
      b161a2d [Xiangrui Meng] rename RandomRDDGenerators to RandomRDDs
      ac6411c6
    • Xiangrui Meng's avatar
      [SPARK-3048][MLLIB] add LabeledPoint.parse and remove loadStreamingLabeledPoints · 7e70708a
      Xiangrui Meng authored
      Move `parse()` from `LabeledPointParser` to `LabeledPoint` and make it public. This breaks binary compatibility only when a user uses synthesized methods like `tupled` and `curried`, which is rare.
      
      `LabeledPoint.parse` is more consistent with `Vectors.parse`, which is why `LabeledPointParser` is not preferred.
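
      A quick sketch of the now-public parser alongside `Vectors.parse`:

      ```scala
      import org.apache.spark.mllib.linalg.Vectors
      import org.apache.spark.mllib.regression.LabeledPoint

      val point = LabeledPoint.parse("(1.0,[1.0,0.0,3.0])") // label 1.0, dense features
      val vector = Vectors.parse("[1.0,0.0,3.0]")
      ```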
      
      freeman-lab tdas
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #1952 from mengxr/labelparser and squashes the following commits:
      
      c818fb2 [Xiangrui Meng] merge master
      ce20e6f [Xiangrui Meng] update mima excludes
      b386b8d [Xiangrui Meng] fix tests
      2436b3d [Xiangrui Meng] add parse() to LabeledPoint
      7e70708a