  1. Jun 04, 2014
  2. Jun 03, 2014
    • Enable repartitioning of graph over different number of partitions · 5284ca78
      Joseph E. Gonzalez authored
      It is currently very difficult to repartition a graph over a different number of partitions.  This PR adds an additional `partitionBy` function that takes the number of partitions.
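      A minimal usage sketch of the new overload (`graph` and the partition count are illustrative, assuming a spark-shell session):
      ```
      import org.apache.spark.graphx._
      // Spread the edges of an existing graph over 128 partitions.
      val repartitioned = graph.partitionBy(PartitionStrategy.EdgePartition2D, 128)
      ```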
      
      Author: Joseph E. Gonzalez <joseph.e.gonzalez@gmail.com>
      
      Closes #719 from jegonzal/graph_partitioning_options and squashes the following commits:
      
      730b405 [Joseph E. Gonzalez] adding an additional number of partitions option to partitionBy
    • use env default python in merge_spark_pr.py · e8d93ee5
      Xiangrui Meng authored
      A minor change to use env default python instead of fixed `/usr/bin/python`.
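      Concretely, the script's shebang line becomes:
      ```
      #!/usr/bin/env python
      ```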
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #965 from mengxr/merge-pr-python and squashes the following commits:
      
      1ae0013 [Xiangrui Meng] use env default python in merge_spark_pr.py
    • SPARK-1941: Update streamlib to 2.7.0 and use HyperLogLogPlus instead of HyperLogLog. · 1faef149
      Reynold Xin authored
      I also corrected some errors in the previous approximate HLL count API; in particular, relativeSD was not really a measure of error (yet we used it to test error bounds in the tests).
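      The user-facing entry point is `countApproxDistinct`, whose argument is the desired relative accuracy (a sketch, assuming a spark-shell session):
      ```
      // Backed by streamlib's HyperLogLogPlus under the hood.
      val approx = sc.parallelize(1 to 1000000).countApproxDistinct(relativeSD = 0.01)
      ```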
      
      Author: Reynold Xin <rxin@apache.org>
      
      Closes #897 from rxin/hll and squashes the following commits:
      
      4d83f41 [Reynold Xin] New error bound and non-randomness.
      f154ea0 [Reynold Xin] Added a comment on the value bound for testing.
      e367527 [Reynold Xin] One more round of code review.
      41e649a [Reynold Xin] Update final mima list.
      9e320c8 [Reynold Xin] Incorporate code review feedback.
      e110d70 [Reynold Xin] Merge branch 'master' into hll
      354deb8 [Reynold Xin] Added comment on the Mima exclude rules.
      acaa524 [Reynold Xin] Added the right exclude rules in MimaExcludes.
      6555bfe [Reynold Xin] Added a default method and re-arranged MimaExcludes.
      1db1522 [Reynold Xin] Excluded util.SerializableHyperLogLog from MIMA check.
      9221b27 [Reynold Xin] Merge branch 'master' into hll
      88cfe77 [Reynold Xin] Updated documentation and restored the old incorrect API to maintain API compatibility.
      1294be6 [Reynold Xin] Updated HLL+.
      e7786cb [Reynold Xin] Merge branch 'master' into hll
      c0ef0c2 [Reynold Xin] SPARK-1941: Update streamlib to 2.7.0 and use HyperLogLogPlus instead of HyperLogLog.
    • [SPARK-1161] Add saveAsPickleFile and SparkContext.pickleFile in Python · 21e40ed8
      Kan Zhang authored
      Author: Kan Zhang <kzhang@apache.org>
      
      Closes #755 from kanzhang/SPARK-1161 and squashes the following commits:
      
      24ed8a2 [Kan Zhang] [SPARK-1161] Fixing doc tests
      44e0615 [Kan Zhang] [SPARK-1161] Adding an optional batchSize with default value 10
      d929429 [Kan Zhang] [SPARK-1161] Add saveAsObjectFile and SparkContext.objectFile in Python
    • Fixed a typo · f4dd665c
      DB Tsai authored
      in RowMatrix.scala
      
      Author: DB Tsai <dbtsai@dbtsai.com>
      
      Closes #959 from dbtsai/dbtsai-typo and squashes the following commits:
      
      fab0e0e [DB Tsai] Fixed typo
    • [SPARK-1991] Support custom storage levels for vertices and edges · b1feb602
      Ankur Dave authored
      This PR adds support for specifying custom storage levels for the vertices and edges of a graph. This enables GraphX to handle graphs larger than memory size by specifying MEMORY_AND_DISK and then repartitioning the graph to use many small partitions, each of which does fit in memory. Spark will then automatically load partitions from disk as needed.
      
      The user specifies the desired vertex and edge storage levels when building the graph by passing them to the graph constructor. These are then stored in the `targetStorageLevel` attribute of the VertexRDD and EdgeRDD respectively. Whenever GraphX needs to cache a VertexRDD or EdgeRDD (because it plans to use it more than once, for example), it uses the specified target storage level. Also, when the user calls `Graph#cache()`, the vertices and edges are persisted using their target storage levels.
      
      In order to facilitate propagating the target storage levels across VertexRDD and EdgeRDD operations, we remove raw calls to the constructors and instead introduce the `withPartitionsRDD` and `withTargetStorageLevel` methods.
      
      I tested this change by running PageRank and triangle count on a severely memory-constrained cluster (1 executor with 300 MB of memory, and a 1 GB graph). Before this PR, these algorithms used to fail with OutOfMemoryErrors. With this PR, and using the DISK_ONLY storage level, they succeed.
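      A sketch of building a disk-backed graph (path and partition count hypothetical, assuming `GraphLoader.edgeListFile` exposes the new storage-level parameters):
      ```
      import org.apache.spark.graphx.GraphLoader
      import org.apache.spark.storage.StorageLevel
      // Vertex and edge partitions spill to disk instead of failing with OOM.
      val graph = GraphLoader.edgeListFile(sc, "data/edges.txt",
        numEdgePartitions = 64,
        edgeStorageLevel = StorageLevel.DISK_ONLY,
        vertexStorageLevel = StorageLevel.DISK_ONLY)
      ```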
      
      Author: Ankur Dave <ankurdave@gmail.com>
      
      Closes #946 from ankurdave/SPARK-1991 and squashes the following commits:
      
      ce17d95 [Ankur Dave] Move pickStorageLevel to StorageLevel.fromString
      ccaf06f [Ankur Dave] Shadow members in withXYZ() methods rather than using underscores
      c34abc0 [Ankur Dave] Exclude all of GraphX from compatibility checks vs. 1.0.0
      c5ca068 [Ankur Dave] Revert "Exclude all of GraphX from binary compatibility checks"
      34bcefb [Ankur Dave] Exclude all of GraphX from binary compatibility checks
      6fdd137 [Ankur Dave] [SPARK-1991] Support custom storage levels for vertices and edges
    • Synthetic GraphX Benchmark · 894ecde0
      Joseph E. Gonzalez authored
      This PR accomplishes two things:
      
      1. It introduces a Synthetic Benchmark application that generates an arbitrarily large log-normal graph and executes either PageRank or connected components on it. This can be used to profile the GraphX system on arbitrary clusters without access to large graph datasets.
      
      2. This PR improves the implementation of the log-normal graph generator.
      
      Author: Joseph E. Gonzalez <joseph.e.gonzalez@gmail.com>
      Author: Ankur Dave <ankurdave@gmail.com>
      
      Closes #720 from jegonzal/graphx_synth_benchmark and squashes the following commits:
      
      e40812a [Ankur Dave] Exclude all of GraphX from compatibility checks vs. 1.0.0
      bccccad [Ankur Dave] Fix long lines
      374678a [Ankur Dave] Bugfix and style changes
      1bdf39a [Joseph E. Gonzalez] updating options
      d943972 [Joseph E. Gonzalez] moving the benchmark application into the examples folder.
      f4f839a [Joseph E. Gonzalez] Creating a synthetic benchmark script.
    • fix java.lang.ClassCastException · aa41a522
      baishuo(白硕) authored
      An exception occurs when running bin/run-example org.apache.spark.examples.sql.RDDRelation. The exception's detail is:
      Exception in thread "main" java.lang.ClassCastException: java.lang.Long cannot be cast to java.lang.Integer
      	at scala.runtime.BoxesRunTime.unboxToInt(BoxesRunTime.java:106)
      	at org.apache.spark.sql.catalyst.expressions.GenericRow.getInt(Row.scala:145)
      	at org.apache.spark.examples.sql.RDDRelation$.main(RDDRelation.scala:49)
      	at org.apache.spark.examples.sql.RDDRelation.main(RDDRelation.scala)
      Changing sql("SELECT COUNT(*) FROM records").collect().head.getInt(0) to sql("SELECT COUNT(*) FROM records").collect().head.getLong(0) makes the exception no longer occur.
      
      Author: baishuo(白硕) <vc_java@hotmail.com>
      
      Closes #949 from baishuo/master and squashes the following commits:
      
      f4b319f [baishuo(白硕)] fix java.lang.ClassCastException
    • [SPARK-1468] Modify the partition function used by partitionBy. · 8edc9d03
      Erik Selin authored
      Make partitionBy use a tweaked version of hash as its default partition function, since the Python hash function does not consistently assign the same value to None across Python processes.
      
      Associated JIRA at https://issues.apache.org/jira/browse/SPARK-1468
      
      Author: Erik Selin <erik.selin@jadedpixel.com>
      
      Closes #371 from tyro89/consistent_hashing and squashes the following commits:
      
      201c301 [Erik Selin] Make partitionBy use a tweaked version of hash as its default partition function since the python hash function does not consistently assign the same value to None across python processes.
    • Add support for Pivotal HD in the Maven build: SPARK-1992 · b1f28535
      tzolov authored
      Allow Spark to build against particular Pivotal HD distributions. For example, to build Spark against Pivotal HD 2.0.1, one can run:
      ```
      mvn -Pyarn -Phadoop-2.2 -Dhadoop.version=2.2.0-gphd-3.0.1.0 -DskipTests clean package
      ```
      
      Author: tzolov <christian.tzolov@gmail.com>
      
      Closes #942 from tzolov/master and squashes the following commits:
      
      bc3e05a [tzolov] Add support for Pivotal HD in the Maven build and SBT build: [SPARK-1992]
    • [SPARK-1912] fix compress memory issue during reduce · 45e9bc85
      Wenchen Fan(Cloud) authored
      When we need to read a compressed block, we first create a compression stream instance (LZF or Snappy) and use it to wrap that block.
      Say a reducer task needs to read 1000 local shuffle blocks: it will first prepare to read all 1000 blocks, which means creating 1000 compression stream instances to wrap them. But initializing a compression instance allocates some memory, and having many compression instances alive at the same time is a problem.
      In fact the reducer reads the shuffle blocks one by one, so we can initialize the compression instances lazily.
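      A minimal sketch of the lazy-wrapping idea (simplified; not the actual BlockManager code):
      ```
      // Defer the expensive stream construction until the iterator is first consumed.
      def lazyIterator[T](makeIterator: () => Iterator[T]): Iterator[T] = new Iterator[T] {
        private lazy val underlying = makeIterator() // codec allocated here, on first use
        def hasNext: Boolean = underlying.hasNext
        def next(): T = underlying.next()
      }
      ```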
      
      Author: Wenchen Fan(Cloud) <cloud0fan@gmail.com>
      
      Closes #860 from cloud-fan/fix-compress and squashes the following commits:
      
      0924a6b [Wenchen Fan(Cloud)] rename 'doWork' into 'getIterator'
      07f32c22 [Wenchen Fan(Cloud)] move the LazyProxyIterator to dataDeserialize
      d80c426 [Wenchen Fan(Cloud)] remove empty lines in short class
      2c8adb2 [Wenchen Fan(Cloud)] add inline comment
      8ebff77 [Wenchen Fan(Cloud)] fix compress memory issue during reduce
    • SPARK-2001 : Remove docs/spark-debugger.md from master · 6c044ed1
      Henry Saputra authored
      Per discussion on the dev list:
      "It seems spark-debugger.md is no longer accurate (see
      http://spark.apache.org/docs/latest/spark-debugger.html); Spark has
      evolved since the doc was originally written, making it obsolete.
      There is already work pending on a new replay debugger (I could not
      find the PR links for it).
      With version control we can always reinstate the old doc if needed,
      but as of today it no longer reflects the current state of Spark's
      RDDs.
      "
      
      Author: Henry Saputra <henry.saputra@gmail.com>
      
      Closes #953 from hsaputra/SPARK-2001-hsaputra and squashes the following commits:
      
      dc324aa [Henry Saputra] SPARK-2001 : Remove docs/spark-debugger.md from master since it is obsolete
    • [SPARK-1942] Stop clearing spark.driver.port in unit tests · 7782a304
      Syed Hashmi authored
      Stop resetting spark.driver.port in unit tests (Scala, Java, and Python).
      
      Author: Syed Hashmi <shashmi@cloudera.com>
      Author: CodingCat <zhunansjtu@gmail.com>
      
      Closes #943 from syedhashmi/master and squashes the following commits:
      
      885f210 [Syed Hashmi] Removing unnecessary file (created by mergetool)
      b8bd4b5 [Syed Hashmi] Merge remote-tracking branch 'upstream/master'
      b895e59 [Syed Hashmi] Revert "[SPARK-1784] Add a new partitioner"
      57b6587 [Syed Hashmi] Revert "[SPARK-1784] Add a balanced partitioner"
      1574769 [Syed Hashmi] [SPARK-1942] Stop clearing spark.driver.port in unit tests
      4354836 [Syed Hashmi] Revert "SPARK-1686: keep schedule() calling in the main thread"
      fd36542 [Syed Hashmi] [SPARK-1784] Add a balanced partitioner
      6668015 [CodingCat] SPARK-1686: keep schedule() calling in the main thread
      4ca94cc [Syed Hashmi] [SPARK-1784] Add a new partitioner
  3. Jun 02, 2014
    • Avoid dynamic dispatching when unwrapping Hive data. · 862283e9
      Cheng Lian authored
      This is a follow up of PR #758.
      
      The `unwrapHiveData` function is now composed statically, according to the field object inspector, before actual rows are scanned, avoiding the cost of dynamic dispatch.
      
      According to the same micro benchmark used in PR #758, this simple change brings a slight performance boost: 2.5% for the CSV table and 1% for the RCFile table.
      
      ```
      Optimized version:
      
      CSV: 6870 ms, RCFile: 5687 ms
      CSV: 6832 ms, RCFile: 5800 ms
      CSV: 6822 ms, RCFile: 5679 ms
      CSV: 6704 ms, RCFile: 5758 ms
      CSV: 6819 ms, RCFile: 5725 ms
      
      Original version:
      
      CSV: 7042 ms, RCFile: 5667 ms
      CSV: 6883 ms, RCFile: 5703 ms
      CSV: 7115 ms, RCFile: 5665 ms
      CSV: 7020 ms, RCFile: 5981 ms
      CSV: 6871 ms, RCFile: 5906 ms
      ```
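      The general pattern, as a simplified sketch (hypothetical inspector types, not the actual Hive interfaces):
      ```
      sealed trait Inspector
      case object StringInspector extends Inspector
      case object IntInspector extends Inspector
      
      // Composed once per field, before any rows are scanned...
      def unwrapper(i: Inspector): Any => Any = i match {
        case StringInspector => v => v.toString
        case IntInspector    => v => v.asInstanceOf[Int]
      }
      
      // ...then applied per row, with no pattern match on the hot path.
      val unwrapInt = unwrapper(IntInspector)
      Seq[Any](1, 2, 3).map(unwrapInt)
      ```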
      
      Author: Cheng Lian <lian.cs.zju@gmail.com>
      
      Closes #935 from liancheng/staticUnwrapping and squashes the following commits:
      
      c49c70c [Cheng Lian] Avoid dynamic dispatching when unwrapping Hive data.
    • [SPARK-1995][SQL] system function upper and lower can be supported · ec8be274
      egraldlo authored
      I am not sure whether it is time to implement system functions for string operations in Spark SQL yet.
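      For context, the new functions are used like this (hypothetical table and column names, assuming a spark-shell session with a SQL context set up):
      ```
      sql("SELECT UPPER(name), LOWER(name) FROM people").collect()
      ```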
      
      Author: egraldlo <egraldlo@gmail.com>
      
      Closes #936 from egraldlo/stringoperator and squashes the following commits:
      
      3c6c60a [egraldlo] Add UPPER, LOWER, MAX and MIN into hive parser
      ea76d0a [egraldlo] modify the formatting issues
      b49f25e [egraldlo] modify the formatting issues
      1f0bbb5 [egraldlo] system function upper and lower supported
      13d3267 [egraldlo] system function upper and lower supported
    • [SPARK-1958] Calling .collect() on a SchemaRDD should call executeCollect() on... · d000ca98
      Cheng Lian authored
      [SPARK-1958] Calling .collect() on a SchemaRDD should call executeCollect() on the underlying query plan.
      
      In cases like `Limit` and `TakeOrdered`, `executeCollect()` makes optimizations that `execute().collect()` will not.
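      For example, a query like the following can now short-circuit through `executeCollect()` instead of materializing every partition (hypothetical table name):
      ```
      sql("SELECT * FROM logs LIMIT 10").collect()
      ```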
      
      Author: Cheng Lian <lian.cs.zju@gmail.com>
      
      Closes #939 from liancheng/spark-1958 and squashes the following commits:
      
      bdc4a14 [Cheng Lian] Copy rows to present immutable data to users
      8250976 [Cheng Lian] Added return type explicitly for public API
      192a25c [Cheng Lian] [SPARK-1958] Calling .collect() on a SchemaRDD should call executeCollect() on the underlying query plan.
    • [SPARK-1553] Alternating nonnegative least-squares · 9a5d482e
      Tor Myklebust authored
      This pull request includes a nonnegative least-squares solver (NNLS) tailored to the kinds of small-scale problems that come up when training matrix factorisation models by alternating nonnegative least-squares (ANNLS).
      
      The method used for the NNLS subproblems is based on the classical method of projected gradients.  There is a modification where, if the set of active constraints has not changed since the last iteration, a conjugate gradient step is considered and possibly rejected in favour of the gradient; this improves convergence once the optimal face has been located.
      
      The NNLS solver is in `org.apache.spark.mllib.optimization.NNLSbyPCG`.
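      A toy projected-gradient sketch of the underlying idea (fixed step size, dense arrays; the PR's solver adds the conjugate gradient refinement described above): minimize 0.5 * ||Ax - b||^2 subject to x >= 0.
      ```
      def nnls(a: Array[Array[Double]], b: Array[Double],
               steps: Int = 500, lr: Double = 1e-3): Array[Double] = {
        val n = a(0).length
        val x = Array.fill(n)(0.0)
        for (_ <- 1 to steps) {
          // residual r = A x - b
          val r = a.map(row => row.indices.map(j => row(j) * x(j)).sum)
                   .zip(b).map { case (ax, bi) => ax - bi }
          // gradient g = A^T r
          val g = (0 until n).map(j => a.indices.map(i => a(i)(j) * r(i)).sum)
          // gradient step followed by projection onto the nonnegative orthant
          for (j <- 0 until n) x(j) = math.max(0.0, x(j) - lr * g(j))
        }
        x
      }
      ```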
      
      Author: Tor Myklebust <tmyklebu@gmail.com>
      
      Closes #460 from tmyklebu/annls and squashes the following commits:
      
      79bc4b5 [Tor Myklebust] Merge branch 'master' of https://github.com/apache/spark into annls
      199b0bc [Tor Myklebust] Make the ctor private again and use the builder pattern.
      7fbabf1 [Tor Myklebust] Cleanup matrix math in NNLSSuite.
      65ef7f2 [Tor Myklebust] Make ALS's ctor public and remove a couple of "convenience" wrappers.
      2d4f3cb [Tor Myklebust] Cleanup.
      0cb4481 [Tor Myklebust] Drop the iteration limit from 40k to max(400,20n).
      e2a01d1 [Tor Myklebust] Create a workspace object for NNLS to cut down on memory allocations.
      b285106 [Tor Myklebust] Clean up NNLS test cases.
      9c820b6 [Tor Myklebust] Tweak variable names.
      8a1a436 [Tor Myklebust] Describe the problem and add a reference to Polyak's paper.
      5345402 [Tor Myklebust] Style fixes that got eaten.
      ac673bd [Tor Myklebust] More safeguards against numerical ridiculousness.
      c288b6a [Tor Myklebust] Finish moving the NNLS solver.
      9a82fa6 [Tor Myklebust] Fix scalastyle moanings.
      33bf4f2 [Tor Myklebust] Fix missing space.
      89ea0a8 [Tor Myklebust] Hack ALSSuite to support NNLS testing.
      f5dbf4d [Tor Myklebust] Teach ALS how to use the NNLS solver.
      6cb563c [Tor Myklebust] Tests for the nonnegative least squares solver.
      a68ac10 [Tor Myklebust] A nonnegative least-squares solver.
    • Add landmark-based Shortest Path algorithm to graphx.lib · 9535f404
      Ankur Dave authored
      This is a modified version of apache/spark#10.
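      Usage sketch (`graph` is any existing Graph, and 1L, 4L are illustrative landmark vertex IDs):
      ```
      import org.apache.spark.graphx.lib.ShortestPaths
      // Each vertex ends up with a map from landmark ID to hop distance.
      val result = ShortestPaths.run(graph, Seq(1L, 4L))
      result.vertices.collect()
      ```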
      
      Author: Ankur Dave <ankurdave@gmail.com>
      Author: Andres Perez <andres@tresata.com>
      
      Closes #933 from ankurdave/shortestpaths and squashes the following commits:
      
      03a103c [Ankur Dave] Style fixes
      7a1ff48 [Ankur Dave] Improve ShortestPaths documentation
      d75c8fc [Ankur Dave] Remove unnecessary VD type param, and pass through ED
      d983fb4 [Ankur Dave] Fix style errors
      60ed8e6 [Andres Perez] Add Shortest-path computations to graphx.lib with unit tests.
  4. Jun 01, 2014
    • Better explanation for how to use MIMA excludes. · d17d2214
      Patrick Wendell authored
      This patch does a few things:
      1. We have a file MimaExcludes.scala exclusively for excludes (see the sketch after this list).
      2. The test runner tells users about that file if a test fails.
      3. I've added back the excludes used from 0.9->1.0. We should keep
         these in the project as an official audit trail of times where
         we decided to make exceptions.
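      Entries in MimaExcludes.scala follow roughly this shape (a hedged sketch of the pattern, with a hypothetical method name, not an exact excerpt):
      ```
      import com.typesafe.tools.mima.core._
      
      // Each exception is recorded explicitly, leaving an audit trail.
      val excludes = Seq(
        ProblemFilters.exclude[MissingMethodProblem](
          "org.apache.spark.SomeClass.someRemovedMethod")
      )
      ```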
      
      Author: Patrick Wendell <pwendell@gmail.com>
      
      Closes #937 from pwendell/mima and squashes the following commits:
      
      7ee0db2 [Patrick Wendell] Better explanation for how to use MIMA excludes.
    • Made spark_ec2.py PEP8 compliant. · eea3aab4
      Reynold Xin authored
      The change set is actually pretty small -- mostly whitespace changes. Admittedly this is a scary change due to the lack of tests to cover the ec2 scripts, and also because indentation actually impacts control flow in Python ...
      
      Look at changes without whitespace diff here: https://github.com/apache/spark/pull/891/files?w=1
      
      Author: Reynold Xin <rxin@apache.org>
      
      Closes #891 from rxin/spark-ec2-pep8 and squashes the following commits:
      
      ac1bf11 [Reynold Xin] Made spark_ec2.py PEP8 compliant.
  5. May 31, 2014
    • updated java code blocks in spark SQL guide such that ctx will refer to ... · 366c0c4c
      Yadid Ayzenberg authored
      ...a JavaSparkContext and sqlCtx will refer to a JavaSQLContext
      
      Author: Yadid Ayzenberg <yadid@media.mit.edu>
      
      Closes #932 from yadid/master and squashes the following commits:
      
      f92fb3a [Yadid Ayzenberg] updated java code blocks in spark SQL guide such that ctx will refer to a JavaSparkContext and sqlCtx will refer to a JavaSQLContext
    • SPARK-1917: fix PySpark import of scipy.special functions · 5e98967b
      Uri Laserson authored
      https://issues.apache.org/jira/browse/SPARK-1917
      
      Author: Uri Laserson <laserson@cloudera.com>
      
      Closes #866 from laserson/SPARK-1917 and squashes the following commits:
      
      d947e8c [Uri Laserson] Added test for scipy.special importing
      1798bbd [Uri Laserson] SPARK-1917: fix PySpark import of scipy.special
    • Improve maven plugin configuration · d8c005d5
      witgo authored
      Author: witgo <witgo@qq.com>
      
      Closes #786 from witgo/maven_plugin and squashes the following commits:
      
      5de86a2 [witgo] Merge branch 'master' of https://github.com/apache/spark into maven_plugin
      c35ef73 [witgo] Improve maven plugin configuration
    • SPARK-1839: PySpark RDD#take() shouldn't always read from driver · 9909efc1
      Aaron Davidson authored
      This patch simply ports over the Scala implementation of RDD#take(), which reads the first partition at the driver, then decides how many more partitions it needs to read and will possibly start a real job if it's more than 1. (Note that SparkContext#runJob(allowLocal=true) only runs the job locally if there's 1 partition selected and no parent stages.)
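      A rough Scala shape of the scaling logic being ported (simplified; the real implementation also estimates how many partitions to try from the sizes seen so far):
      ```
      import org.apache.spark.rdd.RDD
      import scala.collection.mutable.ArrayBuffer
      import scala.reflect.ClassTag
      
      def take[T: ClassTag](rdd: RDD[T], num: Int): Array[T] = {
        val buf = ArrayBuffer.empty[T]
        var partsScanned = 0
        var numPartsToTry = 1                  // start with just the first partition
        while (buf.size < num && partsScanned < rdd.partitions.length) {
          val p = partsScanned until math.min(partsScanned + numPartsToTry, rdd.partitions.length)
          val left = num - buf.size
          val res = rdd.sparkContext.runJob(rdd, (it: Iterator[T]) => it.take(left).toArray, p)
          res.foreach(buf ++= _)
          numPartsToTry *= 4                   // read exponentially more partitions each round
          partsScanned = p.end
        }
        buf.take(num).toArray
      }
      ```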
      
      Author: Aaron Davidson <aaron@databricks.com>
      
      Closes #922 from aarondav/take and squashes the following commits:
      
      fa06df9 [Aaron Davidson] SPARK-1839: PySpark RDD#take() shouldn't always read from driver
    • Super minor: Close inputStream in SparkSubmitArguments · 7d52777e
      Aaron Davidson authored
      `Properties#load()` doesn't close the InputStream, but it'd be closed after being GC'd anyway...
      
      Also changed file.getName to file, because getName only shows the filename. This will show the full (possibly relative) path, which is less confusing if it's not found.
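      The underlying pattern, as a standalone sketch (hypothetical helper, not the SparkSubmitArguments code itself):
      ```
      import java.io.FileInputStream
      import java.util.Properties
      
      // Properties#load does not close the stream it reads from, so close it explicitly.
      def loadProps(path: String): Properties = {
        val props = new Properties()
        val in = new FileInputStream(path)
        try props.load(in) finally in.close()
        props
      }
      ```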
      
      Author: Aaron Davidson <aaron@databricks.com>
      
      Closes #914 from aarondav/tiny and squashes the following commits:
      
      db9d072 [Aaron Davidson] Super minor: Close inputStream in SparkSubmitArguments
    • [SQL] SPARK-1964 Add timestamp to hive metastore type parser. · 1a0da0ec
      Michael Armbrust authored
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #913 from marmbrus/timestampMetastore and squashes the following commits:
      
      8e0154f [Michael Armbrust] Add timestamp to hive metastore type parser.
    • Optionally include Hive as a dependency of the REPL. · 7463cd24
      Michael Armbrust authored
      Due to the way spark-shell launches from an assembly jar, I don't think this change will affect anyone who isn't trying to launch the shell directly from sbt.  That said, it is kinda nice to be able to launch all things directly from SBT when developing.
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #801 from marmbrus/hiveRepl and squashes the following commits:
      
      9570571 [Michael Armbrust] Optionally include Hive as a dependency of the REPL.
    • [SPARK-1947] [SQL] Child of SumDistinct or Average should be widened to... · 3ce81494
      Takuya UESHIN authored
      [SPARK-1947] [SQL] Child of SumDistinct or Average should be widened to prevent overflows the same as Sum.
      
      Child of `SumDistinct` or `Average` should be widened to prevent overflows the same as `Sum`.
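      A plain-Scala illustration of why the widening matters (the actual change inserts the equivalent Cast in Catalyst):
      ```
      val xs = Seq(Int.MaxValue, 1)
      val overflowed = xs.reduce(_ + _)     // -2147483648: the Int sum silently wraps
      val widened    = xs.map(_.toLong).sum // 2147483648: the widened sum is exact
      ```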
      
      Author: Takuya UESHIN <ueshin@happy-camper.st>
      
      Closes #902 from ueshin/issues/SPARK-1947 and squashes the following commits:
      
      99c3dcb [Takuya UESHIN] Insert Cast for SumDistinct and Average.
    • correct tiny comment error · 9ecc40d3
      Chen Chao authored
      Author: Chen Chao <crazyjvm@gmail.com>
      
      Closes #928 from CrazyJvm/patch-8 and squashes the following commits:
      
      144328b [Chen Chao] correct tiny comment error
    • [SPARK-1959] String "NULL" shouldn't be interpreted as null value · cf989601
      Cheng Lian authored
      JIRA issue: [SPARK-1959](https://issues.apache.org/jira/browse/SPARK-1959)
      
      Author: Cheng Lian <lian.cs.zju@gmail.com>
      
      Closes #909 from liancheng/spark-1959 and squashes the following commits:
      
      306659c [Cheng Lian] [SPARK-1959] String "NULL" shouldn't be interpreted as null value
    • SPARK-1976: fix the misleading part in streaming docs · 41bfdda3
      CodingCat authored
      Spark Streaming requires at least two working threads, but the document gives an example like:
      ```
      import org.apache.spark.api.java.function._
      import org.apache.spark.streaming._
      import org.apache.spark.streaming.api._
      // Create a StreamingContext with a local master
      val ssc = new StreamingContext("local", "NetworkWordCount", Seconds(1))
      ```
      http://spark.apache.org/docs/latest/streaming-programming-guide.html
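      The corrected form asks for a master with at least two threads, e.g.:
      ```
      import org.apache.spark.streaming._
      // One thread for the receiver, at least one left for processing.
      val ssc = new StreamingContext("local[2]", "NetworkWordCount", Seconds(1))
      ```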
      
      Author: CodingCat <zhunansjtu@gmail.com>
      
      Closes #924 from CodingCat/master and squashes the following commits:
      
      bb89f20 [CodingCat] update streaming docs
    • updated link to mailing list · 23ae3663
      nchammas authored
      Author: nchammas <nicholas.chammas@gmail.com>
      
      Closes #923 from nchammas/patch-1 and squashes the following commits:
      
      65c4d18 [nchammas] updated link to mailing list
    • Typo: and -> an · 9c1f204d
      Andrew Ash authored
      Author: Andrew Ash <andrew@andrewash.com>
      
      Closes #927 from ash211/patch-5 and squashes the following commits:
      
      79b577d [Andrew Ash] Typo: and -> an
  6. May 30, 2014
    • [SPARK-1901] worker should make sure executor has exited before updating executor's info · ff562b23
      Zhen Peng authored
      https://issues.apache.org/jira/browse/SPARK-1901
      
      Author: Zhen Peng <zhenpeng01@baidu.com>
      
      Closes #854 from zhpengg/bugfix-worker-kills-executor and squashes the following commits:
      
      21d380b [Zhen Peng] add some error messages
      506cea6 [Zhen Peng] add some docs for killProcess()
      a0b9860 [Zhen Peng] [SPARK-1901] worker should make sure executor has exited before updating executor's info
    • [SPARK-1971] Update MIMA to compare against Spark 1.0.0 · 79fa8fd4
      Prashant Sharma authored
      Author: Prashant Sharma <prashant.s@imaginea.com>
      
      Closes #910 from ScrapCodes/enable-mima/spark-core and squashes the following commits:
      
      79f3687 [Prashant Sharma] updated Mima to check against version 1.0
      1e8969c [Prashant Sharma] Spark core missed out on Mima settings. So in effect we never tested spark core for mima related errors.
    • [SPARK-1566] consolidate programming guide, and general doc updates · c8bf4131
      Matei Zaharia authored
      This is a fairly large PR to clean up and update the docs for 1.0. The major changes are:
      
      * A unified programming guide for all languages replaces language-specific ones and shows language-specific info in tabs
      * New programming guide sections on key-value pairs, unit testing, input formats beyond text, migrating from 0.9, and passing functions to Spark
      * Spark-submit guide moved to a separate page and expanded slightly
      * Various cleanups of the menu system, security docs, and others
      * Updated look of title bar to differentiate the docs from previous Spark versions
      
      You can find the updated docs at http://people.apache.org/~matei/1.0-docs/_site/ and in particular http://people.apache.org/~matei/1.0-docs/_site/programming-guide.html.
      
      Author: Matei Zaharia <matei@databricks.com>
      
      Closes #896 from mateiz/1.0-docs and squashes the following commits:
      
      03e6853 [Matei Zaharia] Some tweaks to configuration and YARN docs
      0779508 [Matei Zaharia] tweak
      ef671d4 [Matei Zaharia] Keep frames in JavaDoc links, and other small tweaks
      1bf4112 [Matei Zaharia] Review comments
      4414f88 [Matei Zaharia] tweaks
      d04e979 [Matei Zaharia] Fix some old links to Java guide
      a34ed33 [Matei Zaharia] tweak
      541bb3b [Matei Zaharia] miscellaneous changes
      fcefdec [Matei Zaharia] Moved submitting apps to separate doc
      61d72b4 [Matei Zaharia] stuff
      181f217 [Matei Zaharia] migration guide, remove old language guides
      e11a0da [Matei Zaharia] Add more API functions
      6a030a9 [Matei Zaharia] tweaks
      8db0ae3 [Matei Zaharia] Added key-value pairs section
      318d2c9 [Matei Zaharia] tweaks
      1c81477 [Matei Zaharia] New section on basics and function syntax
      e38f559 [Matei Zaharia] Actually added programming guide to Git
      a33d6fe [Matei Zaharia] First pass at updating programming guide to support all languages, plus other tweaks throughout
      3b6a876 [Matei Zaharia] More CSS tweaks
      01ec8bf [Matei Zaharia] More CSS tweaks
      e6d252e [Matei Zaharia] Change color of doc title bar to differentiate from 0.9.0
    • [SPARK-1820] Make GenerateMimaIgnore @DeveloperApi annotation aware. · eeee978a
      Prashant Sharma authored
      We add all the classes annotated as `DeveloperApi` to `~/.mima-excludes`.
      
      Author: Prashant Sharma <prashant.s@imaginea.com>
      Author: nikhil7sh <nikhilsharmalnmiit@gmail.ccom>
      
      Closes #904 from ScrapCodes/SPARK-1820/ignore-Developer-Api and squashes the following commits:
      
      de944f9 [Prashant Sharma] Code review.
      e3c5215 [Prashant Sharma] Incorporated patrick's suggestions and fixed the scalastyle build.
      9983a42 [nikhil7sh] [SPARK-1820] Make GenerateMimaIgnore @DeveloperApi annotation aware
  7. May 29, 2014
    • initial version of LPA · b7e28fa4
      Ankur Dave authored
      A straightforward implementation of LPA algorithm for detecting graph communities using the Pregel framework.  Amongst the growing literature on community detection algorithms in networks, LPA is perhaps the most elementary, and despite its flaws it remains a nice and simple approach.
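      Usage sketch (`graph` is any existing Graph; the superstep count is illustrative):
      ```
      import org.apache.spark.graphx.lib.LabelPropagation
      // Each vertex converges toward the label most common among its neighbors.
      val communities = LabelPropagation.run(graph, 5)
      ```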
      
      Author: Ankur Dave <ankurdave@gmail.com>
      Author: haroldsultan <haroldsultan@gmail.com>
      Author: Harold Sultan <haroldsultan@gmail.com>
      
      Closes #905 from haroldsultan/master and squashes the following commits:
      
      327aee0 [haroldsultan] Merge pull request #2 from ankurdave/label-propagation
      227a4d0 [Ankur Dave] Untabify
      0ac574c [haroldsultan] Merge pull request #1 from ankurdave/label-propagation
      0e24303 [Ankur Dave] Add LabelPropagationSuite
      84aa061 [Ankur Dave] LabelPropagation: Fix compile errors and style; rename from LPA
      9830342 [Harold Sultan] initial version of LPA