  1. Apr 09, 2014
    • [SPARK-1390] Refactoring of matrices backed by RDDs · 9689b663
      Xiangrui Meng authored
      This refactors the interfaces for matrices backed by RDDs. It would be better to have a clear separation between local matrices and those backed by RDDs. Right now, we have
      
      1. `org.apache.spark.mllib.linalg.SparseMatrix`, which is a wrapper over an RDD of matrix entries, i.e., coordinate list format.
      2. `org.apache.spark.mllib.linalg.TallSkinnyDenseMatrix`, which is a wrapper over RDD[Array[Double]], i.e. row-oriented format.
      
      We will see a naming collision when we introduce a local `SparseMatrix`, and the name `TallSkinnyDenseMatrix` is no longer exact if we switch from `RDD[Array[Double]]` to `RDD[Vector]`. It would be better to have "RDD" in the class name to suggest that operations may trigger jobs.
      
      The proposed names are (all under `org.apache.spark.mllib.linalg.rdd`):
      
      1. `RDDMatrix`: trait for matrices backed by one or more RDDs
      2. `CoordinateRDDMatrix`: wrapper of `RDD[(Long, Long, Double)]`
      3. `RowRDDMatrix`: wrapper of `RDD[Vector]` whose rows do not have special ordering
      4. `IndexedRowRDDMatrix`: wrapper of `RDD[(Long, Vector)]` whose rows are associated with indices
      
      The current code also introduces local matrices.
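
      A minimal sketch of what this hierarchy might look like (the names come from the list above; the method signatures are illustrative assumptions, not the final API):

      ```scala
      import org.apache.spark.mllib.linalg.Vector
      import org.apache.spark.rdd.RDD

      // Trait for matrices backed by one or more RDDs (sketch only).
      trait RDDMatrix extends Serializable {
        def numRows(): Long
        def numCols(): Long
      }

      // Wrapper of RDD[(Long, Long, Double)], i.e., coordinate list format.
      class CoordinateRDDMatrix(val entries: RDD[(Long, Long, Double)]) extends RDDMatrix {
        def numRows(): Long = entries.map(_._1).max() + 1
        def numCols(): Long = entries.map(_._2).max() + 1
      }

      // Wrapper of RDD[Vector] whose rows have no particular ordering.
      class RowRDDMatrix(val rows: RDD[Vector]) extends RDDMatrix {
        def numRows(): Long = rows.count()
        def numCols(): Long = rows.first().size
      }
      ```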
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #296 from mengxr/mat and squashes the following commits:
      
      24d8294 [Xiangrui Meng] fix for groupBy returning Iterable
      bfc2b26 [Xiangrui Meng] merge master
      8e4f1f5 [Xiangrui Meng] Merge branch 'master' into mat
      0135193 [Xiangrui Meng] address Reza's comments
      03cd7e1 [Xiangrui Meng] add pca/gram to IndexedRowMatrix add toBreeze to DistributedMatrix for test simplify tests
      b177ff1 [Xiangrui Meng] address Matei's comments
      be119fe [Xiangrui Meng] rename m/n to numRows/numCols for local matrix add tests for matrices
      b881506 [Xiangrui Meng] rename SparkPCA/SVD to TallSkinnyPCA/SVD
      e7d0d4a [Xiangrui Meng] move IndexedRDDMatrixRow to IndexedRowRDDMatrix
      0d1491c [Xiangrui Meng] fix test errors
      a85262a [Xiangrui Meng] rename RDDMatrixRow to IndexedRDDMatrixRow
      b8b6ac3 [Xiangrui Meng] Remove old code
      4cf679c [Xiangrui Meng] port pca to RowRDDMatrix, and add multiply and covariance
      7836e2f [Xiangrui Meng] initial refactoring of matrices backed by RDDs
      9689b663
    • Spark-939: allow user jars to take precedence over spark jars · fa0524fd
      Holden Karau authored
      I still need to do a small bit of refactoring [mostly switching the one Java file back to a Scala file and using it in both class loaders], but comments on other things I should do would be great.
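
      For context, a child-first ("user classpath first") loader of the kind this change introduces might look roughly like this (a simplified sketch, not the actual ExecutorURLClassLoader code):

      ```scala
      import java.net.{URL, URLClassLoader}

      // Simplified sketch of a child-first loader: look in the user's jars
      // before delegating to the parent (Spark) loader.
      class ChildFirstURLClassLoader(userJars: Array[URL], parent: ClassLoader)
        extends URLClassLoader(userJars, null) { // null parent: skip normal delegation

        override def loadClass(name: String): Class[_] = {
          try {
            super.loadClass(name)    // user's jars (and bootstrap classes) first
          } catch {
            case _: ClassNotFoundException =>
              parent.loadClass(name) // fall back to Spark's own classes
          }
        }
      }
      ```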
      
      Author: Holden Karau <holden@pigscanfly.ca>
      
      Closes #217 from holdenk/spark-939-allow-user-jars-to-take-precedence-over-spark-jars and squashes the following commits:
      
      cf0cac9 [Holden Karau] Fix the executorclassloader
      1955232 [Holden Karau] Fix long line in TestUtils
      8f89965 [Holden Karau] Fix tests for new class name
      7546549 [Holden Karau] CR feedback, merge some of the testutils methods down, rename the classloader
      644719f [Holden Karau] User the class generator for the repl class loader tests too
      f0b7114 [Holden Karau] Fix the core/src/test/scala/org/apache/spark/executor/ExecutorURLClassLoaderSuite.scala tests
      204b199 [Holden Karau] Fix the generated classes
      9f68f10 [Holden Karau] Start rewriting the ExecutorURLClassLoaderSuite to not use the hard coded classes
      858aba2 [Holden Karau] Remove a bunch of test junk
      261aaee [Holden Karau] simplify executorurlclassloader a bit
      7a7bf5f [Holden Karau] CR feedback
      d4ae848 [Holden Karau] rewrite component into scala
      aa95083 [Holden Karau] CR feedback
      7752594 [Holden Karau] re-add https comment
      a0ef85a [Holden Karau] Fix style issues
      125ea7f [Holden Karau] Easier to just remove those files, we don't need them
      bb8d179 [Holden Karau] Fix issues with the repl class loader
      241b03d [Holden Karau] fix my rat excludes
      a343350 [Holden Karau] Update rat-excludes and remove a useless file
      d90d217 [Holden Karau] Fix fall back with custom class loader and add a test for it
      4919bf9 [Holden Karau] Fix parent calling class loader issue
      8a67302 [Holden Karau] Test are good
      9e2d236 [Holden Karau] It works comrade
      691ee00 [Holden Karau] It works ish
      dc4fe44 [Holden Karau] Does not depend on being in my home directory
      47046ff [Holden Karau] Remove bad import'
      22d83cb [Holden Karau] Add a test suite for the executor url class loader suite
      7ef4628 [Holden Karau] Clean up
      792d961 [Holden Karau] Almost works
      16aecd1 [Holden Karau] Doesn't quite work
      8d2241e [Holden Karau] Add a FakeClass for testing ClassLoader precedence options
      648b559 [Holden Karau] Both class loaders compile. Now for testing
      e1d9f71 [Holden Karau] One loader workers.
      fa0524fd
  2. Apr 08, 2014
    • [SPARK-1434] [MLLIB] change labelParser from anonymous function to trait · b9e0c937
      Xiangrui Meng authored
      This is a patch to address @mateiz's comment in https://github.com/apache/spark/pull/245
      
      MLUtils#loadLibSVMData uses an anonymous function for the label parser, which Java users won't like. So I made LabelParser a trait and provided two implementations: binary and multiclass.
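
      A sketch of the trait's shape (the `parse` method and singleton objects follow the commit log below; the details are illustrative, not the exact MLlib code):

      ```scala
      // Trait so Java users can pass a parser object instead of a Scala closure.
      trait LabelParser extends Serializable {
        def parse(labelString: String): Double
      }

      // Binary labels: anything positive becomes 1.0, everything else 0.0.
      object BinaryLabelParser extends LabelParser {
        override def parse(labelString: String): Double =
          if (labelString.toDouble > 0) 1.0 else 0.0
      }

      // Multiclass labels: keep the numeric value as-is.
      object MulticlassLabelParser extends LabelParser {
        override def parse(labelString: String): Double = labelString.toDouble
      }
      ```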
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #345 from mengxr/label-parser and squashes the following commits:
      
      ac44409 [Xiangrui Meng] use singleton objects for label parsers
      3b1a7c6 [Xiangrui Meng] add tests for label parsers
      c2e571c [Xiangrui Meng] rename LabelParser.apply to LabelParser.parse use extends for singleton
      11c94e0 [Xiangrui Meng] add return types
      7f8eb36 [Xiangrui Meng] change labelParser from annoymous function to trait
      b9e0c937
    • Spark 1271: Co-Group and Group-By should pass Iterable[X] · ce8ec545
      Holden Karau authored
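
      The user-visible effect, sketched (`sc` is an existing SparkContext):

      ```scala
      val pairs = sc.parallelize(Seq(("a", 1), ("a", 2), ("b", 3)))

      // Grouping now yields Iterable[V] instead of Seq[V], so values can be
      // consumed without materializing each group as a sequence first:
      val grouped: org.apache.spark.rdd.RDD[(String, Iterable[Int])] = pairs.groupByKey()
      grouped.mapValues(_.sum).collect() // Array((a,3), (b,3))
      ```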
      Author: Holden Karau <holden@pigscanfly.ca>
      
      Closes #242 from holdenk/spark-1320-cogroupandgroupshouldpassiterator and squashes the following commits:
      
      f289536 [Holden Karau] Fix bad merge, should have been Iterable rather than Iterator
      77048f8 [Holden Karau] Fix merge up to master
      d3fe909 [Holden Karau] use toSeq instead
      7a092a3 [Holden Karau] switch resultitr to resultiterable
      eb06216 [Holden Karau] maybe I should have had a coffee first. use correct import for guava iterables
      c5075aa [Holden Karau] If guava 14 had iterables
      2d06e10 [Holden Karau] Fix Java 8 cogroup tests for the new API
      11e730c [Holden Karau] Fix streaming tests
      66b583d [Holden Karau] Fix the core test suite to compile
      4ed579b [Holden Karau] Refactor from iterator to iterable
      d052c07 [Holden Karau] Python tests now pass with iterator pandas
      3bcd81d [Holden Karau] Revert "Try and make pickling list iterators work"
      cd1e81c [Holden Karau] Try and make pickling list iterators work
      c60233a [Holden Karau] Start investigating moving to iterators for python API like the Java/Scala one. tl;dr: We will have to write our own iterator since the default one doesn't pickle well
      88a5cef [Holden Karau] Fix cogroup test in JavaAPISuite for streaming
      a5ee714 [Holden Karau] oops, was checking wrong iterator
      e687f21 [Holden Karau] Fix groupbykey test in JavaAPISuite of streaming
      ec8cc3e [Holden Karau] Fix test issues!
      4b0eeb9 [Holden Karau] Switch cast in PairDStreamFunctions
      fa395c9 [Holden Karau] Revert "Add a join based on the problem in SVD"
      ec99e32 [Holden Karau] Revert "Revert this but for now put things in list pandas"
      b692868 [Holden Karau] Revert
      7e533f7 [Holden Karau] Fix the bug
      8a5153a [Holden Karau] Revert me, but we have some stuff to debug
      b4e86a9 [Holden Karau] Add a join based on the problem in SVD
      c4510e2 [Holden Karau] Revert this but for now put things in list pandas
      b4e0b1d [Holden Karau] Fix style issues
      71e8b9f [Holden Karau] I really need to stop calling size on iterators, it is the path of sadness.
      b1ae51a [Holden Karau] Fix some of the types in the streaming JavaAPI suite. Probably still needs more work
      37888ec [Holden Karau] core/tests now pass
      249abde [Holden Karau] org.apache.spark.rdd.PairRDDFunctionsSuite passes
      6698186 [Holden Karau] Revert "I think this might be a bad rabbit hole. Started work to make CoGroupedRDD use iterator and then went crazy"
      fe992fe [Holden Karau] hmmm try and fix up basic operation suite
      172705c [Holden Karau] Fix Java API suite
      caafa63 [Holden Karau] I think this might be a bad rabbit hole. Started work to make CoGroupedRDD use iterator and then went crazy
      88b3329 [Holden Karau] Fix groupbykey to actually give back an iterator
      4991af6 [Holden Karau] Fix some tests
      be50246 [Holden Karau] Calling size on an iterator is not so good if we want to use it after
      687ffbc [Holden Karau] This is the it compiles point of replacing Seq with Iterator and JList with JIterator in the groupby and cogroup signatures
      ce8ec545
    • SPARK-1433: Upgrade Mesos dependency to 0.17.0 · 12c077d5
      Sandeep authored
      Mesos 0.13.0 was released 6 months ago.
      Upgrade Mesos dependency to 0.17.0
      
      Author: Sandeep <sandeep@techaddict.me>
      
      Closes #355 from techaddict/mesos_update and squashes the following commits:
      
      f1abeee [Sandeep] SPARK-1433: Upgrade Mesos dependency to 0.17.0 Mesos 0.13.0 was released 6 months ago. Upgrade Mesos dependency to 0.17.0
      12c077d5
    • [SPARK-1397] Notify SparkListeners when stages fail or are cancelled. · fac6085c
      Kay Ousterhout authored
      [I wanted to post this for folks to comment but it depends on (and thus includes the changes in) a currently outstanding PR, #305.  You can look at just the second commit: https://github.com/kayousterhout/spark-1/commit/93f08baf731b9eaf5c9792a5373560526e2bccac to see just the changes relevant to this PR]
      
      Previously, when stages fail or get cancelled, the SparkListener is only notified
      indirectly through the SparkListenerJobEnd, where we sometimes pass in a single
      stage that failed.  This worked before job cancellation, because jobs would only fail
      due to a single stage failure.  However, with job cancellation, multiple running stages
      can fail when a job gets cancelled.  Right now, this is not handled correctly, which
      results in stages that get stuck in the “Running Stages” window in the UI even
      though they’re dead.
      
      This PR changes the SparkListenerStageCompleted event to a SparkListenerStageEnded
      event, and uses this event to tell SparkListeners when stages fail in addition to when
      they complete successfully.  This change is NOT publicly backward compatible for two
      reasons.  First, it changes the SparkListener interface.  We could alternately add a new event,
      SparkListenerStageFailed, and keep the existing SparkListenerStageCompleted.  However,
      this is less consistent with the listener events for tasks / jobs ending, and will result in some
      code duplication for listeners (because failed and completed stages are handled in similar
      ways).  Note that I haven’t finished updating the JSON code to correctly handle the new event
      because I’m waiting for feedback on whether this is a good or bad idea (hence the “WIP”).
      
      It is also not backwards compatible because it changes the publicly visible JobWaiter.jobFailed()
      method to no longer include a stage that caused the failure.  I think this change should definitely
      stay, because with cancellation (as described above), a failure isn’t necessarily caused by a
      single stage.
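
      A sketch of how a listener might consume the proposed event (the `SparkListenerStageEnded` name and callback are taken from the description above and were still under discussion, so treat this as illustrative, not the final API):

      ```scala
      // Illustrative only: assumes the proposed SparkListenerStageEnded event
      // with a callback named onStageEnded; the final names may differ.
      class StageOutcomeListener extends SparkListener {
        override def onStageEnded(stageEnded: SparkListenerStageEnded) {
          // One callback covers both failed/cancelled and successful stages,
          // so the UI can move cancelled stages out of "Running Stages".
        }
      }
      ```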
      
      Author: Kay Ousterhout <kayousterhout@gmail.com>
      
      Closes #309 from kayousterhout/stage_cancellation and squashes the following commits:
      
      5533ecd [Kay Ousterhout] Fixes in response to Mark's review
      320c7c7 [Kay Ousterhout] Notify SparkListeners when stages fail or are cancelled.
      fac6085c
    • SPARK-1445: compute-classpath should not print error if lib_managed not found · e25b5934
      Aaron Davidson authored
      This check was added for the assembly jar but forgotten for the datanucleus jars.
      
      Author: Aaron Davidson <aaron@databricks.com>
      
      Closes #361 from aarondav/cc and squashes the following commits:
      
      8facc16 [Aaron Davidson] SPARK-1445: compute-classpath should not print error if lib_managed not found
      e25b5934
    • SPARK-1348 binding Master, Worker, and App Web UI to all interfaces · a8d86b08
      Kan Zhang authored
      Author: Kan Zhang <kzhang@apache.org>
      
      Closes #318 from kanzhang/SPARK-1348 and squashes the following commits:
      
      e625a5f [Kan Zhang] reverting the changes to startJettyServer()
      7a8084e [Kan Zhang] SPARK-1348 binding Master, Worker, and App Web UI to all interfaces
      a8d86b08
    • Remove extra semicolon in import statement and unused import in ApplicationMaster · 3bc05489
      Henry Saputra authored
      Small nit cleanup to remove an extra semicolon and an unused import in Yarn's stable ApplicationMaster (it bothers me every time I see it).
      
      Author: Henry Saputra <hsaputra@apache.org>
      
      Closes #358 from hsaputra/nitcleanup_removesemicolon_import_applicationmaster and squashes the following commits:
      
      bffb685 [Henry Saputra] Remove extra semicolon in import statement and unused import in ApplicationMaster.scala
      3bc05489
    • [SPARK-1396] Properly cleanup DAGScheduler on job cancellation. · 6dc5f584
      Kay Ousterhout authored
      Previously, when jobs were cancelled, not all of the state in the
      DAGScheduler was cleaned up, leading to a slow memory leak in the
      DAGScheduler.  As we expose easier ways to cancel jobs, it's more
      important to fix these issues.
      
      This commit also fixes a second and less serious problem, which is that
      previously, when a stage failed, not all of the appropriate stages
      were cancelled.  See the "failure of stage used by two jobs" test
      for an example of this.  This just meant that extra work was done, and is
      not a correctness problem.
      
      This commit adds 3 tests.  “run shuffle with map stage failure” is
      a new test to more thoroughly test this functionality, and passes on
      both the old and new versions of the code.  “trivial job
      cancellation” fails on the old code because all state wasn’t cleaned
      up correctly when jobs were cancelled (we didn’t remove the job from
      resultStageToJob).  “failure of stage used by two jobs” fails on the
      old code because taskScheduler.cancelTasks wasn’t called for one of
      the stages (see test comments).
      
      This should be checked in before #246, which makes it easier to
      cancel stages / jobs.
      
      Author: Kay Ousterhout <kayousterhout@gmail.com>
      
      Closes #305 from kayousterhout/incremental_abort_fix and squashes the following commits:
      
      f33d844 [Kay Ousterhout] Mark review comments
      9217080 [Kay Ousterhout] Properly cleanup DAGScheduler on job cancellation.
      6dc5f584
    • [SPARK-1331] Added graceful shutdown to Spark Streaming · 83ac9a4b
      Tathagata Das authored
      The current version of StreamingContext.stop() directly kills all the data receivers (NetworkReceiver) without waiting for the data already received to be persisted and processed. This PR provides the fix. Now, when StreamingContext.stop() is called, the following sequence of steps happens.
      1. The driver will send a stop signal to all the active receivers.
      2. Each receiver, when it gets a stop signal from the driver, first stops receiving more data, then waits for the thread that persists data blocks to the BlockManager to finish persisting all received data, and finally quits.
      3. After all the receivers have stopped, the driver will wait for the Job Generator and Job Scheduler to finish processing all the received data.
      
      It also fixes the semantics of StreamingContext.start and stop. It will throw appropriate errors and warnings if stop() is called before start(), stop() is called twice, etc.
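
      A usage sketch of the graceful path (parameter names here are assumptions based on the description):

      ```scala
      import org.apache.spark.streaming.{Seconds, StreamingContext}

      val ssc = new StreamingContext(conf, Seconds(1)) // conf: an existing SparkConf
      ssc.start()
      // ... let the streams run ...

      // Graceful stop: receivers stop accepting new data, already-received
      // blocks are persisted and processed, then the context shuts down.
      ssc.stop(stopSparkContext = true, stopGracefully = true)
      ```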
      
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      
      Closes #247 from tdas/graceful-shutdown and squashes the following commits:
      
      61c0016 [Tathagata Das] Updated MIMA binary check excludes.
      ae1d39b [Tathagata Das] Merge remote-tracking branch 'apache-github/master' into graceful-shutdown
      6b59cfc [Tathagata Das] Minor changes based on Andrew's comment on PR.
      d0b8d65 [Tathagata Das] Reduced time taken by graceful shutdown unit test.
      f55bc67 [Tathagata Das] Fix scalastyle
      c69b3a7 [Tathagata Das] Updates based on Patrick's comments.
      c43b8ae [Tathagata Das] Added graceful shutdown to Spark Streaming.
      83ac9a4b
    • [SPARK-1103] Automatic garbage collection of RDD, shuffle and broadcast data · 11eabbe1
      Tathagata Das authored
      This PR allows Spark to automatically clean up metadata and data related to persisted RDDs, shuffles and broadcast variables when the corresponding RDDs, shuffles and broadcast variables fall out of scope in the driver program. This is still a work in progress, as broadcast cleanup has not been implemented.
      
      **Implementation Details**
      A new class `ContextCleaner` is responsible for cleaning up all this state. It is instantiated as part of a `SparkContext`. The RDD and ShuffleDependency classes have an overridden `finalize()` function that gets called whenever their instances go out of scope. The `finalize()` function enqueues the object's identifier (i.e. RDD ID, shuffle ID, etc.) with the `ContextCleaner`, which is a very short and cheap operation and should not significantly affect the garbage collection mechanism. The `ContextCleaner`, on a different thread, performs the cleanup, whose details are given below.
      
      *RDD cleanup:*
      `ContextCleaner` calls `RDD.unpersist()` to clean up persisted RDDs. Regarding metadata, the DAGScheduler automatically cleans up all metadata related to an RDD after all jobs have completed. Only `SparkContext.persistentRDDs` keeps strong references to persisted RDDs. The `TimeStampedHashMap` used for that has been replaced by `TimeStampedWeakValueHashMap`, which keeps only weak references to the RDDs, allowing them to be garbage collected.
      
      *Shuffle cleanup:*
      A new BlockManager message, `RemoveShuffle(<shuffle ID>)`, asks the `BlockManagerMaster` and currently active `BlockManager`s to delete all the disk blocks related to the shuffle ID. `ContextCleaner` cleans up shuffle data using this message and also cleans up the metadata in the `MapOutputTracker` on the driver. The `MapOutputTracker` at the workers, which caches the shuffle metadata, maintains a `BoundedHashMap` to limit the shuffle information it caches. Refetching the shuffle information from the driver is not too costly.
      
      *Broadcast cleanup:*
      To be done. [This PR](https://github.com/apache/incubator-spark/pull/543/) adds a mechanism for explicit cleanup of broadcast variables. `Broadcast.finalize()` will enqueue its own ID with the ContextCleaner, and that PR's mechanism will be used to unpersist the Broadcast data.
      
      *Other cleanup:*
      `ShuffleMapTask` and `ResultTask` cache task information and used TTL-based cleanup (via `TimeStampedHashMap`), so nothing got cleaned up if the TTL was not set. They now use `BoundedHashMap` instead, keeping a limited amount of map output information. The cost of repopulating the cache when necessary is very small.
      
      **Current state of implementation**
      RDD and shuffle cleanup are implemented. Things left to be done:
      - Broadcast variable cleanup.
      - Automatic cleanup of keys with empty weak references as values in `TimeStampedWeakValueHashMap`.
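
      The commit log below shows the design later moved from `finalize()` to a `ReferenceQueue` (commit `f2881fd`). A self-contained sketch of that pattern, with hypothetical names:

      ```scala
      import java.lang.ref.{ReferenceQueue, WeakReference}

      // Each tracked object gets a weak reference carrying its ID; once the
      // object is garbage collected, the reference surfaces on the queue and
      // a daemon thread performs the cleanup (unpersist RDD, remove shuffle
      // blocks, etc.).
      class CleanerSketch(doCleanup: Int => Unit) {
        private val refQueue = new ReferenceQueue[AnyRef]

        private class CleanupRef(obj: AnyRef, val id: Int)
          extends WeakReference[AnyRef](obj, refQueue)

        // Keep strong references to the CleanupRefs themselves.
        private var refs = Set.empty[CleanupRef]

        def register(obj: AnyRef, id: Int): Unit = synchronized {
          refs += new CleanupRef(obj, id)
        }

        private val cleaner = new Thread("cleaner") {
          override def run(): Unit = while (true) {
            refQueue.remove() match { // blocks until a referent is collected
              case ref: CleanupRef =>
                CleanerSketch.this.synchronized { refs -= ref }
                doCleanup(ref.id)
              case _ =>
            }
          }
        }
        cleaner.setDaemon(true)
        cleaner.start()
      }
      ```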
      
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      Author: Andrew Or <andrewor14@gmail.com>
      Author: Roman Pastukhov <ignatich@mail.ru>
      
      Closes #126 from tdas/state-cleanup and squashes the following commits:
      
      61b8d6e [Tathagata Das] Fixed issue with Tachyon + new BlockManager methods.
      f489fdc [Tathagata Das] Merge remote-tracking branch 'apache/master' into state-cleanup
      d25a86e [Tathagata Das] Fixed stupid typo.
      cff023c [Tathagata Das] Fixed issues based on Andrew's comments.
      4d05314 [Tathagata Das] Scala style fix.
      2b95b5e [Tathagata Das] Added more documentation on Broadcast implementations, specially which blocks are told about to the driver. Also, fixed Broadcast API to hide destroy functionality.
      41c9ece [Tathagata Das] Added more unit tests for BlockManager, DiskBlockManager, and ContextCleaner.
      6222697 [Tathagata Das] Fixed bug and adding unit test for removeBroadcast in BlockManagerSuite.
      104a89a [Tathagata Das] Fixed failing BroadcastSuite unit tests by introducing blocking for removeShuffle and removeBroadcast in BlockManager*
      a430f06 [Tathagata Das] Fixed compilation errors.
      b27f8e8 [Tathagata Das] Merge pull request #3 from andrewor14/cleanup
      cd72d19 [Andrew Or] Make automatic cleanup configurable (not documented)
      ada45f0 [Andrew Or] Merge branch 'state-cleanup' of github.com:tdas/spark into cleanup
      a2cc8bc [Tathagata Das] Merge remote-tracking branch 'apache/master' into state-cleanup
      c5b1d98 [Andrew Or] Address Patrick's comments
      a6460d4 [Andrew Or] Merge github.com:apache/spark into cleanup
      762a4d8 [Tathagata Das] Merge pull request #1 from andrewor14/cleanup
      f0aabb1 [Andrew Or] Correct semantics for TimeStampedWeakValueHashMap + add tests
      5016375 [Andrew Or] Address TD's comments
      7ed72fb [Andrew Or] Fix style test fail + remove verbose test message regarding broadcast
      634a097 [Andrew Or] Merge branch 'state-cleanup' of github.com:tdas/spark into cleanup
      7edbc98 [Tathagata Das] Merge remote-tracking branch 'apache-github/master' into state-cleanup
      8557c12 [Andrew Or] Merge github.com:apache/spark into cleanup
      e442246 [Andrew Or] Merge github.com:apache/spark into cleanup
      88904a3 [Andrew Or] Make TimeStampedWeakValueHashMap a wrapper of TimeStampedHashMap
      fbfeec8 [Andrew Or] Add functionality to query executors for their local BlockStatuses
      34f436f [Andrew Or] Generalize BroadcastBlockId to remove BroadcastHelperBlockId
      0d17060 [Andrew Or] Import, comments, and style fixes (minor)
      c92e4d9 [Andrew Or] Merge github.com:apache/spark into cleanup
      f201a8d [Andrew Or] Test broadcast cleanup in ContextCleanerSuite + remove BoundedHashMap
      e95479c [Andrew Or] Add tests for unpersisting broadcast
      544ac86 [Andrew Or] Clean up broadcast blocks through BlockManager*
      d0edef3 [Andrew Or] Add framework for broadcast cleanup
      ba52e00 [Andrew Or] Refactor broadcast classes
      c7ccef1 [Andrew Or] Merge branch 'bc-unpersist-merge' of github.com:ignatich/incubator-spark into cleanup
      6c9dcf6 [Tathagata Das] Added missing Apache license
      d2f8b97 [Tathagata Das] Removed duplicate unpersistRDD.
      a007307 [Tathagata Das] Merge remote-tracking branch 'apache/master' into state-cleanup
      620eca3 [Tathagata Das] Changes based on PR comments.
      f2881fd [Tathagata Das] Changed ContextCleaner to use ReferenceQueue instead of finalizer
      e1fba5f [Tathagata Das] Style fix
      892b952 [Tathagata Das] Removed use of BoundedHashMap, and made BlockManagerSlaveActor cleanup shuffle metadata in MapOutputTrackerWorker.
      a7260d3 [Tathagata Das] Added try-catch in context cleaner and null value cleaning in TimeStampedWeakValueHashMap.
      e61daa0 [Tathagata Das] Modifications based on the comments on PR 126.
      ae9da88 [Tathagata Das] Removed unncessary TimeStampedHashMap from DAGScheduler, added try-catches in finalize() methods, and replaced ArrayBlockingQueue to LinkedBlockingQueue to avoid blocking in Java's finalizing thread.
      cb0a5a6 [Tathagata Das] Fixed docs and styles.
      a24fefc [Tathagata Das] Merge remote-tracking branch 'apache/master' into state-cleanup
      8512612 [Tathagata Das] Changed TimeStampedHashMap to use WrappedJavaHashMap.
      e427a9e [Tathagata Das] Added ContextCleaner to automatically clean RDDs and shuffles when they fall out of scope. Also replaced TimeStampedHashMap to BoundedHashMaps and TimeStampedWeakValueHashMap for the necessary hashmap behavior.
      80dd977 [Roman Pastukhov] Fix for Broadcast unpersist patch.
      1e752f1 [Roman Pastukhov] Added unpersist method to Broadcast.
      11eabbe1
    • [SPARK-1402] Added 3 more compression schemes · 0d0493fc
      Cheng Lian authored
      JIRA issue: [SPARK-1402](https://issues.apache.org/jira/browse/SPARK-1402)
      
      This PR provides 3 more compression schemes for Spark SQL in-memory columnar storage:
      
      * `BooleanBitSet`
      * `IntDelta`
      * `LongDelta`
      
      Now there are 6 compression schemes in total, including the no-op `PassThrough` scheme.
      
      Also fixed a bug in PR #286: not all compression schemes were added as available schemes when accessing an in-memory column, and when a column was compressed with an unrecognised scheme, `ColumnAccessor` threw an exception.
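
      To illustrate the idea behind the two delta schemes: store each value as a small difference from its predecessor when it fits (a simplified sketch, not the actual `IntDelta` implementation):

      ```scala
      // Emit a Byte delta when the difference from the previous value fits in
      // a byte, otherwise a marker plus the full Int.
      def deltaEncode(values: Array[Int]): Seq[Either[Byte, Int]] = {
        var prev = 0
        var first = true
        values.toSeq.map { v =>
          val out =
            if (first || v - prev < Byte.MinValue || v - prev > Byte.MaxValue) Right(v)
            else Left((v - prev).toByte)
          first = false
          prev = v
          out
        }
      }

      // Sorted or slowly-changing columns compress to ~1 byte per value:
      deltaEncode(Array(100, 101, 103, 104)) // Right(100), Left(1), Left(2), Left(1)
      ```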
      
      Author: Cheng Lian <lian.cs.zju@gmail.com>
      
      Closes #330 from liancheng/moreCompressionSchemes and squashes the following commits:
      
      1d037b8 [Cheng Lian] Fixed SPARK-1436: in-memory column byte buffer must be able to be accessed multiple times
      d7c0e8f [Cheng Lian] Added test suite for IntegralDelta (IntDelta & LongDelta)
      3c1ad7a [Cheng Lian] Added test suite for BooleanBitSet, refactored other test suites
      44fe4b2 [Cheng Lian] Refactored CompressionScheme, added 3 more compression schemes.
      0d0493fc
  3. Apr 07, 2014
    • Change timestamp cast semantics. When cast to numeric types, return the unix... · f27e56aa
      Reynold Xin authored
      Change timestamp cast semantics. When cast to numeric types, return the unix time in seconds (instead of millis).
      
      @marmbrus @chenghao-intel
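
      An illustrative example of the new semantics (table and column names are assumed):

      ```scala
      // Casting a timestamp to a numeric type now yields Unix time in seconds:
      sql("SELECT CAST(ts AS INT) FROM events")    // e.g. 1396880000 (seconds, not millis)
      // Casting to DOUBLE preserves sub-second precision (see commit 18aacd3 below):
      sql("SELECT CAST(ts AS DOUBLE) FROM events") // e.g. 1396880000.123
      ```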
      
      Author: Reynold Xin <rxin@apache.org>
      
      Closes #352 from rxin/timestamp-cast and squashes the following commits:
      
      18aacd3 [Reynold Xin] Fixed precision for double.
      2adb235 [Reynold Xin] Change timestamp cast semantics. When cast to numeric types, return the unix time in seconds (instead of millis).
      f27e56aa
    • Added eval for Rand (without any support for user-defined seed). · 31e6fff0
      Reynold Xin authored
      Author: Reynold Xin <rxin@apache.org>
      
      Closes #349 from rxin/rand and squashes the following commits:
      
      fd11322 [Reynold Xin] Added eval for Rand (without any support for user-defined seed).
      31e6fff0
    • Removed the default eval implementation from Expression, and added a bunch of... · 55dfd5dc
      Reynold Xin authored
      Removed the default eval implementation from Expression, and added a bunch of overrides in classes I touched.
      
      It is more robust to not provide a default implementation for Expressions.
      
      Author: Reynold Xin <rxin@apache.org>
      
      Closes #350 from rxin/eval-default and squashes the following commits:
      
      0a83b8f [Reynold Xin] Removed the default eval implementation from Expression, and added a bunch of override's in classes I touched.
      55dfd5dc
    • [sql] Rename execution/aggregates.scala Aggregate.scala, and added a bunch of... · 14c9238a
      Reynold Xin authored
      [sql] Rename execution/aggregates.scala to Aggregate.scala, and added a bunch of private[this] to variables.
      
      Author: Reynold Xin <rxin@apache.org>
      
      Closes #348 from rxin/aggregate and squashes the following commits:
      
      f4bc36f [Reynold Xin] Rename execution/aggregates.scala Aggregate.scala, and added a bunch of private[this] to variables.
      14c9238a
    • SPARK-1099: Introduce local[*] mode to infer number of cores · 0307db0f
      Aaron Davidson authored
      This is the default mode for running spark-shell and pyspark, intended to allow users running spark for the first time to see the performance benefits of using multiple cores, while not breaking backwards compatibility for users who use "local" mode and expect exactly 1 core.
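
      Usage sketch:

      ```scala
      import org.apache.spark.{SparkConf, SparkContext}

      // "local[*]" asks for one worker thread per available core, while plain
      // "local" still means exactly one core:
      val conf = new SparkConf().setMaster("local[*]").setAppName("demo")
      val sc = new SparkContext(conf)
      ```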
      
      Author: Aaron Davidson <aaron@databricks.com>
      
      Closes #182 from aarondav/110 and squashes the following commits:
      
      a88294c [Aaron Davidson] Rebased changes for new spark-shell
      a9f393e [Aaron Davidson] SPARK-1099: Introduce local[*] mode to infer number of cores
      0307db0f
    • HOTFIX: Disable actor input stream test. · 2a2ca48b
      Patrick Wendell authored
      This test makes incorrect assumptions about the behavior of Thread.sleep().
      
      Author: Patrick Wendell <pwendell@gmail.com>
      
      Closes #347 from pwendell/stream-tests and squashes the following commits:
      
      10e09e0 [Patrick Wendell] HOTFIX: Disable actor input stream.
      2a2ca48b
    • SPARK-1252. On YARN, use container-log4j.properties for executors · 9dd8b916
      Sandy Ryza authored
      container-log4j.properties is a file that YARN provides so that containers can have log4j.properties distinct from that of the NodeManagers.
      
      Logs now go to syslog, and stderr and stdout just have the process's standard err and standard out.
      
      I tested this on pseudo-distributed clusters for both yarn (Hadoop 2.2) and yarn-alpha (Hadoop 0.23.7).
      
      Author: Sandy Ryza <sandy@cloudera.com>
      
      Closes #148 from sryza/sandy-spark-1252 and squashes the following commits:
      
      c0043b8 [Sandy Ryza] Put log4j.properties file under common
      55823da [Sandy Ryza] Add license headers to new files
      10934b8 [Sandy Ryza] Add log4j-spark-container.properties and support SPARK_LOG4J_CONF
      e74450b [Sandy Ryza] SPARK-1252. On YARN, use container-log4j.properties for executors
      9dd8b916
    • [sql] Rename Expression.apply to eval for better readability. · 83f2a2f1
      Reynold Xin authored
      Also used this opportunity to add a bunch of overrides and make some members private.
      
      Author: Reynold Xin <rxin@apache.org>
      
      Closes #340 from rxin/eval and squashes the following commits:
      
      a7c7ca7 [Reynold Xin] Fixed conflicts in merge.
      9069de6 [Reynold Xin] Merge branch 'master' into eval
      3ccc313 [Reynold Xin] Merge branch 'master' into eval
      1a47e10 [Reynold Xin] Renamed apply to eval for generators and added a bunch of override's.
      ea061de [Reynold Xin] Rename Expression.apply to eval for better readability.
      83f2a2f1
    • SPARK-1432: Make sure that all metadata fields are properly cleaned · a3c51c6e
      Davis Shepherd authored
      While working on spark-1337 with @pwendell, we noticed that not all of the metadata maps in JobProgressListener were being properly cleaned. This could lead to a (hypothetical) memory leak should a job run long enough. This patch aims to address the issue.
      
      Author: Davis Shepherd <davis@conviva.com>
      
      Closes #338 from dgshep/master and squashes the following commits:
      
      a77b65c [Davis Shepherd] In the contex of SPARK-1337: Make sure that all metadata fields are properly cleaned
      a3c51c6e
    • [SQL] SPARK-1427 Fix toString for SchemaRDD NativeCommands. · b5bae849
      Michael Armbrust authored
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #343 from marmbrus/toStringFix and squashes the following commits:
      
      37198fe [Michael Armbrust] Fix toString for SchemaRDD NativeCommands.
      b5bae849
    • [SQL] SPARK-1371 Hash Aggregation Improvements · accd0999
      Michael Armbrust authored
      Given:
      ```scala
      case class Data(a: Int, b: Int)
      val rdd =
        sparkContext
          .parallelize(1 to 200)
          .flatMap(_ => (1 to 50000).map(i => Data(i % 100, i)))
      rdd.registerAsTable("data")
      cacheTable("data")
      ```
      Before:
      ```
      SELECT COUNT(*) FROM data:[10000000]
      16795.567ms
      SELECT a, SUM(b) FROM data GROUP BY a
      7536.436ms
      SELECT SUM(b) FROM data
      10954.1ms
      ```
      
      After:
      ```
      SELECT COUNT(*) FROM data:[10000000]
      1372.175ms
      SELECT a, SUM(b) FROM data GROUP BY a
      2070.446ms
      SELECT SUM(b) FROM data
      958.969ms
      ```
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #295 from marmbrus/hashAgg and squashes the following commits:
      
      ec63575 [Michael Armbrust] Add comment.
      d0495a9 [Michael Armbrust] Use scaladoc instead.
      b4a6887 [Michael Armbrust] Address review comments.
      a2d90ba [Michael Armbrust] Capture child output statically to avoid issues with generators and serialization.
      7c13112 [Michael Armbrust] Rewrite Aggregate operator to stream input and use projections.  Remove unused local RDD functions implicits.
      5096f99 [Michael Armbrust] Make HiveUDAF fields transient since object inspectors are not serializable.
      6a4b671 [Michael Armbrust] Add option to avoid binding operators expressions automatically.
      92cca08 [Michael Armbrust] Always include serialization debug info when running tests.
      1279df2 [Michael Armbrust] Increase default number of partitions.
      accd0999
  4. Apr 06, 2014
    • SPARK-1431: Allow merging conflicting pull requests · 87d0928a
      Patrick Wendell authored
      Sometimes if there is a small conflict it's nice to be able to just
      manually fix it up rather than have another RTT with the contributor.
      
      Author: Patrick Wendell <pwendell@gmail.com>
      
      Closes #342 from pwendell/merge-conflicts and squashes the following commits:
      
      cdce61a [Patrick Wendell] SPARK-1431: Allow merging conflicting pull requests
      87d0928a
    • SPARK-1154: Clean up app folders in worker nodes · 1440154c
      Evan Chan authored
      This is a fix for [SPARK-1154](https://issues.apache.org/jira/browse/SPARK-1154).   The issue is that worker nodes fill up with a huge number of app-* folders after some time.  This change adds a periodic cleanup task which asynchronously deletes app directories older than a configurable TTL.
      
      Two new configuration parameters have been introduced:
        spark.worker.cleanup_interval
        spark.worker.app_data_ttl
      
      This change does not include moving the downloads of application jars to a location outside of the work directory.  We will address that if we have time, but that potentially involves caching so it will come either as part of this PR or a separate PR.
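
      A hedged illustration of wiring up the two settings above (values and the exact mechanism for supplying them are assumptions; the commit log below shows the names were revised during review and a cleanup-enable flag was added):

      ```scala
      import org.apache.spark.SparkConf

      // Hypothetical configuration using the names above; check the released
      // docs for the final names.
      val conf = new SparkConf()
        .set("spark.worker.cleanup_interval", "1800") // seconds between sweeps (illustrative value)
        .set("spark.worker.app_data_ttl", "604800")   // keep app dirs for 7 days (illustrative value)
      ```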
      
      Author: Evan Chan <ev@ooyala.com>
      Author: Kelvin Chu <kelvinkwchu@yahoo.com>
      
      Closes #288 from velvia/SPARK-1154-cleanup-app-folders and squashes the following commits:
      
      0689995 [Evan Chan] CR from @aarondav - move config, clarify for standalone mode
      9f10d96 [Evan Chan] CR from @pwendell - rename configs and add cleanup.enabled
      f2f6027 [Evan Chan] CR from @andrewor14
      553d8c2 [Kelvin Chu] change the variable name to currentTimeMillis since it actually tracks in seconds
      8dc9cb5 [Kelvin Chu] Fixed a bug in Utils.findOldFiles() after merge.
      cb52f2b [Kelvin Chu] Change the name of findOldestFiles() to findOldFiles()
      72f7d2d [Kelvin Chu] Fix a bug of Utils.findOldestFiles(). file.lastModified is returned in milliseconds.
      ad99955 [Kelvin Chu] Add unit test for Utils.findOldestFiles()
      dc1a311 [Evan Chan] Don't recompute current time with every new file
      e3c408e [Evan Chan] Document the two new settings
      b92752b [Evan Chan] SPARK-1154: Add a periodic task to clean up app directories
      1440154c
    • SPARK-1314: Use SPARK_HIVE to determine if we include Hive in packaging · 41065584
      Aaron Davidson authored
      Previously, we based our decision about including the datanucleus jars on the existence of a spark-hive-assembly jar, which was incidentally built whenever "sbt assembly" was run. This meant that a typical and previously supported pathway would start using hive jars.
      
      This patch has the following features/bug fixes:
      
      - Use of SPARK_HIVE (default false) to determine if we should include Hive in the assembly jar.
      - An analogous feature in Maven with -Phive (previously, there was no support for adding Hive to any of our jars produced by Maven)
      - assemble-deps fixed since we no longer use a different ASSEMBLY_DIR
      - avoid adding log message in compute-classpath.sh to the classpath :)
      
      Still TODO before mergeable:
      - We need to download the datanucleus jars outside of sbt. Perhaps we can have spark-class download them if SPARK_HIVE is set, similar to how sbt downloads itself.
      - Spark SQL documentation updates.
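
      In practice this means a Hive-enabled assembly is requested explicitly, e.g. `SPARK_HIVE=true sbt/sbt assembly` with sbt or `mvn -Phive package` with Maven (the exact command forms here are illustrative of the description above, not quoted from it).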
      
      Author: Aaron Davidson <aaron@databricks.com>
      
      Closes #237 from aarondav/master and squashes the following commits:
      
      5dc4329 [Aaron Davidson] Typo fixes
      dd4f298 [Aaron Davidson] Doc update
      dd1a365 [Aaron Davidson] Eliminate need for SPARK_HIVE at runtime by d/ling datanucleus from Maven
      a9269b5 [Aaron Davidson] [WIP] Use SPARK_HIVE to determine if we include Hive in packaging
      41065584
    • SPARK-1349: spark-shell gets its own command history · 7ce52c4a
      Aaron Davidson authored
      Currently, spark-shell shares its command history with scala repl.
      
      This fix is simply a modification of the default FileBackedHistory file setting:
      https://github.com/scala/scala/blob/master/src/repl/scala/tools/nsc/interpreter/session/FileBackedHistory.scala#L77
      
      Author: Aaron Davidson <aaron@databricks.com>
      
      Closes #267 from aarondav/repl and squashes the following commits:
      
      f9c62d2 [Aaron Davidson] SPARK-1349: spark-shell gets its own command history separate from scala repl
      7ce52c4a
    • SPARK-1387. Update build plugins, avoid plugin version warning, centralize versions · 856c50f5
      Sean Owen authored
      Another handful of small build changes to organize and standardize a bit, and avoid warnings:
      
      - Update Maven plugin versions for good measure
      - Since plugins need maven 3.0.4 already, require it explicitly (<3.0.4 had some bugs anyway)
      - Use variables to define versions across dependencies where they should move in lock step
      - ... and make this consistent between Maven/SBT
      
      OK, I also updated the JIRA URL while I was at it here.
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #291 from srowen/SPARK-1387 and squashes the following commits:
      
      461eca1 [Sean Owen] Couldn't resist also updating JIRA location to new one
      c2d5cc5 [Sean Owen] Update plugins and Maven version; use variables consistently across Maven/SBT to define dependency versions that should stay in step.
      856c50f5
    • [SPARK-1259] Make RDD locally iterable · e258e504
      Egor Pakhomov authored
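
      Usage sketch (`sc` is an existing SparkContext):

      ```scala
      // toLocalIterator returns the RDD's elements to the driver one partition
      // at a time, so only a single partition must fit in driver memory
      // (unlike collect(), which materializes everything at once).
      val rdd = sc.parallelize(1 to 1000000, 100)
      rdd.toLocalIterator.take(5).foreach(println)
      ```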
      Author: Egor Pakhomov <pahomov.egor@gmail.com>
      
      Closes #156 from epahomov/SPARK-1259 and squashes the following commits:
      
      8ec8f24 [Egor Pakhomov] Make to local iterator shorter
      34aa300 [Egor Pakhomov] Fix toLocalIterator docs
      08363ef [Egor Pakhomov] SPARK-1259 from toLocallyIterable to toLocalIterator
      6a994eb [Egor Pakhomov] SPARK-1259 Make RDD locally iterable
      8be3dcf [Egor Pakhomov] SPARK-1259 Make RDD locally iterable
      33ecb17 [Egor Pakhomov] SPARK-1259 Make RDD locally iterable
      e258e504
    • Fix SPARK-1420 The maven build error for Spark Catalyst · 7012ffaf
      witgo authored
      Author: witgo <witgo@qq.com>
      
      Closes #333 from witgo/SPARK-1420 and squashes the following commits:
      
      902519e [witgo] add dependency scala-reflect to catalyst
      7012ffaf
  5. Apr 05, 2014
    • SPARK-1421. Make MLlib work on Python 2.6 · 0b855167
      Matei Zaharia authored
      The reason it wasn't working was that we were passing a bytearray to stream.write(), which is not supported in Python 2.6 but is in 2.7. (This array came from NumPy when we converted data to send it over to Java.) Now we just convert those bytearrays to strings of bytes, which preserves nonprintable characters as well.
      
      Author: Matei Zaharia <matei@databricks.com>
      
      Closes #335 from mateiz/mllib-python-2.6 and squashes the following commits:
      
      f26c59f [Matei Zaharia] Update docs to no longer say we need Python 2.7
      a84d6af [Matei Zaharia] SPARK-1421. Make MLlib work on Python 2.6
      0b855167
    • Fix for PR #195 for Java 6 · 890d63bd
      Sean Owen authored
      Use Java 6's recommended equivalent of Java 7's Logger.getGlobal() to retain Java 6 compatibility. See PR #195
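
      Concretely, the substitution is along these lines (a sketch; `Logger.GLOBAL_LOGGER_NAME` exists since Java 6, `Logger.getGlobal()` only since Java 7):

      ```scala
      import java.util.logging.Logger

      // Java 7: val log = Logger.getGlobal()
      // Java 6-compatible equivalent:
      val log = Logger.getLogger(Logger.GLOBAL_LOGGER_NAME)
      ```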
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #334 from srowen/FixPR195ForJava6 and squashes the following commits:
      
      f92fbd3 [Sean Owen] Use Java 6's recommended equivalent of Java 7's Logger.getGlobal() to retain Java 6 compatibility
      890d63bd
    • [SPARK-1371] fix computePreferredLocations signature to not depend on underlying implementation · 6e88583a
      Mridul Muralidharan authored
      Change to Map and Set - not mutable HashMap and HashSet
      
      Author: Mridul Muralidharan <mridulm80@apache.org>
      
      Closes #302 from mridulm/master and squashes the following commits:
      
      df747af [Mridul Muralidharan] Address review comments
      17e2907 [Mridul Muralidharan] fix computePreferredLocations signature to not depend on underlying implementation
      6e88583a
    • Remove the getStageInfo() method from SparkContext. · 2d0150c1
      Kay Ousterhout authored
      This method exposes the Stage objects, which are
      private to Spark and should not be exposed to the
      user.
      
      This method was added in https://github.com/apache/spark/commit/01d77f329f5878b7c8672bbdc1859f3ca95d759d; ccing @squito here in case there's a good reason to keep this!
      
      Author: Kay Ousterhout <kayousterhout@gmail.com>
      
      Closes #308 from kayousterhout/remove_public_method and squashes the following commits:
      
      2e2f009 [Kay Ousterhout] Remove the getStageInfo() method from SparkContext.
      2d0150c1
    • HOTFIX for broken CI, by SPARK-1336 · 7c18428f
      Prashant Sharma authored
      Learnt that `set -o pipefail` is very useful.
      
      Author: Prashant Sharma <prashant.s@imaginea.com>
      Author: Prashant Sharma <scrapcodes@gmail.com>
      
      Closes #321 from ScrapCodes/hf-SPARK-1336 and squashes the following commits:
      
      9d22bc2 [Prashant Sharma] added comment why echo -e q exists.
      f865951 [Prashant Sharma] made error to match with word boundry so errors does not match. This is there to make sure build fails if provided SparkBuild has compile errors.
      7fffdf2 [Prashant Sharma] Removed a stray line.
      97379d8 [Prashant Sharma] HOTFIX for broken CI, by SPARK-1336
      7c18428f
  6. Apr 04, 2014
    • small fix ( proogram -> program ) · 0acc7a02
      Prabeesh K authored
      Author: Prabeesh K <prabsmails@gmail.com>
      
      Closes #331 from prabeesh/patch-3 and squashes the following commits:
      
      9399eb5 [Prabeesh K] small fix(proogram -> program)
      0acc7a02
    • [SQL] SPARK-1366 Consistent sql function across different types of SQLContexts · 8de038eb
      Michael Armbrust authored
      Now users who want to use HiveQL should explicitly say `hiveql` or `hql`.
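
      Usage after this change (illustrative; `hiveContext` is assumed to be an existing HiveContext):

      ```scala
      // sql(...) now always uses the Spark SQL parser; HiveQL must be
      // requested explicitly:
      val results = hiveContext.hql("SELECT key, value FROM src")
      ```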
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #319 from marmbrus/standardizeSqlHql and squashes the following commits:
      
      de68d0e [Michael Armbrust] Fix sampling test.
      fbe4a54 [Michael Armbrust] Make `sql` always use spark sql parser, users of hive context can now use hql or hiveql to run queries using HiveQL instead.
      8de038eb
    • SPARK-1305: Support persisting RDD's directly to Tachyon · b50ddfde
      Haoyuan Li authored
      Moves PR #468 from apache-incubator-spark to apache-spark:
      "Adding an option to persist Spark RDD blocks into Tachyon."
      
      Author: Haoyuan Li <haoyuan@cs.berkeley.edu>
      Author: RongGu <gurongwalker@gmail.com>
      
      Closes #158 from RongGu/master and squashes the following commits:
      
      72b7768 [Haoyuan Li] merge master
      9f7fa1b [Haoyuan Li] fix code style
      ae7834b [Haoyuan Li] minor cleanup
      a8b3ec6 [Haoyuan Li] merge master branch
      e0f4891 [Haoyuan Li] better check offheap.
      55b5918 [RongGu] address matei's comment on the replication of offHeap storagelevel
      7cd4600 [RongGu] remove some logic code for tachyonstore's replication
      51149e7 [RongGu] address aaron's comment on returning value of the remove() function in tachyonstore
      8adfcfa [RongGu] address arron's comment on inTachyonSize
      120e48a [RongGu] changed the root-level dir name in Tachyon
      5cc041c [Haoyuan Li] address aaron's comments
      9b97935 [Haoyuan Li] address aaron's comments
      d9a6438 [Haoyuan Li] fix for pspark
      77d2703 [Haoyuan Li] change python api.git status
      3dcace4 [Haoyuan Li] address matei's comments
      91fa09d [Haoyuan Li] address patrick's comments
      589eafe [Haoyuan Li] use TRY_CACHE instead of MUST_CACHE
      64348b2 [Haoyuan Li] update conf docs.
      ed73e19 [Haoyuan Li] Merge branch 'master' of github.com:RongGu/spark-1
      619a9a8 [RongGu] set number of directories in TachyonStore back to 64; added a TODO tag for duplicated code from the DiskStore
      be79d77 [RongGu] find a way to clean up some unnecessay metods and classed to make the code simpler
      49cc724 [Haoyuan Li] update docs with off_headp option
      4572f9f [RongGu] reserving the old apply function API of StorageLevel
      04301d3 [RongGu] rename StorageLevel.TACHYON to Storage.OFF_HEAP
      c9aeabf [RongGu] rename the StorgeLevel.TACHYON as StorageLevel.OFF_HEAP
      76805aa [RongGu] unifies the config properties name prefix; add the configs into docs/configuration.md
      e700d9c [RongGu] add the SparkTachyonHdfsLR example and some comments
      fd84156 [RongGu] use randomUUID to generate sparkapp directory name on tachyon;minor code style fix
      939e467 [Haoyuan Li] 0.4.1-thrift from maven central
      86a2eab [Haoyuan Li] tachyon 0.4.1-thrift is in the staging repo. but jenkins failed to download it. temporarily revert it back to 0.4.1
      16c5798 [RongGu] make the dependency on tachyon as tachyon-0.4.1-thrift
      eacb2e8 [RongGu] Merge branch 'master' of https://github.com/RongGu/spark-1
      bbeb4de [RongGu] fix the JsonProtocolSuite test failure problem
      6adb58f [RongGu] Merge branch 'master' of https://github.com/RongGu/spark-1
      d827250 [RongGu] fix JsonProtocolSuie test failure
      716e93b [Haoyuan Li] revert the version
      ca14469 [Haoyuan Li] bump tachyon version to 0.4.1-thrift
      2825a13 [RongGu] up-merging to the current master branch of the apache spark
      6a22c1a [Haoyuan Li] fix scalastyle
      8968b67 [Haoyuan Li] exclude more libraries from tachyon dependency to be the same as referencing tachyon-client.
      77be7e8 [RongGu] address mateiz's comment about the temp folder name problem. The implementation followed mateiz's advice.
      1dcadf9 [Haoyuan Li] typo
      bf278fa [Haoyuan Li] fix python tests
      e82909c [Haoyuan Li] minor cleanup
      776a56c [Haoyuan Li] address patrick's and ali's comments from the previous PR
      8859371 [Haoyuan Li] various minor fixes and clean up
      e3ddbba [Haoyuan Li] add doc to use Tachyon cache mode.
      fcaeab2 [Haoyuan Li] address Aaron's comment
      e554b1e [Haoyuan Li] add python code
      47304b3 [Haoyuan Li] make tachyonStore in BlockMananger lazy val; add more comments StorageLevels.
      dc8ef24 [Haoyuan Li] add old storelevel constructor
      e01a271 [Haoyuan Li] update tachyon 0.4.1
      8011a96 [RongGu] fix a brought-in mistake in StorageLevel
      70ca182 [RongGu] a bit change in comment
      556978b [RongGu] fix the scalastyle errors
      791189b [RongGu] "Adding an option to persist Spark RDD blocks into Tachyon." move the PR#468 of apache-incubator-spark to the apache-spark
      b50ddfde
    • [SPARK-1419] Bumped parent POM to apache 14 · 1347ebd4
      Mark Hamstra authored
      Keeping up-to-date with the parent, which includes some bugfixes.
      
      Author: Mark Hamstra <markhamstra@gmail.com>
      
      Closes #328 from markhamstra/Apache14 and squashes the following commits:
      
      3f19975 [Mark Hamstra] Bumped parent POM to apache 14
      1347ebd4