  1. Apr 02, 2016
    • [MINOR][DOCS] Use multi-line JavaDoc comments in Scala code. · 4a6e78ab
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      This PR converts all Scala-style multiline comments into Java-style multiline comments across the Scala code.
      (All comment-only changes over 77 files: +786 lines, −747 lines)
      
      ## How was this patch tested?
      
      Manual.
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #12130 from dongjoon-hyun/use_multiine_javadoc_comments.
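For readers unfamiliar with the distinction, the two comment styles can be sketched as follows. This is an illustrative, standalone example (the class name is invented); the commit itself only touches comments inside Spark's Scala files, where both forms are legal.

```java
/**
 * Java-style multiline comment: every continuation line carries an
 * aligned leading asterisk. The commit above rewrites Scala-style
 * comments into this form across 77 files.
 */
public final class CommentStyleDemo {
    /* Scala-style multiline comment, as it looked before the change:
       continuation lines are plain indented text with no asterisk. */
    public static void main(String[] args) {
        System.out.println("comment style demo");
    }
}
```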
  2. Mar 16, 2016
  3. Mar 11, 2016
    • [SPARK-13294][PROJECT INFRA] Remove MiMa's dependency on spark-class / Spark assembly · 6ca990fb
      Josh Rosen authored
      This patch removes the need to build a full Spark assembly before running the `dev/mima` script.
      
      - I modified the `tools` project to remove a direct dependency on Spark, so `sbt/sbt tools/fullClasspath` will now return the classpath for the `GenerateMIMAIgnore` class itself plus its own dependencies.
         - This required me to delete two classes full of dead code that we don't use anymore
      - `GenerateMIMAIgnore` now uses [ClassUtil](http://software.clapper.org/classutil/) to find all of the Spark classes rather than our homemade JAR traversal code. The problem in our own code was that it didn't handle folders of classes properly, which is necessary in order to generate excludes with an assembly-free Spark build.
      - `./dev/mima` no longer runs through `spark-class`, eliminating the need to reason about classpath ordering between `SPARK_CLASSPATH` and the assembly.
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #11178 from JoshRosen/remove-assembly-in-run-tests.
  4. Mar 03, 2016
  5. Mar 02, 2016
    • [SPARK-13627][SQL][YARN] Fix simple deprecation warnings. · 9c274ac4
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      This PR aims to fix the following deprecation warnings.
        * MethodSymbolApi.paramss -> paramLists
        * AnnotationApi.tpe -> tree.tpe
        * BufferLike.readOnly -> toList
        * StandardNames.nme -> termNames
        * scala.tools.nsc.interpreter.AbstractFileClassLoader -> scala.reflect.internal.util.AbstractFileClassLoader
        * TypeApi.declarations -> decls
      
      ## How was this patch tested?
      
      Checked the compile build log and ran the tests:
      ```
      ./build/sbt
      ```
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #11479 from dongjoon-hyun/SPARK-13627.
  6. Jan 08, 2016
  7. Dec 31, 2015
  8. Nov 17, 2015
  9. Aug 25, 2015
  10. Jul 14, 2015
    • [SPARK-8962] Add Scalastyle rule to ban direct use of Class.forName; fix existing uses · 11e5c372
      Josh Rosen authored
      This pull request adds a Scalastyle regex rule which fails the style check if `Class.forName` is used directly.  `Class.forName` always loads classes from the default / system classloader, but in a majority of cases, we should be using Spark's own `Utils.classForName` instead, which tries to load classes from the current thread's context classloader and falls back to the classloader which loaded Spark when the context classloader is not defined.
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #7350 from JoshRosen/ban-Class.forName and squashes the following commits:
      
      e3e96f7 [Josh Rosen] Merge remote-tracking branch 'origin/master' into ban-Class.forName
      c0b7885 [Josh Rosen] Hopefully fix the last two cases
      d707ba7 [Josh Rosen] Fix uses of Class.forName that I missed in my first cleanup pass
      046470d [Josh Rosen] Merge remote-tracking branch 'origin/master' into ban-Class.forName
      62882ee [Josh Rosen] Fix uses of Class.forName or add exclusion.
      d9abade [Josh Rosen] Add stylechecker rule to ban uses of Class.forName
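The loading strategy described above can be sketched in plain Java. This is a hedged, self-contained illustration (the class name `ClassForNameDemo` is invented, and Spark's actual `Utils.classForName` differs in detail), showing the context-classloader-first lookup with a fallback when none is set:

```java
public final class ClassForNameDemo {
    // Prefer the current thread's context classloader; fall back to the
    // loader that loaded this class when the context loader is undefined
    // (as happens in some thread pools).
    public static Class<?> classForName(String name) throws ClassNotFoundException {
        ClassLoader loader = Thread.currentThread().getContextClassLoader();
        if (loader == null) {
            loader = ClassForNameDemo.class.getClassLoader();
        }
        // Unlike a bare Class.forName(name), this never silently pins the
        // lookup to the caller's defining classloader.
        return Class.forName(name, true, loader);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(classForName("java.util.ArrayList").getName());
    }
}
```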
  11. Jul 10, 2015
    • [SPARK-7977] [BUILD] Disallowing println · e14b545d
      Jonathan Alter authored
      Author: Jonathan Alter <jonalter@users.noreply.github.com>
      
      Closes #7093 from jonalter/SPARK-7977 and squashes the following commits:
      
      ccd44cc [Jonathan Alter] Changed println to log in ThreadingSuite
      7fcac3e [Jonathan Alter] Reverting to println in ThreadingSuite
      10724b6 [Jonathan Alter] Changing some printlns to logs in tests
      eeec1e7 [Jonathan Alter] Merge branch 'master' of github.com:apache/spark into SPARK-7977
      0b1dcb4 [Jonathan Alter] More println cleanup
      aedaf80 [Jonathan Alter] Merge branch 'master' of github.com:apache/spark into SPARK-7977
      925fd98 [Jonathan Alter] Merge branch 'master' of github.com:apache/spark into SPARK-7977
      0c16fa3 [Jonathan Alter] Replacing some printlns with logs
      45c7e05 [Jonathan Alter] Merge branch 'master' of github.com:apache/spark into SPARK-7977
      5c8e283 [Jonathan Alter] Allowing println in audit-release examples
      5b50da1 [Jonathan Alter] Allowing printlns in example files
      ca4b477 [Jonathan Alter] Merge branch 'master' of github.com:apache/spark into SPARK-7977
      83ab635 [Jonathan Alter] Fixing new printlns
      54b131f [Jonathan Alter] Merge branch 'master' of github.com:apache/spark into SPARK-7977
      1cd8a81 [Jonathan Alter] Removing some unnecessary comments and printlns
      b837c3a [Jonathan Alter] Disallowing println
  12. May 01, 2015
    • [SPARK-4550] In sort-based shuffle, store map outputs in serialized form · 0a2b15ce
      Sandy Ryza authored
      Refer to the JIRA for the design doc and some perf results.
      
      I wanted to call out some of the more possibly controversial changes up front:
      * Map outputs are only stored in serialized form when Kryo is in use.  I'm still unsure whether Java-serialized objects can be relocated.  At the very least, Java serialization writes out a stream header which causes problems with the current approach, so I decided to leave investigating this to future work.
      * The shuffle now explicitly operates on key-value pairs instead of any object.  Data is written to shuffle files in alternating keys and values instead of key-value tuples.  `BlockObjectWriter.write` now accepts a key argument and a value argument instead of any object.
       * The map output buffer can hold a max of Integer.MAX_VALUE bytes, though this wouldn't be terribly difficult to change.
       * When spilling occurs, the objects that are still in memory at merge time end up serialized and deserialized an extra time.
      
      Author: Sandy Ryza <sandy@cloudera.com>
      
      Closes #4450 from sryza/sandy-spark-4550 and squashes the following commits:
      
      8c70dd9 [Sandy Ryza] Fix serialization
      9c16fe6 [Sandy Ryza] Fix a couple tests and move getAutoReset to KryoSerializerInstance
      6c54e06 [Sandy Ryza] Fix scalastyle
      d8462d8 [Sandy Ryza] SPARK-4550
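The alternating key/value layout mentioned above can be illustrated with a toy round trip. This is only a sketch under stated assumptions: it uses plain Java serialization for brevity, whereas the commit stores serialized map outputs only when Kryo is in use; the class and method names are invented.

```java
import java.io.*;
import java.util.*;

public final class AlternatingKvDemo {
    // Write records as alternating key, value, key, value ... entries,
    // rather than as (key, value) tuple objects.
    public static byte[] writeAlternating(Map<String, Integer> records) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            for (Map.Entry<String, Integer> e : records.entrySet()) {
                out.writeObject(e.getKey());   // key slot
                out.writeObject(e.getValue()); // value slot
            }
        }
        return buf.toByteArray();
    }

    // Read back n records by consuming a key then a value, repeatedly.
    public static Map<String, Integer> readAlternating(byte[] bytes, int n) throws Exception {
        Map<String, Integer> result = new LinkedHashMap<>();
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            for (int i = 0; i < n; i++) {
                result.put((String) in.readObject(), (Integer) in.readObject());
            }
        }
        return result;
    }

    public static void main(String[] args) throws Exception {
        Map<String, Integer> records = new LinkedHashMap<>();
        records.put("apple", 3);
        records.put("pear", 7);
        System.out.println(readAlternating(writeAlternating(records), 2));
    }
}
```

Note that the single ObjectOutputStream header at the front of the buffer hints at the relocation problem the commit calls out: Java-serialized records cannot simply be moved around without carrying that header along.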
  13. Apr 03, 2015
    • [SPARK-6428] Turn on explicit type checking for public methods. · 82701ee2
      Reynold Xin authored
      This builds on my earlier pull requests and turns on the explicit type checking in scalastyle.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #5342 from rxin/SPARK-6428 and squashes the following commits:
      
      7b531ab [Reynold Xin] import ordering
      2d9a8a5 [Reynold Xin] jl
      e668b1c [Reynold Xin] override
      9b9e119 [Reynold Xin] Parenthesis.
      82e0cf5 [Reynold Xin] [SPARK-6428] Turn on explicit type checking for public methods.
  14. Apr 02, 2015
    • [SPARK-6627] Some clean-up in shuffle code. · 6562787b
      Patrick Wendell authored
      Before diving into review #4450 I looked through the existing shuffle
      code to learn how it works. Unfortunately, there are some very
      confusing things in this code. This patch makes a few small changes
      to simplify things. It is not easy to describe the changes concisely
      because of how convoluted the issues were, but they are fairly small
      logically:
      
      1. There is a trait named `ShuffleBlockManager` that only deals with
         one logical function which is retrieving shuffle block data given shuffle
         block coordinates. This trait has two implementors, FileShuffleBlockManager
         and IndexShuffleBlockManager. Confusingly, the vast majority of those
         implementations have nothing to do with this particular functionality.
         So I've renamed the trait to ShuffleBlockResolver and documented it.
      2. The aforementioned trait had two almost identical methods, for no good
         reason. I removed one method (getBytes) and modified callers to use the
         other one. I think the behavior is preserved in all cases.
      3. The sort shuffle code uses an identifier "0" in the reduce slot of a
         BlockID as a placeholder. I made it into a constant since it needs to
         be consistent across multiple places.
      
      I think for (3) there is actually a better solution that would avoid the
      need to do this type of workaround/hack in the first place, but it's more
      complex so I'm punting it for now.
      
      Author: Patrick Wendell <patrick@databricks.com>
      
      Closes #5286 from pwendell/cleanup and squashes the following commits:
      
      c71fbc7 [Patrick Wendell] Open interface back up for testing
      f36edd5 [Patrick Wendell] Code review feedback
      d1c0494 [Patrick Wendell] Style fix
      a406079 [Patrick Wendell] [HOTFIX] Some clean-up in shuffle code.
  15. Dec 30, 2014
    • [SPARK-1010] Clean up uses of System.setProperty in unit tests · 352ed6bb
      Josh Rosen authored
      Several of our tests call System.setProperty (or test code which implicitly sets system properties) and don't always reset/clear the modified properties, which can create ordering dependencies between tests and cause hard-to-diagnose failures.
      
      This patch removes most uses of System.setProperty from our tests, since in most cases we can use SparkConf to set these configurations (there are a few exceptions, including the tests of SparkConf itself).
      
      For the cases where we continue to use System.setProperty, this patch introduces a `ResetSystemProperties` ScalaTest mixin class which snapshots the system properties before each test and automatically restores them on test completion / failure.  See the block comment at the top of the ResetSystemProperties class for more details.
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #3739 from JoshRosen/cleanup-system-properties-in-tests and squashes the following commits:
      
      0236d66 [Josh Rosen] Replace setProperty uses in two example programs / tools
      3888fe3 [Josh Rosen] Remove setProperty use in LocalJavaStreamingContext
      4f4031d [Josh Rosen] Add note on why SparkSubmitSuite needs ResetSystemProperties
      4742a5b [Josh Rosen] Clarify ResetSystemProperties trait inheritance ordering.
      0eaf0b6 [Josh Rosen] Remove setProperty call in TaskResultGetterSuite.
      7a3d224 [Josh Rosen] Fix trait ordering
      3fdb554 [Josh Rosen] Remove setProperty call in TaskSchedulerImplSuite
      bee20df [Josh Rosen] Remove setProperty calls in SparkContextSchedulerCreationSuite
      655587c [Josh Rosen] Remove setProperty calls in JobCancellationSuite
      3f2f955 [Josh Rosen] Remove System.setProperty calls in DistributedSuite
      cfe9cce [Josh Rosen] Remove use of system properties in SparkContextSuite
      8783ab0 [Josh Rosen] Remove TestUtils.setSystemProperty, since it is subsumed by the ResetSystemProperties trait.
      633a84a [Josh Rosen] Remove use of system properties in FileServerSuite
      25bfce2 [Josh Rosen] Use ResetSystemProperties in UtilsSuite
      1d1aa5a [Josh Rosen] Use ResetSystemProperties in SizeEstimatorSuite
      dd9492b [Josh Rosen] Use ResetSystemProperties in AkkaUtilsSuite
      b0daff2 [Josh Rosen] Use ResetSystemProperties in BlockManagerSuite
      e9ded62 [Josh Rosen] Use ResetSystemProperties in TaskSchedulerImplSuite
      5b3cb54 [Josh Rosen] Use ResetSystemProperties in SparkListenerSuite
      0995c4b [Josh Rosen] Use ResetSystemProperties in SparkContextSchedulerCreationSuite
      c83ded8 [Josh Rosen] Use ResetSystemProperties in SparkConfSuite
      51aa870 [Josh Rosen] Use withSystemProperty in ShuffleSuite
      60a63a1 [Josh Rosen] Use ResetSystemProperties in JobCancellationSuite
      14a92e4 [Josh Rosen] Use withSystemProperty in FileServerSuite
      628f46c [Josh Rosen] Use ResetSystemProperties in DistributedSuite
      9e3e0dd [Josh Rosen] Add ResetSystemProperties test fixture mixin; use it in SparkSubmitSuite.
      4dcea38 [Josh Rosen] Move withSystemProperty to TestUtils class.
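The snapshot-and-restore behavior described above can be sketched in a few lines of Java. This is a minimal illustration under assumptions (names invented; Spark's real fixture is a ScalaTest mixin that restores in a finally-style hook so failing tests also clean up):

```java
import java.util.Properties;

public final class ResetSystemPropertiesDemo {
    public static void main(String[] args) {
        // Copy, don't alias: System.getProperties() returns a live view,
        // so a plain reference would see the test's mutations.
        Properties snapshot = new Properties();
        snapshot.putAll(System.getProperties());

        // The "test body" pollutes global JVM state.
        System.setProperty("demo.test.key", "polluted");

        // Restore the snapshot so later tests see the original properties.
        System.setProperties(snapshot);
        System.out.println(System.getProperty("demo.test.key")); // prints "null"
    }
}
```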
  16. Sep 15, 2014
    • [SPARK-3433][BUILD] Fix for Mima false-positives with @DeveloperAPI and @Experimental annotations. · ecf0c029
      Prashant Sharma authored
      Actually, the false positive reported was due to the MiMa generator not picking up the new jars in the presence of old jars (theoretically this should not have happened). So as a workaround, I ran them both separately and just appended the results together.
      
      Author: Prashant Sharma <prashant@apache.org>
      Author: Prashant Sharma <prashant.s@imaginea.com>
      
      Closes #2285 from ScrapCodes/mima-fix and squashes the following commits:
      
      093c76f [Prashant Sharma] Update mima
      59012a8 [Prashant Sharma] Update mima
      35b6c71 [Prashant Sharma] SPARK-3433 Fix for Mima false-positives with @DeveloperAPI and @Experimental annotations.
  17. Aug 30, 2014
    • [SPARK-2288] Hide ShuffleBlockManager behind ShuffleManager · acea9280
      Raymond Liu authored
      By hiding ShuffleBlockManager behind ShuffleManager, we decouple the shuffle data's block-mapping management from DiskBlockManager. This gives a clearer interface and makes it easier for other shuffle managers to implement their own block management logic. The JIRA ticket has more details.
      
      Author: Raymond Liu <raymond.liu@intel.com>
      
      Closes #1241 from colorant/shuffle and squashes the following commits:
      
      0e01ae3 [Raymond Liu] Move ShuffleBlockmanager behind shuffleManager
  18. Aug 06, 2014
    • SPARK-2566. Update ShuffleWriteMetrics incrementally · 4e982364
      Sandy Ryza authored
      I haven't tested this out on a cluster yet, but wanted to make sure the approach (passing ShuffleWriteMetrics down to DiskBlockObjectWriter) was ok
      
      Author: Sandy Ryza <sandy@cloudera.com>
      
      Closes #1481 from sryza/sandy-spark-2566 and squashes the following commits:
      
      8090d88 [Sandy Ryza] Fix ExternalSorter
      b2a62ed [Sandy Ryza] Fix more test failures
      8be6218 [Sandy Ryza] Fix test failures and mark a couple variables private
      c5e68e5 [Sandy Ryza] SPARK-2566. Update ShuffleWriteMetrics incrementally
  19. Aug 01, 2014
    • SPARK-2791: Fix committing, reverting and state tracking in shuffle file consolidation · 78f2af58
      Aaron Davidson authored
      All changes from this PR are by mridulm and are drawn from his work in #1609. This patch is intended to fix all major issues related to shuffle file consolidation that mridulm found, while minimizing changes to the code, with the hope that it may be more easily merged into 1.1.
      
      This patch is **not** intended as a replacement for #1609, which provides many additional benefits, including fixes to ExternalAppendOnlyMap, improvements to DiskBlockObjectWriter's API, and several new unit tests.
      
      If it is feasible to merge #1609 for the 1.1 deadline, that is a preferable option.
      
      Author: Aaron Davidson <aaron@databricks.com>
      
      Closes #1678 from aarondav/consol and squashes the following commits:
      
      53b3f6d [Aaron Davidson] Correct behavior when writing unopened file
      701d045 [Aaron Davidson] Rebase with sort-based shuffle
      9160149 [Aaron Davidson] SPARK-2532: Minimal shuffle consolidation fixes
  20. Jul 31, 2014
  21. Jul 23, 2014
    • [SPARK-2549] Functions defined inside of other functions trigger failures · 9b763329
      Prashant Sharma authored
      Author: Prashant Sharma <prashant.s@imaginea.com>
      
      Closes #1510 from ScrapCodes/SPARK-2549/fun-in-fun and squashes the following commits:
      
      9458bc5 [Prashant Sharma] Tested by removing an inner function from excludes.
      bc03b1c [Prashant Sharma] SPARK-2549 Functions defined inside of other functions trigger failures
  22. Jun 11, 2014
    • [SPARK-2069] MIMA false positives · 5b754b45
      Prashant Sharma authored
      Fixes SPARK-2070 and SPARK-2071.
      
      Author: Prashant Sharma <prashant.s@imaginea.com>
      
      Closes #1021 from ScrapCodes/SPARK-2070/package-private-methods and squashes the following commits:
      
      7979a57 [Prashant Sharma] addressed code review comments
      558546d [Prashant Sharma] A little fancy error message.
      59275ab [Prashant Sharma] SPARK-2071 Mima ignores classes and its members from previous versions too.
      0c4ff2b [Prashant Sharma] SPARK-2070 Ignore methods along with annotated classes.
  23. Jun 01, 2014
    • Better explanation for how to use MIMA excludes. · d17d2214
      Patrick Wendell authored
      This patch does a few things:
      1. We have a file MimaExcludes.scala exclusively for excludes.
      2. The test runner tells users about that file if a test fails.
      3. I've added back the excludes used from 0.9->1.0. We should keep
         these in the project as an official audit trail of times where
         we decided to make exceptions.
      
      Author: Patrick Wendell <pwendell@gmail.com>
      
      Closes #937 from pwendell/mima and squashes the following commits:
      
      7ee0db2 [Patrick Wendell] Better explanation for how to use MIMA excludes.
  24. May 30, 2014
    • [SPARK-1820] Make GenerateMimaIgnore @DeveloperApi annotation aware. · eeee978a
      Prashant Sharma authored
      We add all the classes annotated as `DeveloperApi` to `~/.mima-excludes`.
      
      Author: Prashant Sharma <prashant.s@imaginea.com>
      Author: nikhil7sh <nikhilsharmalnmiit@gmail.ccom>
      
      Closes #904 from ScrapCodes/SPARK-1820/ignore-Developer-Api and squashes the following commits:
      
      de944f9 [Prashant Sharma] Code review.
      e3c5215 [Prashant Sharma] Incorporated patrick's suggestions and fixed the scalastyle build.
      9983a42 [nikhil7sh] [SPARK-1820] Make GenerateMimaIgnore @DeveloperApi annotation aware
  25. Apr 24, 2014
    • SPARK-1494 Don't initialize classes loaded by MIMA excludes, attempt 2 · c5c1916d
      Michael Armbrust authored
      [WIP]
      
      Looks like scala reflection was invoking the static initializer:
      ```
      ...
      	at org.apache.spark.sql.test.TestSQLContext$.<init>(TestSQLContext.scala:25)
      	at org.apache.spark.sql.test.TestSQLContext$.<clinit>(TestSQLContext.scala)
      	at java.lang.Class.forName0(Native Method)
      	at java.lang.Class.forName(Class.java:270)
      	at scala.reflect.runtime.JavaMirrors$JavaMirror.javaClass(JavaMirrors.scala:500)
      	at scala.reflect.runtime.JavaMirrors$JavaMirror.tryJavaClass(JavaMirrors.scala:505)
      	at scala.reflect.runtime.SymbolLoaders$PackageScope.lookupEntry(SymbolLoaders.scala:109)
      ...
      ```
      
      Need to make sure that this doesn't change the exclusion semantics before merging.
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #526 from marmbrus/mima and squashes the following commits:
      
      8168dea [Michael Armbrust] Spurious change
      afba262 [Michael Armbrust] Prevent Scala reflection from running static class initializer.
  26. Apr 23, 2014
    • SPARK-1494 Don't initialize classes loaded by MIMA excludes. · 8e950813
      Michael Armbrust authored
      [WIP]  Just seeing how Jenkins likes this...
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #494 from marmbrus/mima and squashes the following commits:
      
      6eec616 [Michael Armbrust] Force hive tests to run.
      acaf682 [Michael Armbrust] Don't initialize loaded classes.
  27. Apr 15, 2014
    • SPARK-1374: PySpark API for SparkSQL · c99bcb7f
      Ahir Reddy authored
      An initial API that exposes SparkSQL functionality in PySpark. A PythonRDD composed of dictionaries, with string keys and primitive values (boolean, float, int, long, string) can be converted into a SchemaRDD that supports sql queries.
      
      ```
      from pyspark.context import SQLContext
      sqlCtx = SQLContext(sc)
      rdd = sc.parallelize([{"field1" : 1, "field2" : "row1"}, {"field1" : 2, "field2": "row2"}, {"field1" : 3, "field2": "row3"}])
      srdd = sqlCtx.applySchema(rdd)
      sqlCtx.registerRDDAsTable(srdd, "table1")
      srdd2 = sqlCtx.sql("SELECT field1 AS f1, field2 as f2 from table1")
      srdd2.collect()
      ```
      The last line yields ```[{"f1" : 1, "f2" : "row1"}, {"f1" : 2, "f2": "row2"}, {"f1" : 3, "f2": "row3"}]```
      
      Author: Ahir Reddy <ahirreddy@gmail.com>
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #363 from ahirreddy/pysql and squashes the following commits:
      
       0294497 [Ahir Reddy] Updated log4j properties to suppress Hive Warns
      307d6e0 [Ahir Reddy] Style fix
      6f7b8f6 [Ahir Reddy] Temporary fix MIMA checker. Since we now assemble Spark jar with Hive, we don't want to check the interfaces of all of our hive dependencies
      3ef074a [Ahir Reddy] Updated documentation because classes moved to sql.py
      29245bf [Ahir Reddy] Cache underlying SchemaRDD instead of generating and caching PythonRDD
      f2312c7 [Ahir Reddy] Moved everything into sql.py
      a19afe4 [Ahir Reddy] Doc fixes
      6d658ba [Ahir Reddy] Remove the metastore directory created by the HiveContext tests in SparkSQL
      521ff6d [Ahir Reddy] Trying to get spark to build with hive
      ab95eba [Ahir Reddy] Set SPARK_HIVE=true on jenkins
      ded03e7 [Ahir Reddy] Added doc test for HiveContext
      22de1d4 [Ahir Reddy] Fixed maven pyrolite dependency
      e4da06c [Ahir Reddy] Display message if hive is not built into spark
      227a0be [Michael Armbrust] Update API links. Fix Hive example.
      58e2aa9 [Michael Armbrust] Build Docs for pyspark SQL Api.  Minor fixes.
      4285340 [Michael Armbrust] Fix building of Hive API Docs.
      38a92b0 [Michael Armbrust] Add note to future non-python developers about python docs.
      337b201 [Ahir Reddy] Changed com.clearspring.analytics stream version from 2.4.0 to 2.5.1 to match SBT build, and added pyrolite to maven build
      40491c9 [Ahir Reddy] PR Changes + Method Visibility
      1836944 [Michael Armbrust] Fix comments.
      e00980f [Michael Armbrust] First draft of python sql programming guide.
      b0192d3 [Ahir Reddy] Added Long, Double and Boolean as usable types + unit test
      f98a422 [Ahir Reddy] HiveContexts
      79621cf [Ahir Reddy] cleaning up cruft
      b406ba0 [Ahir Reddy] doctest formatting
      20936a5 [Ahir Reddy] Added tests and documentation
      e4d21b4 [Ahir Reddy] Added pyrolite dependency
      79f739d [Ahir Reddy] added more tests
      7515ba0 [Ahir Reddy] added more tests :)
      d26ec5e [Ahir Reddy] added test
      e9f5b8d [Ahir Reddy] adding tests
      906d180 [Ahir Reddy] added todo explaining cost of creating Row object in python
      251f99d [Ahir Reddy] for now only allow dictionaries as input
      09b9980 [Ahir Reddy] made jrdd explicitly lazy
      c608947 [Ahir Reddy] SchemaRDD now has all RDD operations
      725c91e [Ahir Reddy] awesome row objects
      55d1c76 [Ahir Reddy] return row objects
      4fe1319 [Ahir Reddy] output dictionaries correctly
      be079de [Ahir Reddy] returning dictionaries works
      cd5f79f [Ahir Reddy] Switched to using Scala SQLContext
      e948bd9 [Ahir Reddy] yippie
      4886052 [Ahir Reddy] even better
      c0fb1c6 [Ahir Reddy] more working
      043ca85 [Ahir Reddy] working
      5496f9f [Ahir Reddy] doesn't crash
      b8b904b [Ahir Reddy] Added schema rdd class
      67ba875 [Ahir Reddy] java to python, and python to java
      bcc0f23 [Ahir Reddy] Java to python
      ab6025d [Ahir Reddy] compiling
  28. Apr 14, 2014
    • SPARK-1488. Resolve scalac feature warnings during build · 0247b5c5
      Sean Owen authored
      For your consideration: scalac currently notes a number of feature warnings during compilation:
      
      ```
      [warn] there were 65 feature warning(s); re-run with -feature for details
      ```
      
      Warnings are like:
      
      ```
      [warn] /Users/srowen/Documents/spark/core/src/main/scala/org/apache/spark/SparkContext.scala:1261: implicit conversion method rddToPairRDDFunctions should be enabled
      [warn] by making the implicit value scala.language.implicitConversions visible.
      [warn] This can be achieved by adding the import clause 'import scala.language.implicitConversions'
      [warn] or by setting the compiler option -language:implicitConversions.
      [warn] See the Scala docs for value scala.language.implicitConversions for a discussion
      [warn] why the feature should be explicitly enabled.
      [warn]   implicit def rddToPairRDDFunctions[K: ClassTag, V: ClassTag](rdd: RDD[(K, V)]) =
      [warn]                ^
      ```
      
      scalac is suggesting that it's just best practice to explicitly enable certain language features by importing them where used.
      
      This PR simply adds the imports it suggests (and squashes one other Java warning along the way). This leaves just deprecation warnings in the build.
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #404 from srowen/SPARK-1488 and squashes the following commits:
      
      8598980 [Sean Owen] Quiet scalac warnings about language features by explicitly importing language features.
      39bc831 [Sean Owen] Enable -feature in scalac to emit language feature warnings
  29. Apr 09, 2014
    • SPARK-1093: Annotate developer and experimental API's · 87bd1f9e
      Patrick Wendell authored
      This patch marks some existing classes as private[spark] and adds two types of API annotations:
      - `EXPERIMENTAL API` = experimental user-facing module
      - `DEVELOPER API - UNSTABLE` = developer-facing API that might change
      
      There is some discussion of the different mechanisms for doing this here:
      https://issues.apache.org/jira/browse/SPARK-1081
      
      I was pretty aggressive with marking things private. Keep in mind that if we want to open something up in the future we can, but we can never reduce visibility.
      
      A few notes here:
      - In the past we've been inconsistent with the visibility of the X-RDD classes. This patch marks them private whenever there is an existing function in RDD that can directly create them (e.g. CoalescedRDD and rdd.coalesce()). One trade-off here is users can't subclass them.
      - Noted that compression and serialization formats don't have to be wire compatible across versions.
      - Compression codecs and serialization formats are semi-private as users typically don't instantiate them directly.
      - Metrics sources are made private - user only interacts with them through Spark's reflection
      
      Author: Patrick Wendell <pwendell@gmail.com>
      Author: Andrew Or <andrewor14@gmail.com>
      
      Closes #274 from pwendell/private-apis and squashes the following commits:
      
      44179e4 [Patrick Wendell] Merge remote-tracking branch 'apache-github/master' into private-apis
      042c803 [Patrick Wendell] spark.annotations -> spark.annotation
      bfe7b52 [Patrick Wendell] Adding experimental for approximate counts
      8d0c873 [Patrick Wendell] Warning in SparkEnv
      99b223a [Patrick Wendell] Cleaning up annotations
      e849f64 [Patrick Wendell] Merge pull request #2 from andrewor14/annotations
      982a473 [Andrew Or] Generalize jQuery matching for non Spark-core API docs
      a01c076 [Patrick Wendell] Merge pull request #1 from andrewor14/annotations
      c1bcb41 [Andrew Or] DeveloperAPI -> DeveloperApi
      0d48908 [Andrew Or] Comments and new lines (minor)
      f3954e0 [Andrew Or] Add identifier tags in comments to work around scaladocs bug
      99192ef [Andrew Or] Dynamically add badges based on annotations
      824011b [Andrew Or] Add support for injecting arbitrary JavaScript to API docs
      037755c [Patrick Wendell] Some changes after working with andrew or
      f7d124f [Patrick Wendell] Small fixes
      c318b24 [Patrick Wendell] Use CSS styles
      e4c76b9 [Patrick Wendell] Logging
      f390b13 [Patrick Wendell] Better visibility for workaround constructors
      d6b0afd [Patrick Wendell] Small chang to existing constructor
      403ba52 [Patrick Wendell] Style fix
      870a7ba [Patrick Wendell] Work around for SI-8479
      7fb13b2 [Patrick Wendell] Changes to UnionRDD and EmptyRDD
      4a9e90c [Patrick Wendell] EXPERIMENTAL API --> EXPERIMENTAL
      c581dce [Patrick Wendell] Changes after building against Shark.
      8452309 [Patrick Wendell] Style fixes
      1ed27d2 [Patrick Wendell] Formatting and coloring of badges
      cd7a465 [Patrick Wendell] Code review feedback
      2f706f1 [Patrick Wendell] Don't use floats
      542a736 [Patrick Wendell] Small fixes
      cf23ec6 [Patrick Wendell] Marking GraphX as alpha
      d86818e [Patrick Wendell] Another naming change
      5a76ed6 [Patrick Wendell] More visiblity clean-up
      42c1f09 [Patrick Wendell] Using better labels
      9d48cbf [Patrick Wendell] Initial pass
  30. Mar 24, 2014
    • SPARK-1094 Support MiMa for reporting binary compatibility across versions. · dc126f21
      Patrick Wendell authored
      This adds some changes on top of the initial work by @scrapcodes in #20:
      
      The goal here is to do automated checking of Spark commits to determine whether they break binary compatibility.
      
      1. Special case for inner classes of package-private objects.
      2. Made tools classes accessible when running `spark-class`.
      3. Made some declared types in MLLib more general.
      4. Various other improvements to exclude-generation script.
      5. In-code documentation.
      
      Author: Patrick Wendell <pwendell@gmail.com>
      Author: Prashant Sharma <prashant.s@imaginea.com>
      Author: Prashant Sharma <scrapcodes@gmail.com>
      
      Closes #207 from pwendell/mima and squashes the following commits:
      
      22ae267 [Patrick Wendell] New binary changes after upmerge
      6c2030d [Patrick Wendell] Merge remote-tracking branch 'apache/master' into mima
      3666cf1 [Patrick Wendell] Minor style change
      0e0f570 [Patrick Wendell] Small fix and removing directory listings
      647c547 [Patrick Wendell] Reveiw feedback.
      c39f3b5 [Patrick Wendell] Some enhancements to binary checking.
      4c771e0 [Prashant Sharma] Added a tool to generate mima excludes and also adapted build to pick automatically.
      b551519 [Prashant Sharma] adding a new exclude after rebasing with master
      651844c [Prashant Sharma] Support MiMa for reporting binary compatibility accross versions.
  31. Feb 09, 2014
    • Merge pull request #557 from ScrapCodes/style. Closes #557. · b69f8b2a
      Patrick Wendell authored
      SPARK-1058, Fix Style Errors and Add Scala Style to Spark Build.
      
      Author: Patrick Wendell <pwendell@gmail.com>
      Author: Prashant Sharma <scrapcodes@gmail.com>
      
      == Merge branch commits ==
      
      commit 1a8bd1c059b842cb95cc246aaea74a79fec684f4
      Author: Prashant Sharma <scrapcodes@gmail.com>
      Date:   Sun Feb 9 17:39:07 2014 +0530
      
          scala style fixes
      
      commit f91709887a8e0b608c5c2b282db19b8a44d53a43
      Author: Patrick Wendell <pwendell@gmail.com>
      Date:   Fri Jan 24 11:22:53 2014 -0800
      
          Adding scalastyle snapshot
  32. Jan 13, 2014
  33. Jan 12, 2014
  34. Sep 01, 2013
  35. Aug 19, 2013
  36. Aug 18, 2013
  37. Aug 11, 2013
  38. Jul 22, 2013