  1. Sep 08, 2016
    • Gurvinder Singh's avatar
      [SPARK-15487][WEB UI] Spark Master UI to reverse proxy Application and Workers UI · 92ce8d48
      Gurvinder Singh authored
      ## What changes were proposed in this pull request?
      
      This pull request adds the ability to access the worker and application UIs through the master UI itself. This helps in reaching the SparkUI when running a Spark cluster on a closed network, e.g. Kubernetes. The cluster admin needs to expose only the Spark master UI; the rest of the UIs can stay on the private network, and the master UI reverse-proxies connection requests to the corresponding resource. It adds the following paths for the worker/application UIs:
      
      WorkerUI: <http/https>://master-publicIP:<port>/target/workerID/
      ApplicationUI: <http/https>://master-publicIP:<port>/target/appID/
      
      This makes it easy for users to protect access to the Spark master by putting a reverse proxy in front of it, e.g. https://github.com/bitly/oauth2_proxy.
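      A minimal sketch of enabling the proxy from application code is shown below. It assumes the `spark.ui.reverseProxy` / `spark.ui.reverseProxyUrl` configuration keys introduced by this change; the public URL is hypothetical.

      ```scala
      import org.apache.spark.{SparkConf, SparkContext}

      object ReverseProxyExample {
        def main(args: Array[String]): Unit = {
          val conf = new SparkConf()
            .setAppName("reverse-proxy-example")
            // Route worker/application UIs through the master UI.
            .set("spark.ui.reverseProxy", "true")
            // Hypothetical public address of the master behind an external reverse proxy.
            .set("spark.ui.reverseProxyUrl", "https://spark.example.com")
          val sc = new SparkContext(conf)
          // ... run the application; worker/app UIs are now reachable via the master's proxy paths.
          sc.stop()
        }
      }
      ```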
      
      ## How was this patch tested?
      
      The functionality has been tested manually and there is a unit test too for testing access to worker UI with reverse proxy address.
      
      pwendell bomeng BryanCutler can you please review it, thanks.
      
      Author: Gurvinder Singh <gurvinder.singh@uninett.no>
      
      Closes #13950 from gurvindersingh/rproxy.
      92ce8d48
  2. Sep 01, 2016
    • Shixiong Zhu's avatar
      [SPARK-17318][TESTS] Fix ReplSuite replicating blocks of object with class defined in repl again · 21c0a4fe
      Shixiong Zhu authored
      ## What changes were proposed in this pull request?
      
      After digging into the logs, I noticed the failure happens because this test starts a local cluster with 2 executors. However, when the SparkContext is created, the executors may not be up yet. If one of the executors is not up while the job runs, the blocks won't be replicated.
      
      This PR just adds a wait loop before running the job to fix the flaky test.
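      For illustration, a hedged sketch of such a wait loop (not the actual test code): it polls `SparkContext.getExecutorMemoryStatus` until the expected number of executors has registered, or fails after a timeout.

      ```scala
      import org.apache.spark.SparkContext

      // Block until `expected` executors are up, or fail after `timeoutMs`.
      def waitForExecutors(sc: SparkContext, expected: Int, timeoutMs: Long = 60000L): Unit = {
        val deadline = System.currentTimeMillis() + timeoutMs
        // getExecutorMemoryStatus also lists the driver, hence the `- 1`.
        def executorCount: Int = sc.getExecutorMemoryStatus.size - 1
        while (executorCount < expected && System.currentTimeMillis() < deadline) {
          Thread.sleep(100)
        }
        require(executorCount >= expected,
          s"Only $executorCount of $expected executors registered within ${timeoutMs}ms")
      }
      ```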
      
      ## How was this patch tested?
      
      Jenkins
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #14905 from zsxwing/SPARK-17318-2.
      21c0a4fe
  3. Aug 30, 2016
  4. Aug 22, 2016
    • Eric Liang's avatar
      [SPARK-16550][SPARK-17042][CORE] Certain classes fail to deserialize in block manager replication · 8e223ea6
      Eric Liang authored
      ## What changes were proposed in this pull request?
      
      This is a straightforward clone of JoshRosen's original patch. I have follow-up changes to fix block replication for repl-defined classes as well, but those appear to be causing flaky tests, so I'm going to leave that for SPARK-17042.
      
      ## How was this patch tested?
      
      End-to-end test in ReplSuite (also more tests in DistributedSuite from the original patch).
      
      Author: Eric Liang <ekl@databricks.com>
      
      Closes #14311 from ericl/spark-16550.
      8e223ea6
  5. Aug 17, 2016
    • Steve Loughran's avatar
      [SPARK-16736][CORE][SQL] purge superfluous fs calls · cc97ea18
      Steve Loughran authored
      A review of the code, working back from Hadoop's `FileSystem.exists()` and `FileSystem.isDirectory()` implementations, removing uses of those calls where they are superfluous.
      
      1. delete is harmless if called on a nonexistent path, so don't do any checks before deletes
      1. any `FileSystem.exists()` check before `getFileStatus()` or `open()` is superfluous, as the operation itself does the check. Instead, the `FileNotFoundException` is caught and triggers the downgraded path. Where an FNFE was thrown before, the code still creates a new one with the same error message, but the original exception is now nested inside it for easier diagnostics (see the sketch after this list).
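      A hedged illustration of the pattern (not code from the patch): call `open()` directly and catch `FileNotFoundException` instead of probing with `exists()` first, saving a filesystem round trip.

      ```scala
      import java.io.FileNotFoundException
      import org.apache.hadoop.fs.{FileSystem, Path}

      // Read a file if it exists, without a separate exists() probe.
      def readIfPresent(fs: FileSystem, path: Path): Option[Array[Byte]] = {
        try {
          val in = fs.open(path)  // open() performs its own existence check
          try Some(org.apache.commons.io.IOUtils.toByteArray(in))
          finally in.close()
        } catch {
          case _: FileNotFoundException =>
            None  // "downgraded" path: treat a missing file as absent data rather than failing
        }
      }
      ```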
      
      Initially, relying on Jenkins test runs.
      
      One trouble spot here is that some of the code paths are clearly error situations; it's not clear that they have test coverage anyway. Creating the failure conditions in tests would be ideal, but it will also be hard.
      
      Author: Steve Loughran <stevel@apache.org>
      
      Closes #14371 from steveloughran/cloud/SPARK-16736-superfluous-fs-calls.
      cc97ea18
  6. Aug 08, 2016
    • Holden Karau's avatar
      [SPARK-16779][TRIVIAL] Avoid using postfix operators where they do not add... · 9216901d
      Holden Karau authored
      [SPARK-16779][TRIVIAL] Avoid using postfix operators where they do not add much and remove whitelisting
      
      ## What changes were proposed in this pull request?
      
      Avoid using a postfix operator for command execution in SQLQuerySuite, where it wasn't whitelisted, and audit the existing whitelistings, removing postfix operators from most places. Notable places where postfix operators remain are XML parsing and time units (seconds, millis, etc.), where they arguably improve readability; a small illustration of the difference follows below.
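      For context, a hedged Scala illustration of the style difference (not code from the patch): the postfix form needs the `scala.language.postfixOps` import, while the dotted form does not, which is why the latter is preferred in most places.

      ```scala
      import scala.concurrent.duration._
      import scala.language.postfixOps

      object PostfixExample {
        // Postfix style: only kept where it clearly aids readability (e.g. durations).
        val postfix = 10 seconds

        // Equivalent dotted style, used elsewhere after this change.
        val dotted = 10.seconds
      }
      ```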
      
      ## How was this patch tested?
      
      Existing tests.
      
      Author: Holden Karau <holden@us.ibm.com>
      
      Closes #14407 from holdenk/SPARK-16779.
      9216901d
  7. Aug 03, 2016
    • Stefan Schulze's avatar
      [SPARK-16770][BUILD] Fix JLine dependency management and version (Sca… · 4775eb41
      Stefan Schulze authored
      ## What changes were proposed in this pull request?
      As of Scala 2.11.x there is no longer an org.scala-lang:jline version aligned to the Scala version itself; the Scala console now uses the plain jline:jline module. Spark's dependency management did not reflect this change properly, causing Maven to pull in JLine via a transitive dependency. Unfortunately, JLine 2.12 contained a minor but very annoying bug that rendered the shell almost useless for developers with a German keyboard layout. This request contains the following changes:
      - Exclude transitive dependency 'jline:jline' from hive-exec module
      - Remove global properties 'jline.version' and 'jline.groupId'
      - Add both properties and dependency to 'scala-2.11' profile
      - Add explicit dependency on 'jline:jline' to  module 'spark-repl'
      
      ## How was this patch tested?
      - Running mvn dependency:tree and checking for correct Jline version 2.12.1
      - Running full builds with assembly and checking for jline-2.12.1.jar in 'lib' folder of generated tarball
      
      Author: Stefan Schulze <stefan.schulze@pentasys.de>
      
      Closes #14429 from stsc-pentasys/SPARK-16770.
      4775eb41
  8. Jul 31, 2016
    • Reynold Xin's avatar
      [SPARK-16812] Open up SparkILoop.getAddedJars · 7c27d075
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      This patch makes SparkILoop.getAddedJars a public developer API. It is a useful function to get the list of jars added.
      
      ## How was this patch tested?
      N/A - this is a simple visibility change.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #14417 from rxin/SPARK-16812.
      7c27d075
  9. Jul 19, 2016
  10. Jul 14, 2016
    • jerryshao's avatar
      [SPARK-16540][YARN][CORE] Avoid adding jars twice for Spark running on yarn · 91575cac
      jerryshao authored
      ## What changes were proposed in this pull request?
      
      Currently, when running Spark on YARN, jars specified with --jars or --packages are added twice: once to Spark's own file server and once to YARN's distributed cache. This can be seen from the log, for example:
      
      ```
      ./bin/spark-shell --master yarn-client --jars examples/target/scala-2.11/jars/scopt_2.11-3.3.0.jar
      ```
      
      If the jar to be added is the scopt jar, it is added twice:
      
      ```
      ...
      16/07/14 15:06:48 INFO Server: Started 5603ms
      16/07/14 15:06:48 INFO Utils: Successfully started service 'SparkUI' on port 4040.
      16/07/14 15:06:48 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.0.102:4040
      16/07/14 15:06:48 INFO SparkContext: Added JAR file:/Users/sshao/projects/apache-spark/examples/target/scala-2.11/jars/scopt_2.11-3.3.0.jar at spark://192.168.0.102:63996/jars/scopt_2.11-3.3.0.jar with timestamp 1468480008637
      16/07/14 15:06:49 INFO RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
      16/07/14 15:06:49 INFO Client: Requesting a new application from cluster with 1 NodeManagers
      16/07/14 15:06:49 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
      16/07/14 15:06:49 INFO Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
      16/07/14 15:06:49 INFO Client: Setting up container launch context for our AM
      16/07/14 15:06:49 INFO Client: Setting up the launch environment for our AM container
      16/07/14 15:06:49 INFO Client: Preparing resources for our AM container
      16/07/14 15:06:49 WARN Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
      16/07/14 15:06:50 INFO Client: Uploading resource file:/private/var/folders/tb/8pw1511s2q78mj7plnq8p9g40000gn/T/spark-a446300b-84bf-43ff-bfb1-3adfb0571a42/__spark_libs__6486179704064718817.zip -> hdfs://localhost:8020/user/sshao/.sparkStaging/application_1468468348998_0009/__spark_libs__6486179704064718817.zip
      16/07/14 15:06:51 INFO Client: Uploading resource file:/Users/sshao/projects/apache-spark/examples/target/scala-2.11/jars/scopt_2.11-3.3.0.jar -> hdfs://localhost:8020/user/sshao/.sparkStaging/application_1468468348998_0009/scopt_2.11-3.3.0.jar
      16/07/14 15:06:51 INFO Client: Uploading resource file:/private/var/folders/tb/8pw1511s2q78mj7plnq8p9g40000gn/T/spark-a446300b-84bf-43ff-bfb1-3adfb0571a42/__spark_conf__326416236462420861.zip -> hdfs://localhost:8020/user/sshao/.sparkStaging/application_1468468348998_0009/__spark_conf__.zip
      ...
      ```
      
      So this change avoids adding jars to Spark's file server unnecessarily.
      
      ## How was this patch tested?
      
      Manually verified both in yarn client and cluster mode, also in standalone mode.
      
      Author: jerryshao <sshao@hortonworks.com>
      
      Closes #14196 from jerryshao/SPARK-16540.
      91575cac
  11. Jul 11, 2016
    • Reynold Xin's avatar
      [SPARK-16477] Bump master version to 2.1.0-SNAPSHOT · ffcb6e05
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      After SPARK-16476 (committed earlier today as #14128), we can finally bump the version number.
      
      ## How was this patch tested?
      N/A
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #14130 from rxin/SPARK-16477.
      ffcb6e05
  12. Jun 24, 2016
    • peng.zhang's avatar
      [SPARK-16125][YARN] Fix not test yarn cluster mode correctly in YarnClusterSuite · f4fd7432
      peng.zhang authored
      ## What changes were proposed in this pull request?
      
      Since SPARK-13220 (Deprecate "yarn-client" and "yarn-cluster"), YarnClusterSuite doesn't test "yarn cluster" mode correctly.
      This pull request fixes it.
      
      ## How was this patch tested?
      Unit test
      
      
      Author: peng.zhang <peng.zhang@xiaomi.com>
      
      Closes #13836 from renozhang/SPARK-16125-test-yarn-cluster-mode.
      f4fd7432
  13. Jun 19, 2016
    • Prashant Sharma's avatar
      [SPARK-15942][REPL] Unblock `:reset` command in REPL. · 1b3a9b96
      Prashant Sharma authored
      ## What changes were proposed in this pull request?
      (Pasted from the JIRA issue.)
      As a follow-up to SPARK-15697, the `:reset` command gets the following semantics.
      On `:reset` we forget everything the user has done, but not the initialization of Spark. To avoid confusion, a message makes it clear that `spark` and `sc` are not erased; in fact they remain in the same state that the user's previous operations left them in.
      While doing this, it felt that this is not what reset usually means. But an accidental shutdown of a cluster can be very costly, so in that sense this behaviour is less surprising and still useful. A hypothetical transcript is sketched below.
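      A hypothetical spark-shell transcript illustrating the intended semantics (output is illustrative, not captured from a real session):

      ```scala
      scala> val x = 42
      x: Int = 42

      scala> :reset
      // user definitions are forgotten, but `spark` and `sc` are kept as-is

      scala> x
      <console>:12: error: not found: value x

      scala> sc
      res1: org.apache.spark.SparkContext = org.apache.spark.SparkContext@1a2b3c4d
      ```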
      
      ## How was this patch tested?
      
      Manually, by calling `:reset` command, by both altering the state of SparkContext and creating some local variables.
      
      Author: Prashant Sharma <prashant@apache.org>
      Author: Prashant Sharma <prashsh1@in.ibm.com>
      
      Closes #13661 from ScrapCodes/repl-reset-command.
      1b3a9b96
  14. Jun 16, 2016
    • Nezih Yigitbasi's avatar
      [SPARK-15782][YARN] Fix spark.jars and spark.yarn.dist.jars handling · 63470afc
      Nezih Yigitbasi authored
      When `--packages` is specified with spark-shell the classes from those packages cannot be found, which I think is due to some of the changes in SPARK-12343.
      
      Tested manually with both scala 2.10 and 2.11 repls.
      
      vanzin davies can you guys please review?
      
      Author: Marcelo Vanzin <vanzin@cloudera.com>
      Author: Nezih Yigitbasi <nyigitbasi@netflix.com>
      
      Closes #13709 from nezihyigitbasi/SPARK-15782.
      63470afc
  15. Jun 15, 2016
    • Davies Liu's avatar
      a153e41c
    • Nezih Yigitbasi's avatar
      [SPARK-15782][YARN] Set spark.jars system property in client mode · 4df8df5c
      Nezih Yigitbasi authored
      ## What changes were proposed in this pull request?
      
      When `--packages` is specified with `spark-shell` the classes from those packages cannot be found, which I think is due to some of the changes in `SPARK-12343`. In particular `SPARK-12343` removes a line that sets the `spark.jars` system property in client mode, which is used by the repl main class to set the classpath.
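      A hedged sketch of the mechanism described above (illustrative helpers, not the actual SparkSubmit/repl code): in client mode the launcher publishes the jar list through the `spark.jars` system property, and the repl reads it back to build its classpath.

      ```scala
      // Launcher side (client mode): make the resolved jars visible to the repl JVM.
      def publishJars(jars: Seq[String]): Unit = {
        if (jars.nonEmpty) sys.props("spark.jars") = jars.mkString(",")
      }

      // Repl side: turn the property back into classpath entries.
      def replClasspathJars(): Seq[String] = {
        sys.props.get("spark.jars").toSeq.flatMap(_.split(",")).filter(_.nonEmpty)
      }
      ```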
      
      ## How was this patch tested?
      
      Tested manually.
      
      This system property is used by the repl to populate its classpath. If
      this is not set properly the classes for external packages cannot be
      found.
      
      tgravescs vanzin as you may be familiar with this part of the code.
      
      Author: Nezih Yigitbasi <nyigitbasi@netflix.com>
      
      Closes #13527 from nezihyigitbasi/repl-fix.
      4df8df5c
  16. Jun 13, 2016
    • Prashant Sharma's avatar
      [SPARK-15697][REPL] Unblock some of the useful repl commands. · 4134653e
      Prashant Sharma authored
      ## What changes were proposed in this pull request?
      
      Unblock some of the useful repl commands: "implicits", "javap", "power", "type", and "kind". They are useful, fully functional, and part of the scala/scala project, so I see no harm in having them.
      
      Verbatim paste from the JIRA description:
      The "implicits", "javap", "power", "type", and "kind" commands in the repl are blocked. However, they work fine in all cases I have tried. We clearly don't support them, as they are part of the scala/scala repl project, but what is the harm in unblocking them, given that they are useful?
      In previous versions of Spark we disabled these commands because the Scala repl code base was ported into and maintained under the Spark source, so supporting them meant extra customization and maintenance. That is no longer the situation, and one can benefit from these commands in the Spark REPL as much as in the Scala repl. An example transcript is sketched below.
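      A hypothetical transcript of two of the unblocked commands (output is illustrative):

      ```scala
      scala> :type 1 + 1
      Int

      scala> :kind -v Option
      scala.Option's kind is F[+A]
      * -(+)-> *
      This is a type constructor: a 1st-order-kinded type.
      ```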
      
      ## How was this patch tested?
      Existing tests and manual, by trying out all of the above commands.
      
      P.S. Symantics of reset are to be discussed in a separate issue.
      
      Author: Prashant Sharma <prashsh1@in.ibm.com>
      
      Closes #13437 from ScrapCodes/SPARK-15697/repl-unblock-commands.
      4134653e
  17. Jun 09, 2016
    • Prashant Sharma's avatar
      [SPARK-15841][Tests] REPLSuite has incorrect env set for a couple of tests. · 83070cd1
      Prashant Sharma authored
      Description from JIRA.
      In ReplSuite, a test that can be exercised well locally should not have to start a local-cluster. Similarly, a test is insufficiently exercised if it verifies a fix for a problem that only shows up in a distributed run but is executed only locally.
      
      Existing tests.
      
      Author: Prashant Sharma <prashsh1@in.ibm.com>
      
      Closes #13574 from ScrapCodes/SPARK-15841/repl-suite-fix.
      83070cd1
  18. Jun 02, 2016
    • hyukjinkwon's avatar
      [SPARK-15322][SQL][FOLLOWUP] Use the new long accumulator for old int accumulators. · 252417fa
      hyukjinkwon authored
      ## What changes were proposed in this pull request?
      
      This PR corrects the remaining cases for using old accumulators.
      
      This does not change some old accumulator usages below:
      
      - `ImplicitSuite.scala` - Tests dedicated to old accumulator, for implicits with `AccumulatorParam`
      
      - `AccumulatorSuite.scala` -  Tests dedicated to old accumulator
      
      - `JavaSparkContext.scala` - For supporting old accumulators for Java API.
      
      - `debug.package.scala` - Usage with `HashSet[String]`. Currently there seems to be no new-accumulator implementation for this. I might be able to write an anonymous class for it, but I didn't, because I don't think it is worth a lot of code just for this case.
      
      - `SQLMetricsSuite.scala` - This uses the old accumulator for checking type boxing. It seems the new accumulator does not require type boxing for this case, whereas the old one does (due to the use of generics); a hedged sketch of the new API follows below.
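      For context, a hedged sketch of migrating from the old int accumulator to the new long accumulator API (illustrative, not code from the patch):

      ```scala
      import org.apache.spark.SparkContext
      import org.apache.spark.util.LongAccumulator

      // Count even numbers seen by the executors.
      def countEvens(sc: SparkContext): Long = {
        // Old style (deprecated): val evens = sc.accumulator(0)
        val evens: LongAccumulator = sc.longAccumulator("evens")
        sc.parallelize(1 to 100).foreach { n =>
          if (n % 2 == 0) evens.add(1L)
        }
        evens.sum
      }
      ```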
      
      ## How was this patch tested?
      
      Existing tests cover this.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #13434 from HyukjinKwon/accum.
      252417fa
  19. May 31, 2016
  20. May 17, 2016
  21. May 04, 2016
  22. May 03, 2016
  23. Apr 28, 2016
  24. Apr 25, 2016
    • Andrew Or's avatar
      [SPARK-14828][SQL] Start SparkSession in REPL instead of SQLContext · 34336b62
      Andrew Or authored
      ## What changes were proposed in this pull request?
      
      ```
      Spark context available as 'sc' (master = local[*], app id = local-1461283768192).
      Spark session available as 'spark'.
      Welcome to
            ____              __
           / __/__  ___ _____/ /__
          _\ \/ _ \/ _ `/ __/  '_/
         /___/ .__/\_,_/_/ /_/\_\   version 2.0.0-SNAPSHOT
            /_/
      
      Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_51)
      Type in expressions to have them evaluated.
      Type :help for more information.
      
      scala> sql("SHOW TABLES").collect()
      16/04/21 17:09:39 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
      16/04/21 17:09:39 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
      res0: Array[org.apache.spark.sql.Row] = Array([src,false])
      
      scala> sql("SHOW TABLES").collect()
      res1: Array[org.apache.spark.sql.Row] = Array([src,false])
      
      scala> spark.createDataFrame(Seq((1, 1), (2, 2), (3, 3)))
      res2: org.apache.spark.sql.DataFrame = [_1: int, _2: int]
      ```
      
      Hive things are loaded lazily.
      
      ## How was this patch tested?
      
      Manual.
      
      Author: Andrew Or <andrew@databricks.com>
      
      Closes #12589 from andrewor14/spark-session-repl.
      34336b62
  25. Apr 22, 2016
    • Reynold Xin's avatar
      [SPARK-10001] Consolidate Signaling and SignalLogger. · c089c6f4
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      This is a follow-up to #12557, with the following changes:
      
      1. Fixes some of the style issues.
      2. Merges Signaling and SignalLogger into a new class called SignalUtils. It was pretty confusing to have Signaling and Signal in one file, and it was also confusing to have two classes named Signaling and one called the other.
      3. Made logging registration idempotent.
      
      ## How was this patch tested?
      N/A.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #12605 from rxin/SPARK-10001.
      c089c6f4
    • Jakob Odersky's avatar
      [SPARK-10001] [CORE] Interrupt tasks in repl with Ctrl+C · 80127935
      Jakob Odersky authored
      ## What changes were proposed in this pull request?
      
      Improve signal handling to allow interrupting running tasks from the REPL (with Ctrl+C).
      If no tasks are running or Ctrl+C is pressed twice, the signal is forwarded to the default handler resulting in the usual termination of the application.
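      A hedged sketch of the approach (illustrative, not the patch itself): register a SIGINT handler that cancels running jobs on Ctrl+C and otherwise defers to the previously installed handler. It relies on the JVM-specific `sun.misc.Signal` API.

      ```scala
      import sun.misc.{Signal, SignalHandler}
      import org.apache.spark.SparkContext

      def installInterruptHandler(sc: SparkContext): Unit = {
        var previous: SignalHandler = null
        var lastPress = 0L
        val handler = new SignalHandler {
          override def handle(sig: Signal): Unit = {
            val now = System.currentTimeMillis()
            val secondPress = now - lastPress < 1000  // two quick Ctrl+C presses
            lastPress = now
            val nothingRunning = sc.statusTracker.getActiveJobIds().isEmpty
            if ((nothingRunning || secondPress) && previous != null) {
              previous.handle(sig)  // fall back to the default behaviour (terminate)
            } else {
              sc.cancelAllJobs()    // interrupt running tasks but keep the repl alive
            }
          }
        }
        previous = Signal.handle(new Signal("INT"), handler)
      }
      ```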
      
      This PR is a rewrite of -- and therefore closes #8216 -- as per piaozhexiu's request
      
      ## How was this patch tested?
      Signal handling is not easily testable, therefore no unit tests were added. Nevertheless, the new functionality is implemented in a best-effort way, soft-failing if signals aren't available on a specific OS.
      
      Author: Jakob Odersky <jakob@odersky.com>
      
      Closes #12557 from jodersky/SPARK-10001-sigint.
      80127935
  26. Apr 20, 2016
    • jerryshao's avatar
      [SPARK-14725][CORE] Remove HttpServer class · 90cbc82f
      jerryshao authored
      ## What changes were proposed in this pull request?
      
      This proposal removes the `HttpServer` class. With internal file/jar/class transmission moved to the RPC layer, there is no longer any code using `HttpServer`, so this change removes it.
      
      ## How was this patch tested?
      
      Unit test is verified locally.
      
      Author: jerryshao <sshao@hortonworks.com>
      
      Closes #12526 from jerryshao/SPARK-14725.
      90cbc82f
  27. Apr 14, 2016
    • Wenchen Fan's avatar
      [SPARK-14558][CORE] In ClosureCleaner, clean the outer pointer if it's a REPL line object · 1d04c86f
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      When we clean a closure, if its outermost parent is not a closure, we won't clone and clean it, as cloning users' objects is dangerous. However, if it's a REPL line object, which may carry a lot of unnecessary references (like the hadoop conf, spark conf, etc.), we should clean it, as it's not a user object.
      
      This PR improves the check for user's objects to exclude REPL line object.
      
      ## How was this patch tested?
      
      existing tests.
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #12327 from cloud-fan/closure.
      1d04c86f
  28. Apr 12, 2016
  29. Apr 09, 2016
    • Reynold Xin's avatar
      [SPARK-14451][SQL] Move encoder definition into Aggregator interface · 520dde48
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      When we first introduced Aggregators, we required the user of Aggregators to (implicitly) specify the encoders. It would actually make more sense to have the encoders be specified by the implementation of Aggregators, since each implementation knows best how to encode its own data type.
      
      Note that this simplifies the Java API because Java users no longer need to explicitly specify encoders for aggregators.
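      A hedged sketch of what an Aggregator looks like once it declares its own encoders (based on the post-change interface; the `Purchase` type and field names are illustrative):

      ```scala
      import org.apache.spark.sql.{Encoder, Encoders}
      import org.apache.spark.sql.expressions.Aggregator

      case class Purchase(user: String, amount: Double)

      // The Aggregator supplies its own buffer/output encoders instead of
      // requiring the caller to provide them implicitly.
      object SumAmount extends Aggregator[Purchase, Double, Double] {
        def zero: Double = 0.0
        def reduce(buf: Double, p: Purchase): Double = buf + p.amount
        def merge(b1: Double, b2: Double): Double = b1 + b2
        def finish(buf: Double): Double = buf
        def bufferEncoder: Encoder[Double] = Encoders.scalaDouble
        def outputEncoder: Encoder[Double] = Encoders.scalaDouble
      }
      ```

      On a `Dataset[Purchase]` this would be used as `ds.select(SumAmount.toColumn)`.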
      
      ## How was this patch tested?
      Updated unit tests.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #12231 from rxin/SPARK-14451.
      520dde48
  30. Apr 06, 2016
    • Marcelo Vanzin's avatar
      [SPARK-14134][CORE] Change the package name used for shading classes. · 21d5ca12
      Marcelo Vanzin authored
      The current package name uses a dash, which is a little weird but seemed
      to work. That is, until a new test tried to mock a class that references
      one of those shaded types, and then things started failing.
      
      Most changes are just noise to fix the logging configs.
      
      For reference, SPARK-8815 also raised this issue, although at the time it
      did not cause any issues in Spark, so it was not addressed.
      
      Author: Marcelo Vanzin <vanzin@cloudera.com>
      
      Closes #11941 from vanzin/SPARK-14134.
      21d5ca12
    • Marcelo Vanzin's avatar
      [SPARK-14446][TESTS] Fix ReplSuite for Scala 2.10. · 4901086f
      Marcelo Vanzin authored
      Just use the same test code as the 2.11 version, which seems to pass.
      
      Author: Marcelo Vanzin <vanzin@cloudera.com>
      
      Closes #12223 from vanzin/SPARK-14446.
      4901086f
  31. Apr 02, 2016
    • Dongjoon Hyun's avatar
      [MINOR][DOCS] Use multi-line JavaDoc comments in Scala code. · 4a6e78ab
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      This PR converts all Scala-style multiline comments into Java-style (JavaDoc) multiline comments in the Scala code.
      (All comment-only changes over 77 files: +786 lines, −747 lines)
      
      ## How was this patch tested?
      
      Manual.
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #12130 from dongjoon-hyun/use_multiine_javadoc_comments.
      4a6e78ab
  32. Mar 28, 2016
    • Dongjoon Hyun's avatar
      [SPARK-14102][CORE] Block `reset` command in SparkShell · b66aa900
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      Spark Shell provides an easy way to use Spark in a Scala environment. This PR adds the `reset` command to the blocked command list and also cleans up the code according to the Scala coding style.
      ```scala
      scala> sc
      res0: org.apache.spark.SparkContext = org.apache.spark.SparkContext@718fad24
      scala> :reset
      scala> sc
      <console>:11: error: not found: value sc
             sc
             ^
      ```
      If we block `reset`, Spark Shell works like the following.
      ```scala
      scala> :reset
      reset: no such command.  Type :help for help.
      scala> :re
      re is ambiguous: did you mean :replay or :require?
      ```
      
      ## How was this patch tested?
      
      Manual. Run `bin/spark-shell` and type `:reset`.
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #11920 from dongjoon-hyun/SPARK-14102.
      b66aa900
  33. Mar 25, 2016
  34. Mar 21, 2016
    • Wenchen Fan's avatar
      [SPARK-13456][SQL] fix creating encoders for case classes defined in Spark shell · 43ebf7a9
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      Case classes defined in the REPL are wrapped by line classes, and we had a trick in the Scala 2.10 REPL to automatically register the wrapper classes in `OuterScope` so that they can be used when creating encoders.
      However, this trick doesn't work after the upgrade to Scala 2.11, and unfortunately the tests only existed for Scala 2.10, which kept this bug hidden until now.
      
      This PR moves the encoder tests to the Scala 2.11 `ReplSuite` and fixes the bug with another approach (the previous trick can't be ported to the Scala 2.11 REPL): make `OuterScope` smart enough to detect classes defined in the REPL and load the singleton of the line wrapper classes automatically. The kind of shell session this enables is sketched below.
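      A hypothetical spark-shell session showing the scenario this fixes (output is illustrative):

      ```scala
      scala> case class Person(name: String, age: Int)
      defined class Person

      scala> val ds = Seq(Person("alice", 30), Person("bob", 25)).toDS()
      ds: org.apache.spark.sql.Dataset[Person] = [name: string, age: int]

      scala> ds.filter(_.age > 26).collect()
      res0: Array[Person] = Array(Person(alice,30))
      ```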
      
      ## How was this patch tested?
      
      the migrated encoder tests in `ReplSuite`
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #11410 from cloud-fan/repl.
      43ebf7a9