
Repository graph

Git revisions (24 results)

Branches:
  • baesline-eviction-with-logging
  • evict-by-size
  • master (default, protected)
  • working

Tags:
  • v2.3.0
  • v2.3.0-rc4
  • v2.3.0-rc3
  • v2.3.0-rc2
  • v2.3.0-rc1
  • v2.2.1
  • v2.2.1-rc2
  • v2.2.1-rc1
  • v2.2.0
  • v2.1.2
  • v2.1.2-rc4
  • v2.1.2-rc3
  • v2.1.2-rc2
  • v2.1.2-rc1
  • v2.1.1
  • v2.1.0
  • v2.0.2
  • v2.0.1
  • v2.0.0
  • v1.6.3
[Repository graph: commit history from early December back through November]

  • [SPARK-17822][R] Make JVMObjectTracker a member variable of RBackend
  • [MINOR][CORE][SQL][DOCS] Typo fixes
  • [SPARK-18637][SQL] Stateful UDF should be considered as nondeterministic
  • Copy pyspark and SparkR packages to latest release dir too
  • Copy the SparkR source package with LFTP
  • [SPARK-18697][BUILD] Upgrade sbt plugins
  • [SPARK-18349][SPARKR] Update R API documentation on ml model summary
  • [SPARKR][PYSPARK] Fix R source package name to match Spark version. Remove pip tar.gz from distribution
  • [SPARK-18774][CORE][SQL] Ignore non-existing files when ignoreCorruptFiles is enabled (branch 2.1)
  • [SPARK-18776][SS] Make Offset for FileStreamSource corrected formatted in json
  • [SPARK-18590][SPARKR] Change the R source build to Hadoop 2.6
  • Close stale PRs.
  • [SPARK-18760][SQL] Consistent format specification for FileFormats
  • [SPARK-18751][CORE] Fix deadlock when SparkContext.stop is called in Utils.tryOrStopSparkContext
  • [SPARK-18590][SPARKR] build R source package when making distribution
  • [SPARK-16589] [PYTHON] Chained cartesian produces incorrect number of records
  • [SPARK-8617][WEBUI] HistoryServer: Include in-progress files during cleanup
  • [SPARK-18662][HOTFIX] Add new resource-managers directories to SparkLauncher.
  • [SPARK-18667][PYSPARK][SQL] Change the way to group row in BatchEvalPythonExec so input_file_name function can work with UDF in pyspark
  • [SPARK-18718][TESTS] Skip some test failures due to path length limitation and fix tests to pass on Windows
  • [SPARK-18325][SPARKR][ML] SparkR ML wrappers example code and user guide
  • [SPARK-18774][CORE][SQL] Ignore non-existing files when ignoreCorruptFiles is enabled
  • Close stale pull requests.
  • Preparing development version 2.1.1-SNAPSHOT
  • Preparing Spark release v2.1.0-rc2