
Repository graph

Branches and tags:
  • baesline-eviction-with-logging
  • evict-by-size
  • master (default, protected)
  • working
  • v2.3.0
  • v2.3.0-rc4
  • v2.3.0-rc3
  • v2.3.0-rc2
  • v2.3.0-rc1
  • v2.2.1
  • v2.2.1-rc2
  • v2.2.1-rc1
  • v2.1.2
  • v2.1.2-rc4
  • v2.1.2-rc3
  • v2.1.2-rc2
  • v2.1.2-rc1
  • v2.2.0
  • v2.1.1
  • v2.1.0
  • v2.0.2
  • v1.6.3
  • v2.0.1
  • v2.0.0
24 results
Commit graph (spanning roughly late November to mid-January, most recent first):
  • [SPARK-18917][SQL] Remove schema check in appending data
  • [MINOR][SQL] Remove duplicate call of reset() function in CurrentOrigin.withOrigin()
  • [SPARK-19239][PYSPARK] Check parameters whether equals None when specify the column in jdbc API
  • [SPARK-19129][SQL] SessionCatalog: Disallow empty part col values in partition spec
  • [SPARK-19129][SQL] SessionCatalog: Disallow empty part col values in partition spec
  • [SPARK-19065][SQL] Don't inherit expression id in dropDuplicates
  • [SPARK-19065][SQL] Don't inherit expression id in dropDuplicates
  • [SPARK-19019] [PYTHON] Fix hijacked `collections.namedtuple` and port cloudpickle changes for PySpark to work with Python 3.6.0
  • [SPARK-19019] [PYTHON] Fix hijacked `collections.namedtuple` and port cloudpickle changes for PySpark to work with Python 3.6.0
  • [SPARK-19179][YARN] Change spark.yarn.access.namenodes config and update docs
  • [SPARK-3249][DOC] Fix links in ScalaDoc that cause warning messages in `sbt/sbt unidoc`
  • [SPARK-19219][SQL] Fix Parquet log output defaults
  • [SPARK-19240][SQL][TEST] add test for setting location for managed table
  • [MINOR][YARN] Move YarnSchedulerBackendSuite to resource-managers/yarn directory.
  • [SPARK-19148][SQL] do not expose the external table concept in Catalog
  • [SPARK-18905][STREAMING] Fix the issue of removing a failed jobset from JobScheduler.jobSets
  • [SPARK-18905][STREAMING] Fix the issue of removing a failed jobset from JobScheduler.jobSets
  • [SPARK-18828][SPARKR] Refactor scripts for R
  • [SPARK-19232][SPARKR] Update Spark distribution download cache location on Windows
  • [SPARK-19232][SPARKR] Update Spark distribution download cache location on Windows
  • [SPARK-19066][SPARKR] SparkR LDA doesn't set optimizer correctly
  • [SPARK-18801][SQL][FOLLOWUP] Alias the view with its child
  • [SPARK-19082][SQL] Make ignoreCorruptFiles work for Parquet
  • [SPARK-19082][SQL] Make ignoreCorruptFiles work for Parquet
  • [SPARK-19092][SQL][BACKPORT-2.1] Save() API of DataFrameWriter should not scan all the saved files #16481
  • [SPARK-19120] Refresh Metadata Cache After Loading Hive Tables
  • [SPARK-19120] Refresh Metadata Cache After Loading Hive Tables
  • [SPARK-19206][DOC][DSTREAM] Fix outdated parameter descriptions in kafka010
  • [SPARK-18971][CORE] Upgrade Netty to 4.0.43.Final
  • [MINOR][DOC] Document local[*,F] master modes
  • [SPARK-19042] spark executor can't download the jars when uber jar's http url contains any query strings
  • [SPARK-19207][SQL] LocalSparkSession should use Slf4JLoggerFactory.INSTANCE
  • [SPARK-19151][SQL] DataFrameWriter.saveAsTable support hive overwrite
  • [SPARK-19221][PROJECT INFRA][R] Add winutils binaries to the path in AppVeyor tests for Hadoop libraries to call native codes properly
  • [SPARK-19180] [SQL] the offset of short should be 2 in OffHeapColumn
  • [SPARK-19180] [SQL] the offset of short should be 2 in OffHeapColumn
  • [SPARK-18335][SPARKR] createDataFrame to support numPartitions parameter
  • [SPARK-18335][SPARKR] createDataFrame to support numPartitions parameter
  • [SPARK-19178][SQL] convert string of large numbers to int should return null
  • [SPARK-18687][PYSPARK][SQL] Backward compatibility - creating a Dataframe on a new SQLContext object fails with a Derby error