
Repository graph

Branches and tags:
  • baesline-eviction-with-logging
  • evict-by-size
  • master (default, protected)
  • working
  • v2.3.0
  • v2.3.0-rc4
  • v2.3.0-rc3
  • v2.3.0-rc2
  • v2.3.0-rc1
  • v2.2.1
  • v2.2.1-rc2
  • v2.2.1-rc1
  • v2.1.2
  • v2.1.2-rc4
  • v2.1.2-rc3
  • v2.1.2-rc2
  • v2.1.2-rc1
  • v2.2.0
  • v2.1.1
  • v2.1.0
  • v2.0.2
  • v1.6.3
  • v2.0.1
  • v2.0.0
[Commit graph image omitted; commit titles recovered below. Duplicate titles from cherry-picks across branches are listed once.]

Recent commits (Nov 2016 – Jan 2017):
  • [SPARK-14975][ML] Fixed GBTClassifier to predict probability per training instance and fixed interfaces
  • [SPARK-19182][DSTREAM] Optimize the lock in StreamingJobProgressListener to not block UI when generating Streaming jobs
  • [SPARK-19168][STRUCTURED STREAMING] StateStore should be aborted upon error
  • [SPARK-19113][SS][TESTS] Ignore StreamingQueryException thrown from awaitInitialization to avoid breaking tests
  • [SPARK-18113] Use ask to replace askWithRetry in canCommit and make receiver idempotent.
  • [SPARK-19231][SPARKR] add error handling for download and untar for Spark release
  • [SPARK-19223][SQL][PYSPARK] Fix InputFileBlockHolder for datasources which are based on HadoopRDD or NewHadoopRDD
  • [SPARK-19024][SQL] Implement new approach to write a permanent view
  • [SPARK-18782][BUILD] Bump Hadoop 2.6 version to use Hadoop 2.6.5
  • [SPARK-19227][SPARK-19251] remove unused imports and outdated comments
  • [SPARK-18243][SQL] Port Hive writing to use FileFormat interface
  • [SPARK-19066][SPARKR][BACKPORT-2.1] LDA doesn't set optimizer correctly
  • [SPARK-18206][ML] Add instrumentation for MLP,NB,LDA,AFT,GLM,Isotonic,LiR
  • [SPARK-13721][SQL] Support outer generators in DataFrame API
  • [SPARK-18917][SQL] Remove schema check in appending data
  • [MINOR][SQL] Remove duplicate call of reset() function in CurrentOrigin.withOrigin()
  • [SPARK-19239][PYSPARK] Check parameters whether equals None when specify the column in jdbc API
  • [SPARK-19129][SQL] SessionCatalog: Disallow empty part col values in partition spec
  • [SPARK-19065][SQL] Don't inherit expression id in dropDuplicates
  • [SPARK-19019] [PYTHON] Fix hijacked `collections.namedtuple` and port cloudpickle changes for PySpark to work with Python 3.6.0
  • [SPARK-19179][YARN] Change spark.yarn.access.namenodes config and update docs
  • [SPARK-3249][DOC] Fix links in ScalaDoc that cause warning messages in `sbt/sbt unidoc`
  • [SPARK-19219][SQL] Fix Parquet log output defaults
  • [SPARK-19240][SQL][TEST] add test for setting location for managed table
  • [MINOR][YARN] Move YarnSchedulerBackendSuite to resource-managers/yarn directory.
  • [SPARK-19148][SQL] do not expose the external table concept in Catalog
  • [SPARK-18905][STREAMING] Fix the issue of removing a failed jobset from JobScheduler.jobSets
  • [SPARK-18828][SPARKR] Refactor scripts for R
  • [SPARK-19232][SPARKR] Update Spark distribution download cache location on Windows
  • [SPARK-19066][SPARKR] SparkR LDA doesn't set optimizer correctly
  • [SPARK-18801][SQL][FOLLOWUP] Alias the view with its child
  • [SPARK-19082][SQL] Make ignoreCorruptFiles work for Parquet