
Repository graph

Git revisions (branches and tags):
  • baesline-eviction-with-logging
  • evict-by-size
  • master (default, protected)
  • working
  • v2.3.0
  • v2.3.0-rc4
  • v2.3.0-rc3
  • v2.3.0-rc2
  • v2.3.0-rc1
  • v2.2.1
  • v2.2.1-rc2
  • v2.2.1-rc1
  • v2.1.2
  • v2.1.2-rc4
  • v2.1.2-rc3
  • v2.1.2-rc2
  • v2.1.2-rc1
  • v2.2.0
  • v2.1.1
  • v2.1.0
  • v2.0.2
  • v1.6.3
  • v2.0.1
  • v2.0.0
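A revision list like the one above can be reproduced with plain git commands. The sketch below builds a throwaway repository purely for illustration; the branch and tag names are borrowed from the list, but the repository contents and the user identity are assumptions:

```shell
# Illustrative scratch repo; branch/tag names taken from the list above
git init -q demo && cd demo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
git branch evict-by-size          # a topic branch
git tag v2.3.0-rc4                # a release candidate
git tag v2.3.0                    # the final release

git branch --list                 # lists local branches
git tag --sort=-v:refname         # lists tags in descending version order
```

`--sort=-v:refname` uses git's version sort so `v2.3.0` and its `-rc` tags group together, much like the dropdown above.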
Commits shown in the graph (newest first; entries that appeared once per containing branch are listed once):
  • [SPARK-19659][CORE][FOLLOW-UP] Fetch big blocks to disk when shuffle-read
  • [SPARK-10643][CORE] Make spark-submit download remote files to local in client mode
  • [SPARK-20873][SQL] Improve the error message for unsupported Column Type
  • [SPARK-20694][DOCS][SQL] Document DataFrameWriter partitionBy, bucketBy and sortBy in SQL guide
  • [SPARK-20014] Optimize mergeSpillsWithFileStream method
  • [SPARK-20844] Remove experimental from Structured Streaming APIs
  • [SPARK-19372][SQL] Fix throwing a Java exception at df.fliter() due to 64KB bytecode size limit
  • [SPARK-20393][WEBU UI] Strengthen Spark to prevent XSS vulnerabilities
  • [SPARK-20835][CORE] It should exit directly when the --total-executor-cores parameter is setted less than 0 when submit a application
  • [SPARK-20887][CORE] support alternative keys in ConfigBuilder
  • [MINOR] document edge case of updateFunc usage
  • [SPARK-20868][CORE] UnsafeShuffleWriter should verify the position after FileChannel.transferTo
  • [SPARK-20849][DOC][SPARKR] Document R DecisionTree
  • [SPARK-20392][SQL] Set barrier to prevent re-entering a tree
  • [SPARK-14659][ML] RFormula consistent with R when handling strings
  • [SPARK-20775][SQL] Added scala support from_json
  • [SPARK-20888][SQL][DOCS] Document change of default setting of spark.sql.hive.caseSensitiveInferenceMode
  • [SPARK-20874][EXAMPLES] Add Structured Streaming Kafka Source to examples project
  • [SPARK-19707][SPARK-18922][TESTS][SQL][CORE] Fix test failures/the invalid path check for sc.addJar on Windows
  • [SPARK-20741][SPARK SUBMIT] Added cleanup of JARs archive generated by SparkSubmit
  • [SPARK-20768][PYSPARK][ML] Expose numPartitions (expert) param of PySpark FPGrowth.
  • [SPARK-19281][FOLLOWUP][ML] Minor fix for PySpark FPGrowth.
  • [SPARK-19659] Fetch big blocks to disk when shuffle-read.
  • [SPARK-20250][CORE] Improper OOM error when a task been killed while spilling data
  • [SPARK-20848][SQL][FOLLOW-UP] Shutdown the pool after reading parquet files
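The commit graph this page renders can also be viewed in a terminal from a local clone. A minimal sketch; the date window is an assumption read off the graph's axis (late April to late May), and `--all` simply includes every branch and tag:

```shell
# ASCII commit graph, one line per commit, with branch/tag decorations
git log --graph --oneline --decorate --all

# Roughly the window shown above (dates are an assumption from the axis)
git log --graph --oneline --since=2017-04-30 --until=2017-05-27
```

`--decorate` annotates each commit with the refs that point at it, which is how entries such as `master` and `v2.3.0-rc4` would appear inline.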