
Repository graph

Git revisions (branches and tags):
  • baesline-eviction-with-logging
  • evict-by-size
  • master (default, protected)
  • working
  • v2.3.0
  • v2.3.0-rc4
  • v2.3.0-rc3
  • v2.3.0-rc2
  • v2.3.0-rc1
  • v2.2.1
  • v2.2.1-rc2
  • v2.2.1-rc1
  • v2.1.2
  • v2.1.2-rc4
  • v2.1.2-rc3
  • v2.1.2-rc2
  • v2.1.2-rc1
  • v2.2.0
  • v2.1.1
  • v2.1.0
  • v2.0.2
  • v1.6.3
  • v2.0.1
  • v2.0.0
Commits shown in the graph (newest first; entries repeated across branches collapsed):
  • [SPARK-22003][SQL] support array column in vectorized reader with UDF
  • [SPARK-22047][TEST] ignore HiveExternalCatalogVersionsSuite
  • [SPARK-21113][CORE] Read ahead input stream to amortize disk IO cost …
  • [SPARK-22043][PYTHON] Improves error message for show_profiles and dump_profiles
  • [SPARK-21953] Show both memory and disk bytes spilled if either is present
  • [SPARK-21985][PYSPARK] PairDeserializer is broken for double-zipped RDDs
  • [SPARK-22032][PYSPARK] Speed up StructType conversion
  • [SPARK-21967][CORE] org.apache.spark.unsafe.types.UTF8String#compareTo Should Compare 8 Bytes at a Time for Better Performance
  • [SPARK-22017] Take minimum of all watermark execs in StreamExecution.
  • [SPARK-15689][SQL] data source v2 read path
  • [SPARK-21958][ML] Word2VecModel save: transform data in the cluster
  • [SPARK-21987][SQL] fix a compatibility issue of sql event logs
  • [SPARK-22002][SQL] Read JDBC table use custom schema support specify partial fields.
  • [SPARK-21902][CORE] Print root cause for BlockManager#doPut
  • [SPARK-22018][SQL] Preserve top-level alias metadata when collapsing projects
  • [SPARK-21513][SQL][FOLLOWUP] Allow UDF to_json support converting MapType to json for PySpark and SparkR
  • [SPARK-21988] Add default stats to StreamingExecutionRelation.
  • [SPARK-17642][SQL][FOLLOWUP] drop test tables and improve comments
  • [SPARK-21922] Fix duration always updating when task failed but status is still RUN…
  • [SPARK-4131][FOLLOW-UP] Support "Writing data into the filesystem from queries"
  • [SPARK-18608][ML][FOLLOWUP] Fix double caching for PySpark OneVsRest.
  • [MINOR][DOC] Add missing call of `update()` in examples of PeriodicGraphCheckpointer & PeriodicRDDCheckpointer
  • [SPARK-21854] Added LogisticRegressionTrainingSummary for MultinomialLogisticRegression in Python API
  • [MINOR][SQL] Only populate type metadata for required types such as CHAR/VARCHAR.
  • [SPARK-21973][SQL] Add an new option to filter queries in TPC-DS
  • Preparing development version 2.1.3-SNAPSHOT
  • Preparing Spark release v2.1.2-rc1
  • [SPARK-20427][SQL] Read JDBC table use custom schema
  • [SPARK-4131] Merge HiveTmpFile.scala to SaveAsHiveFile.scala
  • [SPARK-21980][SQL] References in grouping functions should be indexed with semanticEquals
  • [SPARK-21970][CORE] Fix Redundant Throws Declarations in Java Codebase
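The graph view is a rendering of the commit DAG, so the same information can be inspected from any clone with plain git. A minimal sketch, using a throwaway repository (the temp directory, file name, and identity are illustrative; the commit subjects and tag are taken from the list above):

```shell
# Build a tiny repository and render its revision graph the way the page does.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.email demo@example.com   # local identity for the throwaway repo
git config user.name  demo
echo a > f
git add f
git commit -qm "[SPARK-20427][SQL] Read JDBC table use custom schema"
git tag v2.1.2-rc1                        # tags show up as decorations in the graph
echo b >> f
git commit -qam "[SPARK-21988] Add default stats to StreamingExecutionRelation."
# ASCII equivalent of the repository graph: one line per commit across all
# refs, with branch/tag decorations.
git log --graph --oneline --decorate --all
```

In a real clone of the repository, the final `git log` command alone reproduces the branch/tag structure shown in the graph.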