
Repository graph

Branches
  • baesline-eviction-with-logging
  • evict-by-size
  • master (default, protected)
  • working

Tags
  • v2.3.0
  • v2.3.0-rc4
  • v2.3.0-rc3
  • v2.3.0-rc2
  • v2.3.0-rc1
  • v2.2.1
  • v2.2.1-rc2
  • v2.2.1-rc1
  • v2.1.2
  • v2.1.2-rc4
  • v2.1.2-rc3
  • v2.1.2-rc2
  • v2.1.2-rc1
  • v2.2.0
  • v2.1.1
  • v2.1.0
  • v2.0.2
  • v1.6.3
  • v2.0.1
  • v2.0.0
(24 branches and tags)
Commits

  • [SPARK-19986][TESTS] Make pyspark.streaming.tests.CheckpointTests more stable
  • [SPARK-19986][TESTS] Make pyspark.streaming.tests.CheckpointTests more stable
  • [SPARK-19721][SS][BRANCH-2.1] Good error message for version mismatch in log files
  • [SPARK-13369] Add config for number of consecutive fetch failures
  • [SPARK-19882][SQL] Pivot with null as a distinct pivot value throws NPE
  • [SPARK-19765][SPARK-18549][SPARK-19093][SPARK-19736][BACKPORT-2.1][SQL] Backport Three Cache-related PRs to Spark 2.1
  • [SPARK-19987][SQL] Pass all filters into FileIndex
  • [SPARK-19635][ML] DataFrame-based API for chi square test
  • [SPARK-19721][SS] Good error message for version mismatch in log files
  • [SPARK-19945][SQL] add test suite for SessionCatalog with HiveExternalCatalog
  • [SPARK-19329][SQL][BRANCH-2.1] Reading from or writing to a datasource table with a non pre-existing location should succeed
  • [SPARK-19946][TESTING] DebugFilesystem.assertNoOpenStreams should report the open streams to help debugging
  • [SPARK-13568][ML] Create feature transformer to impute missing values
  • [SPARK-19830][SQL] Add parseTableSchema API to ParserInterface
  • [SPARK-19751][SQL] Throw an exception if bean class has one's own class in fields
  • [SPARK-19961][SQL][MINOR] unify a erro msg when drop databse for HiveExternalCatalog and InMemoryCatalog
  • [SPARK-19948] Document that saveAsTable uses catalog as source of truth for table existence.
  • [SPARK-19931][SQL] InMemoryTableScanExec should rewrite output partitioning and ordering when aliasing output attributes
  • [SPARK-18066][CORE][TESTS] Add Pool usage policies test coverage for FIFO & FAIR Schedulers
  • [MINOR][CORE] Fix a info message of `prunePartitions`
  • [SPARK-19960][CORE] Move `SparkHadoopWriter` to `internal/io/`
  • [SPARK-13450] Introduce ExternalAppendOnlyUnsafeRowArray. Change CartesianProductExec, SortMergeJoin, WindowExec to use it
  • [SPARK-19872] [PYTHON] Use the correct deserializer for RDD construction for coalesce/repartition
  • [SPARK-19872] [PYTHON] Use the correct deserializer for RDD construction for coalesce/repartition
  • [SPARK-19944][SQL] Move SQLConf from sql/core to sql/catalyst (branch-2.1)
  • [SPARK-19889][SQL] Make TaskContext callbacks thread safe
  • [SPARK-19877][SQL] Restrict the nested level of a view
  • [SPARK-19817][SS] Make it clear that `timeZone` is a general option in DataStreamReader/Writer
  • [SPARK-18112][SQL] Support reading data from Hive 2.1 metastore
  • [SPARK-19828][R] Support array type in from_json in R
  • [SPARK-19887][SQL] dynamic partition keys can be null or empty string
  • [SPARK-19918][SQL] Use TextFileFormat in implementation of TextInputJsonDataSource
  • [SPARK-19887][SQL] dynamic partition keys can be null or empty string
  • [SPARK-19817][SQL] Make it clear that `timeZone` option is a general option in DataFrameReader/Writer.
  • [SPARK-18966][SQL] NOT IN subquery with correlated expressions may return incorrect result
  • [SPARK-19933][SQL] Do not change output of a subquery
  • [SPARK-19933][SQL] Do not change output of a subquery
  • [SPARK-19923][SQL] Remove unnecessary type conversions per call in Hive
  • [SPARK-18961][SQL] Support `SHOW TABLE EXTENDED ... PARTITION` statement
  • [SPARK-11569][ML] Fix StringIndexer to handle null value properly