
Repository graph

Branches and tags
  • baesline-eviction-with-logging
  • evict-by-size
  • master (default, protected)
  • working
  • v2.3.0
  • v2.3.0-rc4
  • v2.3.0-rc3
  • v2.3.0-rc2
  • v2.3.0-rc1
  • v2.2.1
  • v2.2.1-rc2
  • v2.2.1-rc1
  • v2.1.2
  • v2.1.2-rc4
  • v2.1.2-rc3
  • v2.1.2-rc2
  • v2.1.2-rc1
  • v2.2.0
  • v2.1.1
  • v2.1.0
  • v2.0.2
  • v1.6.3
  • v2.0.1
  • v2.0.0
Commits shown in the graph (most recent first):
  • [SPARK-21069][SS][DOCS] Add rate source to programming guide.
  • [SPARK-20379][CORE] Allow SSL config to reference env variables.
  • [SPARK-21281][SQL] Use string types by default if array and map have no argument
  • [SPARK-21100][SQL] Add summary method as alternative to describe that gives quartiles similar to Pandas
  • [SPARK-21336] Revise rand comparison in BatchEvalPythonExecSuite
  • [SPARK-19358][CORE] LiveListenerBus shall log the event name when dropping them due to a fully filled queue
  • [SPARK-21335][SQL] support un-aliased subquery
  • [SPARK-21285][ML] VectorAssembler reports the column name of unsupported data type
  • [SPARK-21313][SS] ConsoleSink's string representation
  • [SPARK-20703][SQL][FOLLOW-UP] Associate metrics with data writes onto DataFrameWriter operations
  • [SPARK-21217][SQL] Support ColumnVector.Array.to<type>Array()
  • [SPARK-21327][SQL][PYSPARK] ArrayConstructor should handle an array of typecode 'l' as long rather than int in Python 2.
  • [SPARK-21326][SPARK-21066][ML] Use TextFileFormat in LibSVMFileFormat and allow multiple input paths for determining numFeatures
  • [SPARK-21329][SS] Make EventTimeWatermarkExec explicitly UnaryExecNode
  • [SPARK-20946][SQL] Do not update conf for existing SparkContext in SparkSession.getOrCreate
  • [SPARK-21267][SS][DOCS] Update Structured Streaming Documentation
  • [SPARK-21323][SQL] Rename plans.logical.statsEstimation.Range to ValueInterval
  • [SPARK-21204][SQL] Add support for Scala Set collection types in serialization
  • [SPARK-21228][SQL] InSet incorrect handling of structs
  • [SPARK-20950][CORE] add a new config to diskWriteBufferSize which is hard coded before
  • [SPARK-21273][SQL][FOLLOW-UP] Add missing test cases back and revise code style
  • [SPARK-21324][TEST] Improve statistics test suites
  • [SPARK-20703][SQL] Associate metrics with data writes onto DataFrameWriter operations
  • [SPARK-21012][SUBMIT] Add glob support for resources adding to Spark
  • [SS][MINOR] Fix flaky test in DatastreamReaderWriterSuite. temp checkpoint dir should be deleted
  • [SPARK-21312][SQL] correct offsetInBytes in UnsafeRow.writeToStream
  • [SPARK-21308][SQL] Remove SQLConf parameters from the optimizer
  • [SPARK-21248][SS] The clean up codes in StreamExecution should not be interrupted
  • [SPARK-21278][PYSPARK] Upgrade to Py4J 0.10.6
  • [SPARK-21307][SQL] Remove SQLConf parameters from the parser-related classes.
  • [SPARK-19439][PYSPARK][SQL] PySpark's registerJavaFunction Should Support UDAFs
  • [SPARK-20858][DOC][MINOR] Document ListenerBus event queue size
  • [SPARK-21286][TEST] Modified StorageTabSuite unit test
  • [SPARK-20383][SQL] Supporting Create [temporary] Function with the keyword 'OR REPLACE' and 'IF NOT EXISTS'
  • [SPARK-16167][SQL] RowEncoder should preserve array/map type nullability.