Repository graph

Branches and tags:
  • baesline-eviction-with-logging
  • evict-by-size
  • master (default, protected)
  • working
  • v2.3.0
  • v2.3.0-rc4
  • v2.3.0-rc3
  • v2.3.0-rc2
  • v2.3.0-rc1
  • v2.2.1
  • v2.2.1-rc2
  • v2.2.1-rc1
  • v2.1.2
  • v2.1.2-rc4
  • v2.1.2-rc3
  • v2.1.2-rc2
  • v2.1.2-rc1
  • v2.2.0
  • v2.1.1
  • v2.1.0
  • v2.0.2
  • v1.6.3
  • v2.0.1
  • v2.0.0

Commits in the graph, spanning 27 Nov to 13 Jan (newest first):
  • [SPARK-18335][SPARKR] createDataFrame to support numPartitions parameter
  • [SPARK-18335][SPARKR] createDataFrame to support numPartitions parameter
  • [SPARK-19178][SQL] convert string of large numbers to int should return null
  • [SPARK-18687][PYSPARK][SQL] Backward compatibility - creating a Dataframe on a new SQLContext object fails with a Derby error
  • [SPARK-18687][PYSPARK][SQL] Backward compatibility - creating a Dataframe on a new SQLContext object fails with a Derby error
  • Fix missing close-parens for In filter's toString
  • Fix missing close-parens for In filter's toString
  • [SPARK-19178][SQL] convert string of large numbers to int should return null
  • [SPARK-19142][SPARKR] spark.kmeans should take seed, initSteps, and tol as parameters
  • [SPARK-19092][SQL] Save() API of DataFrameWriter should not scan all the saved files
  • [SPARK-19110][MLLIB][FOLLOWUP] Add a unit test for testing logPrior and logLikelihood of DistributedLDAModel in MLLIB
  • [SPARK-17237][SQL] Remove backticks in a pivot result schema
  • [SPARK-17237][SQL] Remove backticks in a pivot result schema
  • [SPARK-12757][CORE] lower "block locks were not released" log to info level
  • [SPARK-19055][SQL][PYSPARK] Fix SparkSession initialization when SparkContext is stopped
  • [SPARK-19055][SQL][PYSPARK] Fix SparkSession initialization when SparkContext is stopped
  • [SPARK-18969][SQL] Support grouping by nondeterministic expressions
  • [SPARK-18969][SQL] Support grouping by nondeterministic expressions
  • [SPARK-18857][SQL] Don't use `Iterator.duplicate` for `incrementalCollect` in Thrift Server
  • [SPARK-19183][SQL] Add deleteWithJob hook to internal commit protocol API
  • [SPARK-19164][PYTHON][SQL] Remove unused UserDefinedFunction._broadcast
  • [SPARK-19158][SPARKR][EXAMPLES] Fix ml.R example fails due to lack of e1071 package.
  • [SPARK-19158][SPARKR][EXAMPLES] Fix ml.R example fails due to lack of e1071 package.
  • [SPARK-16848][SQL] Check schema validation for user-specified schema in jdbc and table APIs
  • [SPARK-19132][SQL] Add test cases for row size estimation and aggregate estimation
  • [SPARK-19149][SQL] Follow-up: simplify cache implementation.
  • [SPARK-18801][SQL] Support resolve a nested view
  • [SPARK-17568][CORE][DEPLOY] Add spark-submit option to override ivy settings used to resolve packages/artifacts
  • [SPARK-19130][SPARKR] Support setting literal value as column implicitly
  • [SPARK-19130][SPARKR] Support setting literal value as column implicitly
  • [SPARK-19021][YARN] Generailize HDFSCredentialProvider to support non HDFS security filesystems
  • [SPARK-19149][SQL] Unify two sets of statistics in LogicalPlan
  • [SPARK-19157][SQL] should be able to change spark.sql.runSQLOnFiles at runtime
  • [SPARK-19133][SPARKR][ML][BACKPORT-2.1] fix glm for Gamma, clarify glm family supported
  • [SPARK-19140][SS] Allow update mode for non-aggregation streaming queries
  • [SPARK-19140][SS] Allow update mode for non-aggregation streaming queries
  • [SPARK-18997][CORE] Recommended upgrade libthrift to 0.9.3
  • [SPARK-18997][CORE] Recommended upgrade libthrift to 0.9.3
  • [SPARK-19133][SPARKR][ML] fix glm for Gamma, clarify glm family supported
  • [SPARK-19113][SS][TESTS] Set UncaughtExceptionHandler in onQueryStarted to ensure catching fatal errors during query initialization