cs525-sp18-g07
spark
Repository graph

Selected commit: 0f7c9e84e0d00813bf56712097677add5657f19f
Branches (4):
  baesline-eviction-with-logging
  evict-by-size
  master (default, protected)
  working
Tags (20):
  v2.3.0
  v2.3.0-rc4
  v2.3.0-rc3
  v2.3.0-rc2
  v2.3.0-rc1
  v2.2.1
  v2.2.1-rc2
  v2.2.1-rc1
  v2.2.0
  v2.1.2
  v2.1.2-rc4
  v2.1.2-rc3
  v2.1.2-rc2
  v2.1.2-rc1
  v2.1.1
  v2.1.0
  v2.0.2
  v2.0.1
  v2.0.0
  v1.6.3
[Commit graph: date axis spans 14 Oct – 23 Nov]
Commits (newest first):
  [SPARK-18510][SQL] Follow up to address comments in #15951
  [SPARK-18510] Fix data corruption from inferred partition column dataTypes
  [SPARK-18050][SQL] do not create default database if it already exists
  [SPARK-18522][SQL] Explicit contract for column stats serialization
  [SPARK-18557] Downgrade confusing memory leak warning message
  [SPARK-18545][SQL] Verify number of hive client RPCs in PartitionedTablePerfStatsSuite
  [SPARK-18053][SQL] compare unsafe and safe complex-type values correctly
  [SPARK-18073][DOCS][WIP] Migrate wiki to spark.apache.org web site
  [SPARK-18179][SQL] Throws analysis exception with a proper message for unsupported argument types in reflect/java_method function
  [SPARK-18501][ML][SPARKR] Fix spark.glm errors when fitting on collinear data
  [SPARK-18530][SS][KAFKA] Change Kafka timestamp column type to TimestampType
  [SPARK-18533] Raise correct error upon specification of schema for datasource tables created using CTAS
  [SPARK-16803][SQL] SaveAsTable does not work when target table is a Hive serde table
  [SPARK-18373][SPARK-18529][SS][KAFKA] Make failOnDataLoss=false work with Spark jobs
  [SPARK-18465] Add 'IF EXISTS' clause to 'UNCACHE' to not throw exceptions when table doesn't exist
  [SPARK-18507][SQL] HiveExternalCatalog.listPartitions should only call getTable once
  [SPARK-18504][SQL] Scalar subquery with extra group by columns returning incorrect result
  [SPARK-18519][SQL] map type can not be used in EqualTo
  [SPARK-18447][DOCS] Fix the markdown for `Note:`/`NOTE:`/`Note that` across Python API documentation
  [SPARK-18514][DOCS] Fix the markdown for `Note:`/`NOTE:`/`Note that` across R API documentation
  [SPARK-18444][SPARKR] SparkR running in yarn-cluster mode should not download Spark package.