cs525-sp18-g07 / spark · Repository graph
Selected revision: 226d38840c8d3f40639715d755df6fb4fee2715f
Branches (4)
baesline-eviction-with-logging
evict-by-size
master (default, protected)
working
Tags (20)
v2.3.0
v2.3.0-rc4
v2.3.0-rc3
v2.3.0-rc2
v2.3.0-rc1
v2.2.1
v2.2.1-rc2
v2.2.1-rc1
v2.1.2
v2.1.2-rc4
v2.1.2-rc3
v2.1.2-rc2
v2.1.2-rc1
v2.2.0
v2.1.1
v2.1.0
v2.0.2
v1.6.3
v2.0.1
v2.0.0
[Repository graph: commit activity from Jan 8 to Mar 10; commit messages listed below]
[SPARK-19723][SQL] create datasource table with an non-existent location should work
[SPARK-19611][SQL] Introduce configurable table schema inference
[SPARK-19893][SQL] should not run DataFrame set oprations with map type
[SPARK-19893][SQL] should not run DataFrame set oprations with map type
[SPARK-19905][SQL] Bring back Dataset.inputFiles for Hive SerDe tables
[SPARK-19611][SQL] Preserve metastore field order when merging inferred schema
[SPARK-17979][SPARK-14453] Remove deprecated SPARK_YARN_USER_ENV and SPARK_JAVA_OPTS
[SPARK-19620][SQL] Fix incorrect exchange coordinator id in the physical plan
[SPARK-19786][SQL] Facilitate loop optimizations in a JIT compiler regarding range()
[SPARK-19891][SS] Await Batch Lock notified on stream execution exit
[SPARK-19891][SS] Await Batch Lock notified on stream execution exit
[SPARK-19008][SQL] Improve performance of Dataset.map by eliminating boxing/unboxing
[SPARK-19886] Fix reportDataLoss if statement in SS KafkaSource
[SPARK-19886] Fix reportDataLoss if statement in SS KafkaSource
[SPARK-19611][SQL] Introduce configurable table schema inference
[SPARK-12334][SQL][PYSPARK] Support read from multiple input paths for orc file in DataFrameReader.orc
[SPARK-19861][SS] watermark should not be a negative time.
[SPARK-19861][SS] watermark should not be a negative time.
[SPARK-19715][STRUCTURED STREAMING] Option to Strip Paths in FileSource
[SPARK-19793] Use clock.getTimeMillis when mark task as finished in TaskSetManager.
[SPARK-19757][CORE] DriverEndpoint#makeOffers race against CoarseGrainedSchedulerBackend#killExecutors
[SPARK-19561][SQL] add int case handling for TimestampType
[SPARK-19561][SQL] add int case handling for TimestampType
[SPARK-19763][SQL] qualified external datasource table location stored in catalog
[SPARK-19859][SS][FOLLOW-UP] The new watermark should override the old one.
[SPARK-19859][SS][FOLLOW-UP] The new watermark should override the old one.
[SPARK-19874][BUILD] Hide API docs for org.apache.spark.sql.internal
[SPARK-19874][BUILD] Hide API docs for org.apache.spark.sql.internal
[SPARK-19235][SQL][TESTS] Enable Test Cases in DDLSuite with Hive Metastore
[MINOR][SQL] The analyzer rules are fired twice for cases when AnalysisException is raised from analyzer.
[MINOR][SQL] The analyzer rules are fired twice for cases when AnalysisException is raised from analyzer.
Revert "[SPARK-19413][SS] MapGroupsWithState for arbitrary stateful operations for branch-2.1"
[SPARK-19813] maxFilesPerTrigger combo latestFirst may miss old files in combination with maxFileAge in FileStreamSource
[SPARK-19813] maxFilesPerTrigger combo latestFirst may miss old files in combination with maxFileAge in FileStreamSource
[SPARK-15463][SQL] Add an API to load DataFrame from Dataset[String] storing CSV
[SPARK-19540][SQL] Add ability to clone SparkSession wherein cloned session has an identical copy of the SessionState
[SPARK-19858][SS] Add output mode to flatMapGroupsWithState and disallow invalid cases
[SPARK-19727][SQL] Fix for round function that modifies original column
[SPARK-19864][SQL][TEST] provide a makeQualifiedPath functions to optimize some code
[SPARK-19843][SQL][FOLLOWUP] Classdoc for `IntWrapper` and `LongWrapper`