cs525-sp18-g07 / spark
Repository graph
Selected commit: c6b8eb71a9638c9a8ce02d11d5fe26f4c5be531e
Branches (4):
baesline-eviction-with-logging
evict-by-size
master (default, protected)
working
Tags (20):
v2.3.0
v2.3.0-rc4
v2.3.0-rc3
v2.3.0-rc2
v2.3.0-rc1
v2.2.1
v2.2.1-rc2
v2.2.1-rc1
v2.1.2
v2.1.2-rc4
v2.1.2-rc3
v2.1.2-rc2
v2.1.2-rc1
v2.2.0
v2.1.1
v2.1.0
v2.0.2
v1.6.3
v2.0.1
v2.0.0
[Commit graph date axis: Jan 18 back through Nov 28]

Commits (newest first):
[SPARK-18917][SQL] Remove schema check in appending data
[MINOR][SQL] Remove duplicate call of reset() function in CurrentOrigin.withOrigin()
[SPARK-19239][PYSPARK] Check parameters whether equals None when specify the column in jdbc API
[SPARK-19129][SQL] SessionCatalog: Disallow empty part col values in partition spec
[SPARK-19065][SQL] Don't inherit expression id in dropDuplicates
[SPARK-19019] [PYTHON] Fix hijacked `collections.namedtuple` and port cloudpickle changes for PySpark to work with Python 3.6.0
[SPARK-19179][YARN] Change spark.yarn.access.namenodes config and update docs
[SPARK-3249][DOC] Fix links in ScalaDoc that cause warning messages in `sbt/sbt unidoc`
[SPARK-19219][SQL] Fix Parquet log output defaults
[SPARK-19240][SQL][TEST] add test for setting location for managed table
[MINOR][YARN] Move YarnSchedulerBackendSuite to resource-managers/yarn directory.
[SPARK-19148][SQL] do not expose the external table concept in Catalog
[SPARK-18905][STREAMING] Fix the issue of removing a failed jobset from JobScheduler.jobSets
[SPARK-18828][SPARKR] Refactor scripts for R
[SPARK-19232][SPARKR] Update Spark distribution download cache location on Windows
[SPARK-19066][SPARKR] SparkR LDA doesn't set optimizer correctly
[SPARK-18801][SQL][FOLLOWUP] Alias the view with its child
[SPARK-19082][SQL] Make ignoreCorruptFiles work for Parquet
[SPARK-19092][SQL][BACKPORT-2.1] Save() API of DataFrameWriter should not scan all the saved files #16481
[SPARK-19120] Refresh Metadata Cache After Loading Hive Tables
[SPARK-19206][DOC][DSTREAM] Fix outdated parameter descriptions in kafka010
[SPARK-18971][CORE] Upgrade Netty to 4.0.43.Final
[MINOR][DOC] Document local[*,F] master modes
[SPARK-19042] spark executor can't download the jars when uber jar's http url contains any query strings
[SPARK-19207][SQL] LocalSparkSession should use Slf4JLoggerFactory.INSTANCE
[SPARK-19151][SQL] DataFrameWriter.saveAsTable support hive overwrite
[SPARK-19221][PROJECT INFRA][R] Add winutils binaries to the path in AppVeyor tests for Hadoop libraries to call native codes properly
[SPARK-19180] [SQL] the offset of short should be 2 in OffHeapColumn
[SPARK-18335][SPARKR] createDataFrame to support numPartitions parameter
[SPARK-19178][SQL] convert string of large numbers to int should return null
[SPARK-18687][PYSPARK][SQL] Backward compatibility - creating a Dataframe on a new SQLContext object fails with a Derby error