cs525-sp18-g07 / spark: Repository graph
Selected commit: c68fb426d4ac05414fb402aa1f30f4c98df103ad

Branches (4): baesline-eviction-with-logging, evict-by-size, master (default, protected), working

Tags (20): v2.3.0, v2.3.0-rc4, v2.3.0-rc3, v2.3.0-rc2, v2.3.0-rc1, v2.2.1, v2.2.1-rc2, v2.2.1-rc1, v2.1.2, v2.1.2-rc4, v2.1.2-rc3, v2.1.2-rc2, v2.1.2-rc1, v2.2.0, v2.1.1, v2.1.0, v2.0.2, v1.6.3, v2.0.1, v2.0.0
[Repository graph rendering omitted: commit activity spanning 28 Nov through 16 Jan]

Commits shown in the graph:
[SPARK-18905][STREAMING] Fix the issue of removing a failed jobset from JobScheduler.jobSets
[SPARK-18828][SPARKR] Refactor scripts for R
[SPARK-19232][SPARKR] Update Spark distribution download cache location on Windows
[SPARK-19066][SPARKR] SparkR LDA doesn't set optimizer correctly
[SPARK-18801][SQL][FOLLOWUP] Alias the view with its child
[SPARK-19082][SQL] Make ignoreCorruptFiles work for Parquet
[SPARK-19092][SQL][BACKPORT-2.1] Save() API of DataFrameWriter should not scan all the saved files #16481
[SPARK-19120] Refresh Metadata Cache After Loading Hive Tables
[SPARK-19206][DOC][DSTREAM] Fix outdated parameter descriptions in kafka010
[SPARK-18971][CORE] Upgrade Netty to 4.0.43.Final
[MINOR][DOC] Document local[*,F] master modes
[SPARK-19042] spark executor can't download the jars when uber jar's http url contains any query strings
[SPARK-19207][SQL] LocalSparkSession should use Slf4JLoggerFactory.INSTANCE
[SPARK-19151][SQL] DataFrameWriter.saveAsTable support hive overwrite
[SPARK-19221][PROJECT INFRA][R] Add winutils binaries to the path in AppVeyor tests for Hadoop libraries to call native codes properly
[SPARK-19180] [SQL] the offset of short should be 2 in OffHeapColumn
[SPARK-18335][SPARKR] createDataFrame to support numPartitions parameter
[SPARK-19178][SQL] convert string of large numbers to int should return null
[SPARK-18687][PYSPARK][SQL] Backward compatibility - creating a Dataframe on a new SQLContext object fails with a Derby error
Fix missing close-parens for In filter's toString
[SPARK-19142][SPARKR] spark.kmeans should take seed, initSteps, and tol as parameters
[SPARK-19092][SQL] Save() API of DataFrameWriter should not scan all the saved files
[SPARK-19110][MLLIB][FOLLOWUP] Add a unit test for testing logPrior and logLikelihood of DistributedLDAModel in MLLIB
[SPARK-17237][SQL] Remove backticks in a pivot result schema
[SPARK-12757][CORE] lower "block locks were not released" log to info level
[SPARK-19055][SQL][PYSPARK] Fix SparkSession initialization when SparkContext is stopped
[SPARK-18969][SQL] Support grouping by nondeterministic expressions
[SPARK-18857][SQL] Don't use `Iterator.duplicate` for `incrementalCollect` in Thrift Server
[SPARK-19183][SQL] Add deleteWithJob hook to internal commit protocol API