Commit 857ecff1 authored by Xiangrui Meng

[SPARK-16155][DOC] remove package grouping in Java docs

## What changes were proposed in this pull request?

In 1.4 and earlier releases, we had package grouping in the generated Java API docs. See http://spark.apache.org/docs/1.4.0/api/java/index.html. However, this disappeared in 1.5.0: http://spark.apache.org/docs/1.5.0/api/java/index.html.

Rather than fixing it, I'd suggest removing the grouping, because fixing it might take some time and updating the grouping in `SparkBuild.scala` is a manual process. I didn't find anyone complaining about the missing groups since 1.5.0 on Google.
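
For context, the grouping being removed is a hand-maintained list of javadoc `-group` options built with a small `packageList` helper (see the removed lines in the diff below). The following is a minimal, self-contained sketch of that pattern; the object name `JavadocGroupsSketch` and the trimmed-down group contents are invented for illustration, not taken from `SparkBuild.scala`:

```scala
// Minimal sketch of the JavaDoc grouping pattern removed by this change.
// Every new public package has to be appended by hand to the right
// packageList(...) call, which is the maintenance cost mentioned above.
object JavadocGroupsSketch {
  // Mirrors the removed helper: prefix each name and join with ':' as javadoc expects.
  private def packageList(names: String*): String =
    names.map(s => "org.apache.spark." + s).mkString(":")

  def main(args: Array[String]): Unit = {
    // Hypothetical, trimmed-down group definitions for illustration only.
    val groupOptions: Seq[String] = Seq(
      "-group", "Core Java API", packageList("api.java", "api.java.function"),
      "-group", "Spark Streaming", packageList("streaming.api.java", "streaming.kafka")
    )
    println(groupOptions.mkString(" "))
  }
}
```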

## How was this patch tested?

I manually checked the generated Java API docs and confirmed that they are the same as those generated from master.

Author: Xiangrui Meng <meng@databricks.com>

Closes #13856 from mengxr/SPARK-16155.
parent 00cc5cca
project/SparkBuild.scala
@@ -684,11 +684,6 @@ object Unidoc {
   import sbtunidoc.Plugin._
   import UnidocKeys._
 
-  // for easier specification of JavaDoc package groups
-  private def packageList(names: String*): String = {
-    names.map(s => "org.apache.spark." + s).mkString(":")
-  }
-
   private def ignoreUndocumentedPackages(packages: Seq[Seq[File]]): Seq[Seq[File]] = {
     packages
       .map(_.filterNot(_.getName.contains("$")))
@@ -731,21 +726,6 @@ object Unidoc {
     javacOptions in doc := Seq(
       "-windowtitle", "Spark " + version.value.replaceAll("-SNAPSHOT", "") + " JavaDoc",
       "-public",
-      "-group", "Core Java API", packageList("api.java", "api.java.function"),
-      "-group", "Spark Streaming", packageList(
-        "streaming.api.java", "streaming.flume", "streaming.kafka", "streaming.kinesis"
-      ),
-      "-group", "MLlib", packageList(
-        "mllib.classification", "mllib.clustering", "mllib.evaluation.binary", "mllib.linalg",
-        "mllib.linalg.distributed", "mllib.optimization", "mllib.rdd", "mllib.recommendation",
-        "mllib.regression", "mllib.stat", "mllib.tree", "mllib.tree.configuration",
-        "mllib.tree.impurity", "mllib.tree.model", "mllib.util",
-        "mllib.evaluation", "mllib.feature", "mllib.random", "mllib.stat.correlation",
-        "mllib.stat.test", "mllib.tree.impl", "mllib.tree.loss",
-        "ml", "ml.attribute", "ml.classification", "ml.clustering", "ml.evaluation", "ml.feature",
-        "ml.param", "ml.recommendation", "ml.regression", "ml.tuning"
-      ),
-      "-group", "Spark SQL", packageList("sql.api.java", "sql.api.java.types", "sql.hive.api.java"),
       "-noqualifier", "java.lang"
     ),
...
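
With the `-group` entries gone, the `javacOptions in doc` setting in the `Unidoc` object reduces to roughly the following. This is a fragment of the sbt build definition reassembled from the unchanged context lines above, not a standalone snippet; without `-group`, javadoc simply lists all packages in a single flat table on the overview page, which is what the published docs since 1.5.0 already show.

```scala
// Fragment of the Unidoc settings after this change (from the context lines above).
// Only the window title, visibility, and qualifier handling remain; no package groups.
javacOptions in doc := Seq(
  "-windowtitle", "Spark " + version.value.replaceAll("-SNAPSHOT", "") + " JavaDoc",
  "-public",
  "-noqualifier", "java.lang"
)
```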