From 7afa912e747c77ebfd10bddf7bda2e3190fdeb9c Mon Sep 17 00:00:00 2001
From: Anatoli Fomenko <fa@apache.org>
Date: Mon, 16 Jun 2014 23:10:36 -0700
Subject: [PATCH] MLlib documentation fix

Synchronized mllib-optimization.md with the Spark Scaladoc: removed the
reference to the GradientDescent.runMiniBatchSGD method.

This is a temporary fix to remove a link from
http://spark.apache.org/docs/latest/mllib-optimization.html to
GradientDescent.runMiniBatchSGD, which is not in the current online
GradientDescent Scaladoc.

FIXME: revert this commit after the GradientDescent Scaladoc is updated.
See images for details.

Author: Anatoli Fomenko <fa@apache.org>

Closes #1098 from afomenko/master and squashes the following commits:

5cb0758 [Anatoli Fomenko] MLlib documentation fix
---
 docs/mllib-optimization.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/docs/mllib-optimization.md b/docs/mllib-optimization.md
index 97e8f4e966..ae9ede58e8 100644
--- a/docs/mllib-optimization.md
+++ b/docs/mllib-optimization.md
@@ -147,9 +147,9 @@ are developed, see the
 <a href="mllib-linear-methods.html">linear methods</a>
 section for example.
 
-The SGD method
-[GradientDescent.runMiniBatchSGD](api/scala/index.html#org.apache.spark.mllib.optimization.GradientDescent)
-has the following parameters:
+The SGD class
+[GradientDescent](api/scala/index.html#org.apache.spark.mllib.optimization.GradientDescent)
+sets the following parameters:
 
 * `Gradient` is a class that computes the stochastic gradient of the function
 being optimized, i.e., with respect to a single training example, at the
@@ -171,7 +171,7 @@ each iteration, to compute the gradient direction.
 
 Available algorithms for gradient descent:
 
-* [GradientDescent.runMiniBatchSGD](api/scala/index.html#org.apache.spark.mllib.optimization.GradientDescent)
+* [GradientDescent](api/scala/index.html#org.apache.spark.mllib.optimization.GradientDescent)
 
 ### L-BFGS
 L-BFGS is currently only a low-level optimization primitive in `MLlib`. If you want to use L-BFGS in various
--
GitLab
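
For reference, below is a minimal sketch of how the `GradientDescent.runMiniBatchSGD` method that this patch unlinks is typically invoked. Per the FIXME above, the method exists in `org.apache.spark.mllib.optimization` but was missing from the published Scaladoc at the time. The class and method names (`GradientDescent.runMiniBatchSGD`, `LogisticGradient`, `SquaredL2Updater`) match the Spark 1.x API; the toy data set and parameter values are illustrative assumptions, not part of the patch.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.optimization.{GradientDescent, LogisticGradient, SquaredL2Updater}

object GradientDescentSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("gd-sketch").setMaster("local[2]"))

    // Toy binary-classification data: (label, features) pairs.
    // Values here are made up purely for illustration.
    val data = sc.parallelize(Seq(
      (1.0, Vectors.dense(1.0, 2.0)),
      (0.0, Vectors.dense(-1.0, -0.5)),
      (1.0, Vectors.dense(2.0, 1.5)),
      (0.0, Vectors.dense(-2.0, -1.0))
    ))

    // The companion object's runMiniBatchSGD takes the Gradient, Updater,
    // stepSize, numIterations, regParam, and miniBatchFraction parameters
    // described in the documentation text this patch modifies.
    val (weights, lossHistory) = GradientDescent.runMiniBatchSGD(
      data,
      new LogisticGradient(),     // stochastic gradient for logistic loss
      new SquaredL2Updater(),     // L2-regularized weight update
      1.0,                        // stepSize
      100,                        // numIterations
      0.1,                        // regParam
      1.0,                        // miniBatchFraction
      Vectors.dense(0.0, 0.0))    // initialWeights

    println(s"weights: $weights, final loss: ${lossHistory.last}")
    sc.stop()
  }
}
```

Setting `miniBatchFraction` to 1.0 makes each iteration a full-batch gradient step; smaller fractions sample a subset of the data per iteration, trading gradient accuracy for per-iteration cost, as described in the documentation being patched.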