diff --git a/docs/mllib-optimization.md b/docs/mllib-optimization.md
index 97e8f4e9661b6acf9c5861fdd0113774f18658fc..ae9ede58e8e6033343d84db3557b309d1a61ebb5 100644
--- a/docs/mllib-optimization.md
+++ b/docs/mllib-optimization.md
@@ -147,9 +147,9 @@ are developed, see the
 <a href="mllib-linear-methods.html">linear methods</a> 
 section, for example.
 
-The SGD method
-[GradientDescent.runMiniBatchSGD](api/scala/index.html#org.apache.spark.mllib.optimization.GradientDescent)
-has the following parameters:
+The SGD class
+[GradientDescent](api/scala/index.html#org.apache.spark.mllib.optimization.GradientDescent)
+has the following parameters:
 
 * `Gradient` is a class that computes the stochastic gradient of the function
 being optimized, i.e., with respect to a single training example, at the
@@ -171,7 +171,7 @@ each iteration, to compute the gradient direction.
 
 Available algorithms for gradient descent:
 
-* [GradientDescent.runMiniBatchSGD](api/scala/index.html#org.apache.spark.mllib.optimization.GradientDescent)
+* [GradientDescent](api/scala/index.html#org.apache.spark.mllib.optimization.GradientDescent)
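+
+As a rough usage sketch (not from the official API docs): assuming an existing
+`SparkContext` named `sc`, the sample LIBSVM data file shipped with Spark, and
+the illustrative choice of `LogisticGradient` with `SquaredL2Updater`,
+`GradientDescent.runMiniBatchSGD` can be called directly with the parameters
+described above:
+
+{% highlight scala %}
+import org.apache.spark.mllib.linalg.Vectors
+import org.apache.spark.mllib.optimization.{GradientDescent, LogisticGradient, SquaredL2Updater}
+import org.apache.spark.mllib.util.MLUtils
+
+// Load training data and append a bias term to each feature vector.
+val training = MLUtils.loadLibSVMFile(sc, "data/mllib/sample_libsvm_data.txt")
+  .map(point => (point.label, MLUtils.appendBias(point.features)))
+  .cache()
+
+val numFeatures = training.first()._2.size
+
+// Run mini-batch SGD with the logistic loss and L2 regularization.
+val (weightsWithIntercept, lossHistory) = GradientDescent.runMiniBatchSGD(
+  training,
+  new LogisticGradient(),
+  new SquaredL2Updater(),
+  1.0,   // stepSize
+  100,   // numIterations
+  0.1,   // regParam
+  1.0,   // miniBatchFraction: 1.0 uses the full data set in each iteration
+  Vectors.dense(new Array[Double](numFeatures)))  // initialWeights, all zeros
+{% endhighlight %}
+
+`runMiniBatchSGD` returns both the solution weights and the stochastic loss
+computed at each iteration, so `lossHistory` can be used to check convergence.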
 
 ### L-BFGS
 L-BFGS is currently only a low-level optimization primitive in `MLlib`. If you want to use L-BFGS in various