diff --git a/doc/getting_started.rst b/doc/getting_started.rst
index aefdf71b0cdbfbd2c761041bde96e4e79ee97585..cc8b7cb989dc33c698d3b8fc6ef62a5cc2ef00aa 100644
--- a/doc/getting_started.rst
+++ b/doc/getting_started.rst
@@ -125,15 +125,17 @@ We will be using the term QoS throughout the tutorials.
 :py:meth:`tuner.tune <predtuner.modeledapp.ApproxModeledTuner.tune>`
 is the main method for running a tuning session.
 It accepts a few parameters which control the behavior of tuning.
-`max_iter` defines the number of iterations to use in autotuning.
-Within 1000 iterations, PredTuner should find about 200 valid configurations.
-PredTuner will also automatically mark out `Pareto-optimal
-<https://en.wikipedia.org/wiki/Pareto_efficiency>`_
-configurations.
-These are called "best" configurations (`tuner.best_configs`),
-in contrast to "valid" configurations which are the configurations that satisfy our accuracy requirements
-(`tuner.kept_configs`).
-`take_best_n` allows taking some extra close-optimal configurations in addition to Pareto-optimal ones.
+
+* `qos_keep_threshold` sets the QoS threshold above which found configurations are kept.
+  These are called the "kept" configurations and are accessible from `tuner.kept_configs`.
+
+* `max_iter` defines the number of iterations to use in autotuning.
+  Within 1000 iterations, PredTuner should be able to find about 200 "kept" configurations.
+
+* PredTuner will also automatically mark out
+  `Pareto-optimal <https://en.wikipedia.org/wiki/Pareto_efficiency>`_ configurations.
+  These are called "best" configurations (`tuner.best_configs`).
+  `take_best_n` allows taking some extra near-optimal configurations in addition to
+  the Pareto-optimal ones (a call using these parameters is sketched below).
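+
+As a rough sketch of how these parameters fit together (the threshold and
+`take_best_n` values below are hypothetical, and your setup may require
+additional arguments not covered on this page), a tuning call could look like:
+
+.. code-block:: python
+
+   # Run 1000 autotuning iterations (demo-sized; see the note below),
+   # keep configurations whose QoS stays above a hypothetical threshold of 40.0,
+   # and take some near-optimal configurations alongside the Pareto-optimal ones.
+   tuner.tune(max_iter=1000, qos_keep_threshold=40.0, take_best_n=50)
+   kept = tuner.kept_configs   # configurations satisfying the QoS threshold
+   best = tuner.best_configs   # Pareto-optimal (and near-optimal) configurations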
 
 1000 iterations is for demonstration; in practice,
 at least 10000 iterations are necessary on VGG16-sized models to converge to a set of good configurations.