Commit c9e05a31 authored by Holden Karau's avatar Holden Karau Committed by DB Tsai

[SPARK-8613] [ML] [TRIVIAL] add param to disable linear feature scaling

Add a param to disable linear feature scaling (to be implemented later in linear & logistic regression). Done as a separate PR so we can share the same param and avoid conflicts while working on the sub-tasks.

Author: Holden Karau <holden@pigscanfly.ca>

Closes #7024 from holdenk/SPARK-8522-Disable-Linear_featureScaling-Spark-8613-Add-param and squashes the following commits:

ce8931a [Holden Karau] Regenerate the sharedParams code
fa6427e [Holden Karau] update text for standardization param.
7b24a2b [Holden Karau] generate the new standardization param
3c190af [Holden Karau] Add the standardization param to sharedparamscodegen
parent 9fed6abf
@@ -53,6 +53,9 @@ private[shared] object SharedParamsCodeGen {
ParamDesc[Int]("checkpointInterval", "checkpoint interval (>= 1)",
isValid = "ParamValidators.gtEq(1)"),
ParamDesc[Boolean]("fitIntercept", "whether to fit an intercept term", Some("true")),
ParamDesc[Boolean]("standardization", "whether to standardize the training features" +
" prior to fitting the model sequence. Note that the coefficients of models are" +
" always returned on the original scale.", Some("true")),
ParamDesc[Long]("seed", "random seed", Some("this.getClass.getName.hashCode.toLong")),
ParamDesc[Double]("elasticNetParam", "the ElasticNet mixing parameter, in range [0, 1]." +
" For alpha = 0, the penalty is an L2 penalty. For alpha = 1, it is an L1 penalty.",
@@ -233,6 +233,23 @@ private[ml] trait HasFitIntercept extends Params {
final def getFitIntercept: Boolean = $(fitIntercept)
}
/**
* (private[ml]) Trait for shared param standardization (default: true).
*/
private[ml] trait HasStandardization extends Params {
/**
* Param for whether to standardize the training features prior to fitting the model sequence. Note that the coefficients of models are always returned on the original scale.
* @group param
*/
final val standardization: BooleanParam = new BooleanParam(this, "standardization", "whether to standardize the training features prior to fitting the model sequence. Note that the coefficients of models are always returned on the original scale.")
setDefault(standardization, true)
/** @group getParam */
final def getStandardization: Boolean = $(standardization)
}
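The generated `HasStandardization` trait follows Spark ML's shared-param mixin pattern: a trait owns the param definition, its default, and a getter, and any estimator that needs the param simply mixes the trait in. The sketch below illustrates that pattern with a hypothetical, heavily simplified `Params` base (a mutable map standing in for Spark's real `ParamMap` machinery) and a made-up `MyLinearRegression` estimator; none of these simplified classes are Spark's actual implementations.

```scala
// Hypothetical, simplified stand-in for Spark ML's Params machinery,
// just to show how a shared-param trait like HasStandardization is consumed.
trait Params {
  private val paramMap = scala.collection.mutable.Map[String, Any]()
  // Record a default only if the param has not been set yet.
  protected def setDefault(name: String, value: Any): Unit =
    paramMap.getOrElseUpdate(name, value)
  protected def set(name: String, value: Any): Unit = paramMap(name) = value
  protected def get(name: String): Any = paramMap(name)
}

// Mirrors the generated trait: carries the param's default and getter.
trait HasStandardization extends Params {
  setDefault("standardization", true)
  final def getStandardization: Boolean =
    get("standardization").asInstanceOf[Boolean]
}

// A made-up estimator mixing the trait in and exposing a chained setter.
class MyLinearRegression extends HasStandardization {
  def setStandardization(value: Boolean): this.type = {
    set("standardization", value)
    this
  }
}

val lr = new MyLinearRegression
println(lr.getStandardization)                          // default: true
println(lr.setStandardization(false).getStandardization) // overridden: false
```

Keeping the param in a shared trait is what the commit message means by "use same param": both linear and logistic regression can later mix in `HasStandardization` without duplicating the definition or its documentation string.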
/**
* (private[ml]) Trait for shared param seed (default: this.getClass.getName.hashCode.toLong).
*/