    [SPARK-16403][EXAMPLES] Cleanup to remove unused imports, consistent style, minor fixes · e3f8a033
    Bryan Cutler authored
    ## What changes were proposed in this pull request?
    
Cleanup of examples, mostly from PySpark-ML, to fix minor issues: unused imports, style consistency, a duplicate pipeline_example, use of the future print function, and a spelling error.
    
    * The "Pipeline Example" is duplicated by "Simple Text Classification Pipeline" in Scala, Python, and Java.
    
* "Estimator Transformer Param Example" is duplicated by "Simple Params Example" in Scala, Python, and Java.
    
* Synced random_forest_classifier_example.py with Scala by adding an IndexToString label converter (sketched below).
    
* Synced train_validation_split.py (ModelSelectionViaTrainValidationExample in Scala) by adjusting the data split and adding a grid for the intercept (sketched below).
    
* RegexTokenizer was doing nothing in tokenizer_example.py and JavaTokenizerExample.java; synced with the Scala version (sketched below).
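
As a rough, illustrative sketch of the IndexToString pattern (the toy data and column names below are made up; in the actual example the converter is applied to the model's prediction column):

```python
from pyspark.ml.feature import IndexToString, StringIndexer
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("IndexToStringSketch").getOrCreate()

df = spark.createDataFrame(
    [(0, "a"), (1, "b"), (2, "c"), (3, "a")], ["id", "category"])

# Fit the indexer and keep the model so its labels are available to the converter
indexer_model = StringIndexer(inputCol="category", outputCol="categoryIndex").fit(df)
indexed = indexer_model.transform(df)

# Map the numeric indices back to the original string labels
converter = IndexToString(inputCol="categoryIndex", outputCol="originalCategory",
                          labels=indexer_model.labels)
converter.transform(indexed).show()

spark.stop()
```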
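
A sketch of what "adding a grid for the intercept" means for the train/validation split example (the regParam values here are illustrative):

```python
from pyspark.ml.regression import LinearRegression
from pyspark.ml.tuning import ParamGridBuilder
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ParamGridSketch").getOrCreate()

lr = LinearRegression(maxIter=10)

# Include fitIntercept in the grid so model selection also searches over it
param_grid = ParamGridBuilder() \
    .addGrid(lr.regParam, [0.1, 0.01]) \
    .addGrid(lr.fitIntercept, [False, True]) \
    .build()
print("Number of parameter combinations: %d" % len(param_grid))

spark.stop()
```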
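
And a minimal sketch of actually applying the RegexTokenizer (the sentences and the "\\W" pattern are illustrative):

```python
from pyspark.ml.feature import RegexTokenizer
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("RegexTokenizerSketch").getOrCreate()

sentence_df = spark.createDataFrame(
    [(0, "Hi I heard about Spark"), (1, "Logistic,regression,models,are,neat")],
    ["id", "sentence"])

# Split on non-word characters instead of plain whitespace
regex_tokenizer = RegexTokenizer(inputCol="sentence", outputCol="words", pattern="\\W")
regex_tokenizer.transform(sentence_df).select("sentence", "words").show(truncate=False)

spark.stop()
```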
    
    ## How was this patch tested?
Local tests and running the modified examples.
    
    Author: Bryan Cutler <cutlerb@gmail.com>
    
    Closes #14081 from BryanCutler/examples-cleanup-SPARK-16403.
string_indexer_example.py
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from __future__ import print_function

# $example on$
from pyspark.ml.feature import StringIndexer
# $example off$
from pyspark.sql import SparkSession

if __name__ == "__main__":
    spark = SparkSession\
        .builder\
        .appName("StringIndexerExample")\
        .getOrCreate()

    # $example on$
    df = spark.createDataFrame(
        [(0, "a"), (1, "b"), (2, "c"), (3, "a"), (4, "a"), (5, "c")],
        ["id", "category"])

    # Map each category string to an index; by default the indices are ordered by
    # label frequency, so the most frequent label ("a" here) gets index 0.0
    indexer = StringIndexer(inputCol="category", outputCol="categoryIndex")
    indexed = indexer.fit(df).transform(df)
    indexed.show()
    # $example off$

    spark.stop()