[SPARK-15031][EXAMPLE] Use SparkSession in Scala/Python/Java example. · cdce4e62
Dongjoon Hyun authored
    ## What changes were proposed in this pull request?
    
    This PR aims to update Scala/Python/Java examples by replacing `SQLContext` with newly added `SparkSession`.
    
    - Use the **SparkSession Builder Pattern** in 154 files (Scala 55, Java 52, Python 47); see the first sketch after this list.
    - Add `getConf` to the Python `SparkContext` class: `python/pyspark/context.py`
    - Replace the **SQLContext Singleton Pattern** with the **SparkSession Singleton Pattern** (second sketch below):
      - `SqlNetworkWordCount.scala`
      - `JavaSqlNetworkWordCount.java`
      - `sql_network_wordcount.py`
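
    As a minimal sketch (not taken verbatim from any one file), the builder pattern in the updated Python examples looks roughly like this; the app name is a placeholder, and the input path assumes a file shipped with the Spark source tree:

    ```python
    from pyspark.sql import SparkSession

    if __name__ == "__main__":
        # Build (or reuse) a SparkSession instead of constructing a SQLContext.
        spark = SparkSession \
            .builder \
            .appName("ExamplePlaceholder") \
            .getOrCreate()

        # The newly added getConf returns a copy of the underlying SparkConf.
        conf = spark.sparkContext.getConf()

        df = spark.read.json("examples/src/main/resources/people.json")
        df.show()

        spark.stop()
    ```

    And the lazily initialized singleton that replaces the `SQLContext` singleton in the streaming examples, roughly as in `sql_network_wordcount.py`:

    ```python
    from pyspark.sql import SparkSession

    def getSparkSessionInstance(sparkConf):
        # Lazily instantiate one global SparkSession so that every batch
        # of a streaming job reuses the same session.
        if "sparkSessionSingletonInstance" not in globals():
            globals()["sparkSessionSingletonInstance"] = SparkSession \
                .builder \
                .config(conf=sparkConf) \
                .getOrCreate()
        return globals()["sparkSessionSingletonInstance"]
    ```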
    
    Now, `SQLContext` is used only in the R examples and in the following two Python examples. The Python examples are left untouched in this PR because they already fail with an unknown issue.
    - `simple_params_example.py`
    - `aft_survival_regression.py`
    
    ## How was this patch tested?
    
    Manual.
    
    Author: Dongjoon Hyun <dongjoon@apache.org>
    
    Closes #12809 from dongjoon-hyun/SPARK-15031.
binary_classification_metrics_example.py
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
Binary Classification Metrics Example.
"""
from __future__ import print_function
from pyspark import SparkContext
# $example on$
from pyspark.mllib.classification import LogisticRegressionWithLBFGS
from pyspark.mllib.evaluation import BinaryClassificationMetrics
from pyspark.mllib.util import MLUtils
# $example off$

if __name__ == "__main__":
    sc = SparkContext(appName="BinaryClassificationMetricsExample")

    # $example on$
    # Several of the methods available in Scala are currently missing from PySpark
    # Load training data in LIBSVM format
    data = MLUtils.loadLibSVMFile(sc, "data/mllib/sample_binary_classification_data.txt")

    # Split data into training (60%) and test (40%)
    training, test = data.randomSplit([0.6, 0.4], seed=11)
    training.cache()

    # Run training algorithm to build the model
    model = LogisticRegressionWithLBFGS.train(training)

    # Compute raw scores on the test set
    predictionAndLabels = test.map(lambda lp: (float(model.predict(lp.features)), lp.label))

    # Instantiate metrics object
    metrics = BinaryClassificationMetrics(predictionAndLabels)

    # Area under precision-recall curve
    print("Area under PR = %s" % metrics.areaUnderPR)

    # Area under ROC curve
    print("Area under ROC = %s" % metrics.areaUnderROC)
    # $example off$

    sc.stop()
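
To try the example, it can be submitted from a Spark checkout with `bin/spark-submit examples/src/main/python/mllib/binary_classification_metrics_example.py` (assuming the file sits at its usual location under the MLlib Python examples).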