---
displayTitle: Spark SQL, DataFrames and Datasets Guide
title: Spark SQL and DataFrames
---

* This will become a table of contents (this text will be scraped).
{:toc}
    
    # Overview
    
Spark SQL is a Spark module for structured data processing. Unlike the basic Spark RDD API, the interfaces provided
by Spark SQL give Spark more information about the structure of both the data and the computation being performed.
Internally, Spark SQL uses this extra information to perform additional optimizations. There are several ways to
interact with Spark SQL, including SQL and the Dataset API. When computing a result, the same execution engine is
used, independent of which API/language you are using to express the computation. This unification means that
developers can easily switch back and forth between different APIs based on which provides the most natural way to
express a given transformation.
    
    All of the examples on this page use sample data included in the Spark distribution and can be run in
    the `spark-shell`, `pyspark` shell, or `sparkR` shell.
    
## SQL

One use of Spark SQL is to execute SQL queries.
Spark SQL can also be used to read data from an existing Hive installation. For more on how to
configure this feature, please refer to the [Hive Tables](#hive-tables) section. When running
SQL from within another programming language the results will be returned as a [Dataset/DataFrame](#datasets-and-dataframes).
You can also interact with the SQL interface using the [command-line](#running-the-spark-sql-cli)
or over [JDBC/ODBC](#running-the-thrift-jdbcodbc-server).
    
## Datasets and DataFrames

A Dataset is a distributed collection of data.
Dataset is a new interface added in Spark 1.6 that provides the benefits of RDDs (strong
typing, ability to use powerful lambda functions) with the benefits of Spark SQL's optimized
execution engine. A Dataset can be [constructed](#creating-datasets) from JVM objects and then
manipulated using functional transformations (`map`, `flatMap`, `filter`, etc.).
The Dataset API is available in [Scala][scala-datasets] and
[Java][java-datasets]. Python does not have support for the Dataset API, but due to Python's dynamic nature,
many of the benefits of the Dataset API are already available (i.e. you can access the field of a row by name naturally,
`row.columnName`). The case for R is similar.
    
    A DataFrame is a *Dataset* organized into named columns. It is conceptually
    equivalent to a table in a relational database or a data frame in R/Python, but with richer
    optimizations under the hood. DataFrames can be constructed from a wide array of [sources](#data-sources) such
    as: structured data files, tables in Hive, external databases, or existing RDDs.
    The DataFrame API is available in Scala,
    Java, [Python](api/python/pyspark.sql.html#pyspark.sql.DataFrame), and [R](api/R/index.html).
    In Scala and Java, a DataFrame is represented by a Dataset of `Row`s.
    In [the Scala API][scala-datasets], `DataFrame` is simply a type alias of `Dataset[Row]`.
In the [Java API][java-datasets], users need to use `Dataset<Row>` to represent a `DataFrame`.
    
    [scala-datasets]: api/scala/index.html#org.apache.spark.sql.Dataset
    [java-datasets]: api/java/index.html?org/apache/spark/sql/Dataset.html
    
    Throughout this document, we will often refer to Scala/Java Datasets of `Row`s as DataFrames.
    
# Getting Started

## Starting Point: SparkSession

<div class="codetabs">
    <div data-lang="scala"  markdown="1">
    
    
    The entry point into all functionality in Spark is the [`SparkSession`](api/scala/index.html#org.apache.spark.sql.SparkSession) class. To create a basic `SparkSession`, just use `SparkSession.builder()`:
    
    {% include_example init_session scala/org/apache/spark/examples/sql/RDDRelation.scala %}
    
    <div data-lang="java" markdown="1">
    
    The entry point into all functionality in Spark is the [`SparkSession`](api/java/index.html#org.apache.spark.sql.SparkSession) class. To create a basic `SparkSession`, just use `SparkSession.builder()`:
    
    {% include_example init_session java/org/apache/spark/examples/sql/JavaSparkSQL.java %}
    
    <div data-lang="python"  markdown="1">
    
    
    The entry point into all functionality in Spark is the [`SparkSession`](api/python/pyspark.sql.html#pyspark.sql.SparkSession) class. To create a basic `SparkSession`, just use `SparkSession.builder`:
    
    {% include_example init_session python/sql.py %}
    
</div>

<div data-lang="r"  markdown="1">

The entry point into all functionality in Spark is the [`SparkSession`](api/R/sparkR.session.html) class. To initialize a basic `SparkSession`, just call `sparkR.session()`:
    
    {% include_example init_session r/RSparkSQLExample.R %}
    
Note that when invoked for the first time, `sparkR.session()` initializes a global `SparkSession` singleton instance, and always returns a reference to this instance for successive invocations. In this way, users only need to initialize the `SparkSession` once, then SparkR functions like `read.df` will be able to access this global instance implicitly, and users don't need to pass the `SparkSession` instance around.

</div>
</div>
    
`SparkSession` in Spark 2.0 provides builtin support for Hive features, including the ability to
write queries using HiveQL, access to Hive UDFs, and the ability to read data from Hive tables.
To use these features, you do not need to have an existing Hive setup.
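
For example, Hive support is switched on when building the session, as in this minimal sketch (it assumes the Spark SQL Hive dependencies are on the classpath; the warehouse path is an illustrative choice):

{% highlight scala %}
import org.apache.spark.sql.SparkSession

val spark = SparkSession
  .builder()
  .appName("Spark SQL with Hive support")
  // Illustrative location for the default managed-table warehouse
  .config("spark.sql.warehouse.dir", "/tmp/spark-warehouse")
  // Enables HiveQL, Hive UDFs, and reading from Hive tables
  .enableHiveSupport()
  .getOrCreate()

spark.sql("SHOW TABLES").show()
{% endhighlight %}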
    
    ## Creating DataFrames
    
    <div class="codetabs">
    <div data-lang="scala"  markdown="1">
    
    With a `SparkSession`, applications can create DataFrames from an [existing `RDD`](#interoperating-with-rdds),
    from a Hive table, or from [Spark data sources](#data-sources).
    
    As an example, the following creates a DataFrame based on the content of a JSON file:
    
{% highlight scala %}
val spark: SparkSession // An existing SparkSession.
val df = spark.read.json("examples/src/main/resources/people.json")

// Displays the content of the DataFrame to stdout
df.show()
{% endhighlight %}
    
    </div>
    
    <div data-lang="java" markdown="1">
    
    With a `SparkSession`, applications can create DataFrames from an [existing `RDD`](#interoperating-with-rdds),
    from a Hive table, or from [Spark data sources](#data-sources).
    
    As an example, the following creates a DataFrame based on the content of a JSON file:
    
    {% highlight java %}
    SparkSession spark = ...; // An existing SparkSession.
    Dataset<Row> df = spark.read().json("examples/src/main/resources/people.json");
    
    
    // Displays the content of the DataFrame to stdout
    df.show();
    {% endhighlight %}
    
    </div>
    
    <div data-lang="python"  markdown="1">
    
    With a `SparkSession`, applications can create DataFrames from an [existing `RDD`](#interoperating-with-rdds),
    from a Hive table, or from [Spark data sources](#data-sources).
    
    As an example, the following creates a DataFrame based on the content of a JSON file:
    
    {% highlight python %}
    # spark is an existing SparkSession
    df = spark.read.json("examples/src/main/resources/people.json")
    
    
    # Displays the content of the DataFrame to stdout
    df.show()
    {% endhighlight %}
    
    </div>
    
<div data-lang="r"  markdown="1">

With a `SparkSession`, applications can create DataFrames from a local R data.frame,
from a Hive table, or from [Spark data sources](#data-sources).

As an example, the following creates a DataFrame based on the content of a JSON file:

{% include_example create_DataFrames r/RSparkSQLExample.R %}

</div>
</div>
    
    ## Untyped Dataset Operations (aka DataFrame Operations)
    
    DataFrames provide a domain-specific language for structured data manipulation in [Scala](api/scala/index.html#org.apache.spark.sql.Dataset), [Java](api/java/index.html?org/apache/spark/sql/Dataset.html), [Python](api/python/pyspark.sql.html#pyspark.sql.DataFrame) and [R](api/R/DataFrame.html).
    
As mentioned above, in Spark 2.0, DataFrames are just Datasets of `Row`s in the Scala and Java APIs. These operations are also referred to as "untyped transformations", in contrast to the "typed transformations" that come with strongly typed Scala/Java Datasets.
    
    Here we include some basic examples of structured data processing using Datasets:
    
    
    <div class="codetabs">
    <div data-lang="scala"  markdown="1">
{% highlight scala %}
val spark: SparkSession // An existing SparkSession
val df = spark.read.json("examples/src/main/resources/people.json")

// Show the content of the DataFrame
df.show()

// Print the schema in a tree format
df.printSchema()
// root
// |-- age: long (nullable = true)
// |-- name: string (nullable = true)

// Select only the "name" column
df.select("name").show()

// Select everybody, but increment the age by 1
df.select(df("name"), df("age") + 1).show()

// Select people older than 21
df.filter(df("age") > 21).show()
// age name
// 30  Andy

// Count people by age
df.groupBy("age").count().show()
// age  count
// null 1
// 19   1
// 30   1
{% endhighlight %}
    
    
    For a complete list of the types of operations that can be performed on a Dataset refer to the [API Documentation](api/scala/index.html#org.apache.spark.sql.Dataset).
    
    In addition to simple column references and expressions, Datasets also have a rich library of functions including string manipulation, date arithmetic, common math operations and more. The complete list is available in the [DataFrame Function Reference](api/scala/index.html#org.apache.spark.sql.functions$).
    
    </div>
    
    <div data-lang="java" markdown="1">
{% highlight java %}
SparkSession spark = ...; // An existing SparkSession
Dataset<Row> df = spark.read().json("examples/src/main/resources/people.json");

// Show the content of the DataFrame
df.show();

// Print the schema in a tree format
df.printSchema();
// root
// |-- age: long (nullable = true)
// |-- name: string (nullable = true)

// Select only the "name" column
df.select("name").show();

// Select everybody, but increment the age by 1
df.select(df.col("name"), df.col("age").plus(1)).show();

// Select people older than 21
df.filter(df.col("age").gt(21)).show();
// age name
// 30  Andy

// Count people by age
df.groupBy("age").count().show();
// age  count
// null 1
// 19   1
// 30   1
{% endhighlight %}
    
    
    For a complete list of the types of operations that can be performed on a Dataset refer to the [API Documentation](api/java/org/apache/spark/sql/Dataset.html).
    
    In addition to simple column references and expressions, Datasets also have a rich library of functions including string manipulation, date arithmetic, common math operations and more. The complete list is available in the [DataFrame Function Reference](api/java/org/apache/spark/sql/functions.html).
    
    </div>
    
    <div data-lang="python"  markdown="1">
    
    In Python it's possible to access a DataFrame's columns either by attribute
    (`df.age`) or by indexing (`df['age']`). While the former is convenient for
    interactive data exploration, users are highly encouraged to use the
    latter form, which is future proof and won't break with column names that
    are also attributes on the DataFrame class.
    
    
{% highlight python %}
# spark is an existing SparkSession
df = spark.read.json("examples/src/main/resources/people.json")

# Show the content of the DataFrame
df.show()

# Print the schema in a tree format
df.printSchema()
## root
## |-- age: long (nullable = true)
## |-- name: string (nullable = true)

# Select only the "name" column
df.select("name").show()

# Select everybody, but increment the age by 1
df.select(df['name'], df['age'] + 1).show()

# Select people older than 21
df.filter(df['age'] > 21).show()
## age name
## 30  Andy

# Count people by age
df.groupBy("age").count().show()
## age  count
## null 1
## 19   1
## 30   1
{% endhighlight %}
    
    
    For a complete list of the types of operations that can be performed on a DataFrame refer to the [API Documentation](api/python/pyspark.sql.html#pyspark.sql.DataFrame).
    
    
    In addition to simple column references and expressions, DataFrames also have a rich library of functions including string manipulation, date arithmetic, common math operations and more. The complete list is available in the [DataFrame Function Reference](api/python/pyspark.sql.html#module-pyspark.sql.functions).
    
</div>

<div data-lang="r"  markdown="1">

{% include_example dataframe_operations r/RSparkSQLExample.R %}

For a complete list of the types of operations that can be performed on a DataFrame refer to the [API Documentation](api/R/index.html).

In addition to simple column references and expressions, DataFrames also have a rich library of functions including string manipulation, date arithmetic, common math operations and more. The complete list is available in the [DataFrame Function Reference](api/R/SparkDataFrame.html).

</div>
</div>
    
    
    ## Running SQL Queries Programmatically
    
    <div class="codetabs">
    <div data-lang="scala"  markdown="1">
    
    The `sql` function on a `SparkSession` enables applications to run SQL queries programmatically and returns the result as a `DataFrame`.
    
    
{% highlight scala %}
val spark = ... // An existing SparkSession
val df = spark.sql("SELECT * FROM table")
{% endhighlight %}
    </div>
    
    <div data-lang="java" markdown="1">
    
    The `sql` function on a `SparkSession` enables applications to run SQL queries programmatically and returns the result as a `Dataset<Row>`.
    
    
{% highlight java %}
SparkSession spark = ...; // An existing SparkSession
Dataset<Row> df = spark.sql("SELECT * FROM table");
{% endhighlight %}
    </div>
    
    <div data-lang="python"  markdown="1">
    
    The `sql` function on a `SparkSession` enables applications to run SQL queries programmatically and returns the result as a `DataFrame`.
    
    
{% highlight python %}
# spark is an existing SparkSession
df = spark.sql("SELECT * FROM table")
{% endhighlight %}
</div>

<div data-lang="r"  markdown="1">

The `sql` function enables applications to run SQL queries programmatically and returns the result as a `SparkDataFrame`.

{% include_example sql_query r/RSparkSQLExample.R %}

</div>
</div>
    
## Creating Datasets

Datasets are similar to RDDs; however, instead of using Java serialization or Kryo they use
a specialized [Encoder](api/scala/index.html#org.apache.spark.sql.Encoder) to serialize the objects
for processing or transmitting over the network. While both encoders and standard serialization are
responsible for turning an object into bytes, encoders are code generated dynamically and use a format
that allows Spark to perform many operations like filtering, sorting and hashing without deserializing
the bytes back into an object.
    
    <div class="codetabs">
    <div data-lang="scala"  markdown="1">
    
{% highlight scala %}
// Encoders for most common types are automatically provided by importing spark.implicits._
import spark.implicits._

val ds = Seq(1, 2, 3).toDS()
ds.map(_ + 1).collect() // Returns: Array(2, 3, 4)

// Encoders are also created for case classes.
case class Person(name: String, age: Long)
val caseClassDS = Seq(Person("Andy", 32)).toDS()

// DataFrames can be converted to a Dataset by providing a class. Mapping will be done by name.
val path = "examples/src/main/resources/people.json"
val people = spark.read.json(path).as[Person]
{% endhighlight %}
    
    </div>
    
    <div data-lang="java" markdown="1">
    
    {% highlight java %}
    
    SparkSession spark = ... // An existing SparkSession
    
    // Encoders for most common types are provided in class Encoders.
    Dataset<Integer> ds = spark.createDataset(Arrays.asList(1, 2, 3), Encoders.INT());
    ds.map(new MapFunction<Integer, Integer>() {
      @Override
      public Integer call(Integer value) throws Exception {
        return value + 1;
      }
    }, Encoders.INT()); // Returns: [2, 3, 4]
    
    Person person = new Person();
    person.setName("Andy");
    person.setAge(32);
    
    // Encoders are also created for Java beans.
    Dataset<Person> ds = spark.createDataset(
      Collections.singletonList(person),
      Encoders.bean(Person.class)
    );
    
    // DataFrames can be converted to a Dataset by providing a class. Mapping will be done by name.
    String path = "examples/src/main/resources/people.json";
    Dataset<Person> people = spark.read().json(path).as(Encoders.bean(Person.class));
    
{% endhighlight %}

</div>
</div>

## Interoperating with RDDs

Spark SQL supports two different methods for converting existing RDDs into Datasets. The first
method uses reflection to infer the schema of an RDD that contains specific types of objects. This
reflection-based approach leads to more concise code and works well when you already know the schema
while writing your Spark application.

The second method for creating Datasets is through a programmatic interface that allows you to
construct a schema and then apply it to an existing RDD. While this method is more verbose, it allows
you to construct Datasets when the columns and their types are not known until runtime.
    
    ### Inferring the Schema Using Reflection
    
    <div class="codetabs">
    
    <div data-lang="scala"  markdown="1">
    
    
The Scala interface for Spark SQL supports automatically converting an RDD containing case classes
to a DataFrame. The case class
defines the schema of the table. The names of the arguments to the case class are read using
reflection and become the names of the columns. Case classes can also be nested or contain complex
types such as `Seq`s or `Array`s. This RDD can be implicitly converted to a DataFrame and then be
registered as a table. Tables can be used in subsequent SQL statements.
    
{% highlight scala %}
val spark: SparkSession // An existing SparkSession
// this is used to implicitly convert an RDD to a DataFrame.
import spark.implicits._

// Define the schema using a case class.
// Note: Case classes in Scala 2.10 can support only up to 22 fields. To work around this limit,
// you can use custom classes that implement the Product interface.
case class Person(name: String, age: Int)

// Create an RDD of Person objects and register it as a temporary view.
val people = spark.sparkContext
  .textFile("examples/src/main/resources/people.txt")
  .map(_.split(","))
  .map(p => Person(p(0), p(1).trim.toInt))
  .toDF()
people.createOrReplaceTempView("people")

// SQL statements can be run by using the sql methods provided by spark.
val teenagers = spark.sql("SELECT name, age FROM people WHERE age >= 13 AND age <= 19")

// The columns of a row in the result can be accessed by field index:
teenagers.map(t => "Name: " + t(0)).collect().foreach(println)

// or by field name:
teenagers.map(t => "Name: " + t.getAs[String]("name")).collect().foreach(println)

// There is no pre-defined encoder for Dataset[Map[K, V]], so define one explicitly.
implicit val mapEncoder = org.apache.spark.sql.Encoders.kryo[Map[String, Any]]
// row.getValuesMap[T] retrieves multiple columns at once into a Map[String, T]
teenagers.map(_.getValuesMap[Any](List("name", "age"))).collect().foreach(println)
// Map("name" -> "Justin", "age" -> 19)
{% endhighlight %}
    
    </div>
    
    <div data-lang="java"  markdown="1">
    
    
    Spark SQL supports automatically converting an RDD of
    [JavaBeans](http://stackoverflow.com/questions/3295496/what-is-a-javabean-exactly) into a DataFrame.
    The `BeanInfo`, obtained using reflection, defines the schema of the table. Currently, Spark SQL
    does not support JavaBeans that contain `Map` field(s). Nested JavaBeans and `List` or `Array`
    fields are supported though. You can create a JavaBean by creating a class that implements
    Serializable and has getters and setters for all of its fields.
    
    
{% highlight java %}
public static class Person implements Serializable {
  private String name;
  private int age;

  public String getName() {
    return name;
  }

  public void setName(String name) {
    this.name = name;
  }

  public int getAge() {
    return age;
  }

  public void setAge(int age) {
    this.age = age;
  }
}
{% endhighlight %}

A schema can be applied to an existing RDD by calling `createDataFrame` and providing the Class object
for the JavaBean.
    
{% highlight java %}
SparkSession spark = ...; // An existing SparkSession

// Load a text file and convert each line to a JavaBean.
JavaRDD<Person> people = spark.sparkContext()
  .textFile("examples/src/main/resources/people.txt", 1)
  .toJavaRDD()
  .map(new Function<String, Person>() {
    public Person call(String line) throws Exception {
      String[] parts = line.split(",");

      Person person = new Person();
      person.setName(parts[0]);
      person.setAge(Integer.parseInt(parts[1].trim()));

      return person;
    }
  });

// Apply a schema to an RDD of JavaBeans and register it as a temporary view.
Dataset<Row> schemaPeople = spark.createDataFrame(people, Person.class);
schemaPeople.createOrReplaceTempView("people");

// SQL can be run over a temporary view created using DataFrames.
Dataset<Row> teenagers = spark.sql("SELECT name FROM people WHERE age >= 13 AND age <= 19");

// The columns of a row in the result can be accessed by ordinal.
List<String> teenagerNames = teenagers.map(new MapFunction<Row, String>() {
  public String call(Row row) {
    return "Name: " + row.getString(0);
  }
}, Encoders.STRING()).collectAsList();
{% endhighlight %}

</div>
    
    <div data-lang="python"  markdown="1">
    
    
Spark SQL can convert an RDD of Row objects to a DataFrame, inferring the datatypes. Rows are constructed by passing a list of
key/value pairs as kwargs to the Row class. The keys of this list define the column names of the table,
and the types are inferred by sampling the whole dataset, similar to the inference that is performed on JSON files.
    
    
{% highlight python %}
# spark is an existing SparkSession.
from pyspark.sql import Row
sc = spark.sparkContext

# Load a text file and convert each line to a Row.
lines = sc.textFile("examples/src/main/resources/people.txt")
parts = lines.map(lambda l: l.split(","))
people = parts.map(lambda p: Row(name=p[0], age=int(p[1])))

# Infer the schema, and register the DataFrame as a temporary view.
schemaPeople = spark.createDataFrame(people)
schemaPeople.createOrReplaceTempView("people")

# SQL can be run over DataFrames that have been registered as a table.
teenagers = spark.sql("SELECT name FROM people WHERE age >= 13 AND age <= 19")

# The results of SQL queries are DataFrames; the underlying RDD supports all the normal RDD operations.
teenNames = teenagers.rdd.map(lambda p: "Name: " + p.name)
for teenName in teenNames.collect():
    print(teenName)
{% endhighlight %}
    
</div>
</div>
    
    
    ### Programmatically Specifying the Schema
    
    <div class="codetabs">
    
    <div data-lang="scala"  markdown="1">
    
    
When case classes cannot be defined ahead of time (for example,
the structure of records is encoded in a string, or a text dataset will be parsed
and fields will be projected differently for different users),
a `DataFrame` can be created programmatically with three steps.

1. Create an RDD of `Row`s from the original RDD;
2. Create the schema represented by a `StructType` matching the structure of
`Row`s in the RDD created in Step 1.
3. Apply the schema to the RDD of `Row`s via the `createDataFrame` method provided
by `SparkSession`.
    For example:
{% highlight scala %}
val spark: SparkSession // An existing SparkSession
import spark.implicits._

// Import Row.
import org.apache.spark.sql.Row

// Import Spark SQL data types
import org.apache.spark.sql.types.{StructType, StructField, StringType}

// Create an RDD
val people = spark.sparkContext.textFile("examples/src/main/resources/people.txt")

// The schema is encoded in a string
val schemaString = "name age"

// Generate the schema based on the string of schema
val schema = StructType(schemaString.split(" ").map { fieldName =>
  StructField(fieldName, StringType, true)
})

// Convert records of the RDD (people) to Rows.
val rowRDD = people.map(_.split(",")).map(p => Row(p(0), p(1).trim))

// Apply the schema to the RDD.
val peopleDataFrame = spark.createDataFrame(rowRDD, schema)

// Creates a temporary view using the DataFrame.
peopleDataFrame.createOrReplaceTempView("people")

// SQL statements can be run by using the sql methods provided by spark.
val results = spark.sql("SELECT name FROM people")

// The columns of a row in the result can be accessed by field index or by field name.
results.map(t => "Name: " + t(0)).collect().foreach(println)
{% endhighlight %}
    
    
    </div>
    
    <div data-lang="java"  markdown="1">
    
    
When JavaBean classes cannot be defined ahead of time (for example,
the structure of records is encoded in a string, or a text dataset will be parsed and
fields will be projected differently for different users),
a `Dataset<Row>` can be created programmatically with three steps.

1. Create an RDD of `Row`s from the original RDD;
2. Create the schema represented by a `StructType` matching the structure of
`Row`s in the RDD created in Step 1.
3. Apply the schema to the RDD of `Row`s via the `createDataFrame` method provided
by `SparkSession`.
    
    
    For example:
{% highlight java %}
import java.util.ArrayList;
import java.util.List;

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
// Import factory methods provided by DataTypes.
import org.apache.spark.sql.types.DataTypes;
// Import StructType and StructField
import org.apache.spark.sql.types.StructType;
import org.apache.spark.sql.types.StructField;
// Import Row.
import org.apache.spark.sql.Row;
// Import RowFactory.
import org.apache.spark.sql.RowFactory;

SparkSession spark = ...; // An existing SparkSession.
JavaSparkContext sc = new JavaSparkContext(spark.sparkContext());

// Load a text file and convert each line to a String.
JavaRDD<String> people = sc.textFile("examples/src/main/resources/people.txt");

// The schema is encoded in a string
String schemaString = "name age";

// Generate the schema based on the string of schema
List<StructField> fields = new ArrayList<>();
for (String fieldName: schemaString.split(" ")) {
  fields.add(DataTypes.createStructField(fieldName, DataTypes.StringType, true));
}
StructType schema = DataTypes.createStructType(fields);

// Convert records of the RDD (people) to Rows.
JavaRDD<Row> rowRDD = people.map(
  new Function<String, Row>() {
    public Row call(String record) throws Exception {
      String[] attributes = record.split(",");
      return RowFactory.create(attributes[0], attributes[1].trim());
    }
  });

// Apply the schema to the RDD.
Dataset<Row> peopleDataFrame = spark.createDataFrame(rowRDD, schema);

// Creates a temporary view using the DataFrame.
peopleDataFrame.createOrReplaceTempView("people");

// SQL can be run over a temporary view created using DataFrames.
Dataset<Row> results = spark.sql("SELECT name FROM people");

// The results of SQL queries are DataFrames and support all the normal RDD operations.
// The columns of a row in the result can be accessed by ordinal.
List<String> names = results.javaRDD().map(new Function<Row, String>() {
  public String call(Row row) {
    return "Name: " + row.getString(0);
  }
}).collect();
{% endhighlight %}
    
    </div>
    
    <div data-lang="python"  markdown="1">
    
    
When a dictionary of kwargs cannot be defined ahead of time (for example,
the structure of records is encoded in a string, or a text dataset will be parsed and
fields will be projected differently for different users),
a `DataFrame` can be created programmatically with three steps.

1. Create an RDD of tuples or lists from the original RDD;
2. Create the schema represented by a `StructType` matching the structure of
tuples or lists in the RDD created in Step 1.
3. Apply the schema to the RDD via the `createDataFrame` method provided by `SparkSession`.
    
    
    For example:
{% highlight python %}
# Import data types
from pyspark.sql.types import *

# spark is an existing SparkSession.
sc = spark.sparkContext

# Load a text file and convert each line to a tuple.
lines = sc.textFile("examples/src/main/resources/people.txt")
parts = lines.map(lambda l: l.split(","))
people = parts.map(lambda p: (p[0], p[1].strip()))

# The schema is encoded in a string.
schemaString = "name age"

fields = [StructField(field_name, StringType(), True) for field_name in schemaString.split()]
schema = StructType(fields)

# Apply the schema to the RDD.
schemaPeople = spark.createDataFrame(people, schema)

# Creates a temporary view using the DataFrame
schemaPeople.createOrReplaceTempView("people")

# SQL can be run over DataFrames that have been registered as a table.
results = spark.sql("SELECT name FROM people")

# The results of SQL queries are DataFrames; the underlying RDD supports all the normal RDD operations.
names = results.rdd.map(lambda p: "Name: " + p.name)
for name in names.collect():
    print(name)
{% endhighlight %}
    
    </div>
    
    </div>
    
# Data Sources

Spark SQL supports operating on a variety of data sources through the DataFrame interface.
A DataFrame can be operated on using relational transformations and can also be used to create a temporary view.
Registering a DataFrame as a temporary view allows you to run SQL queries over its data. This section
describes the general methods for loading and saving data using the Spark Data Sources and then
goes into specific options that are available for the built-in data sources.
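
For instance, a short sketch of the view-then-query flow described above (the view name `people` is just an illustrative choice):

{% highlight scala %}
val df = spark.read.json("examples/src/main/resources/people.json")

// Register the DataFrame as a temporary view so it can be referenced from SQL
df.createOrReplaceTempView("people")

// Relational transformations and SQL queries run through the same engine
val viaApi = df.filter(df("age") > 20)
val viaSql = spark.sql("SELECT * FROM people WHERE age > 20")
{% endhighlight %}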
    
    ## Generic Load/Save Functions
    
    In the simplest form, the default data source (`parquet` unless otherwise configured by
    `spark.sql.sources.default`) will be used for all operations.
    
    <div class="codetabs">
    <div data-lang="scala"  markdown="1">
    
    {% highlight scala %}
    
    val df = spark.read.load("examples/src/main/resources/users.parquet")
    
    df.select("name", "favorite_color").write.save("namesAndFavColors.parquet")
    
    {% endhighlight %}
    
    </div>
    
    <div data-lang="java"  markdown="1">
    
    {% highlight java %}
    
    
    Dataset<Row> df = spark.read().load("examples/src/main/resources/users.parquet");
    
    df.select("name", "favorite_color").write().save("namesAndFavColors.parquet");
    
    
    {% endhighlight %}
    
    </div>
    
    <div data-lang="python"  markdown="1">
    
{% highlight python %}
df = spark.read.load("examples/src/main/resources/users.parquet")
df.select("name", "favorite_color").write.save("namesAndFavColors.parquet")
{% endhighlight %}

</div>

<div data-lang="r"  markdown="1">

{% include_example source_parquet r/RSparkSQLExample.R %}

</div>
</div>
    
    ### Manually Specifying Options
    
You can also manually specify the data source that will be used along with any extra options
that you would like to pass to the data source. Data sources are specified by their fully qualified
name (i.e., `org.apache.spark.sql.parquet`), but for built-in sources you can also use their short
names (`json`, `parquet`, `jdbc`). DataFrames loaded from any data source type can be converted into other types
using this syntax.
    
    <div class="codetabs">
    <div data-lang="scala"  markdown="1">
    
    {% highlight scala %}
    
    val df = spark.read.format("json").load("examples/src/main/resources/people.json")
    
    df.select("name", "age").write.format("parquet").save("namesAndAges.parquet")
    
    {% endhighlight %}
    
    </div>
    
    <div data-lang="java"  markdown="1">
    
    {% highlight java %}
    
    
    Dataset<Row> df = spark.read().format("json").load("examples/src/main/resources/people.json");
    
    df.select("name", "age").write().format("parquet").save("namesAndAges.parquet");
    
    
    {% endhighlight %}
    
    </div>
    
    <div data-lang="python"  markdown="1">
    
{% highlight python %}
df = spark.read.load("examples/src/main/resources/people.json", format="json")
df.select("name", "age").write.save("namesAndAges.parquet", format="parquet")
{% endhighlight %}

</div>

<div data-lang="r"  markdown="1">

{% include_example source_json r/RSparkSQLExample.R %}

</div>
</div>
    
    ### Run SQL on files directly
    
Instead of using the read API to load a file into a DataFrame and querying it, you can also query that
file directly with SQL.
    
    <div class="codetabs">
    <div data-lang="scala"  markdown="1">
    
    {% highlight scala %}
    
    val df = spark.sql("SELECT * FROM parquet.`examples/src/main/resources/users.parquet`")
    
    {% endhighlight %}
    
    </div>
    
    <div data-lang="java"  markdown="1">
    
    {% highlight java %}
    
    Dataset<Row> df = spark.sql("SELECT * FROM parquet.`examples/src/main/resources/users.parquet`");
    
    {% endhighlight %}
    </div>
    
    <div data-lang="python"  markdown="1">
    
    {% highlight python %}
    
    df = spark.sql("SELECT * FROM parquet.`examples/src/main/resources/users.parquet`")
    
    {% endhighlight %}
    
    </div>
    
    <div data-lang="r"  markdown="1">
    
    
{% include_example direct_query r/RSparkSQLExample.R %}

</div>
</div>
    
    ### Save Modes
    
Save operations can optionally take a `SaveMode`, which specifies how to handle existing data if
present. It is important to realize that these save modes do not utilize any locking and are not
atomic. Additionally, when performing an `Overwrite`, the data will be deleted before the new data is
written out.

<table class="table">
<tr><th>Scala/Java</th><th>Any Language</th><th>Meaning</th></tr>
    
    <tr>
      <td><code>SaveMode.ErrorIfExists</code> (default)</td>
      <td><code>"error"</code> (default)</td>
      <td>
        When saving a DataFrame to a data source, if data already exists,
        an exception is expected to be thrown.
      </td>
    </tr>
    <tr>
      <td><code>SaveMode.Append</code></td>
      <td><code>"append"</code></td>
      <td>
        When saving a DataFrame to a data source, if data/table already exists,
        contents of the DataFrame are expected to be appended to existing data.
      </td>
    </tr>
    <tr>
      <td><code>SaveMode.Overwrite</code></td>
      <td><code>"overwrite"</code></td>
      <td>
        Overwrite mode means that when saving a DataFrame to a data source,
        if data/table already exists, existing data is expected to be overwritten by the contents of
        the DataFrame.
      </td>
    </tr>
    <tr>
      <td><code>SaveMode.Ignore</code></td>
      <td><code>"ignore"</code></td>
      <td>
        Ignore mode means that when saving a DataFrame to a data source, if data already exists,
    the save operation is expected to not save the contents of the DataFrame and to not
    change the existing data. This is similar to a <code>CREATE TABLE IF NOT EXISTS</code> in SQL.
    
      </td>
    </tr>
    </table>
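
As a rough illustration, a save mode can be set on the writer either with the enum or with its string form (the output path here is just a placeholder):

{% highlight scala %}
import org.apache.spark.sql.SaveMode

// Append to existing data using the enum form...
df.write.mode(SaveMode.Append).parquet("namesAndFavColors.parquet")

// ...or overwrite it using the equivalent string form.
df.write.mode("overwrite").parquet("namesAndFavColors.parquet")
{% endhighlight %}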
    
    ### Saving to Persistent Tables
    
    
`DataFrames` can also be saved as persistent tables into the Hive metastore using the `saveAsTable`
command. Notice that an existing Hive deployment is not necessary to use this feature. Spark will create a
default local Hive metastore (using Derby) for you. Unlike the `createOrReplaceTempView` command,
`saveAsTable` will materialize the contents of the DataFrame and create a pointer to the data in the
Hive metastore. Persistent tables will still exist even after your Spark program has restarted, as
long as you maintain your connection to the same metastore. A DataFrame for a persistent table can
be created by calling the `table` method on a `SparkSession` with the name of the table.

By default `saveAsTable` will create a "managed table", meaning that the location of the data will
be controlled by the metastore. Managed tables will also have their data deleted automatically
when a table is dropped.
    
## Parquet Files

[Parquet](http://parquet.io) is a columnar format that is supported by many other data processing systems.
Spark SQL provides support for both reading and writing Parquet files that automatically preserves the schema
of the original data. When writing Parquet files, all columns are automatically converted to be nullable for
compatibility reasons.
    
    ### Loading Data Programmatically
    
    Using the data from the above example:
    
    <div class="codetabs">
    
    <div data-lang="scala"  markdown="1">