---
layout: global
displayTitle: Spark SQL and DataFrame Guide
title: Spark SQL and DataFrames
---

* This will become a table of contents (this text will be scraped).
{:toc}
# Overview
Spark SQL is a Spark module for structured data processing. It provides a programming abstraction called DataFrames and can also act as a distributed SQL query engine.

Spark SQL can also be used to read data from an existing Hive installation. For more on how to configure this feature, please refer to the Hive Tables section.
# DataFrames
A DataFrame is a distributed collection of data organized into named columns. It is conceptually equivalent to a table in a relational database or a data frame in R/Python, but with richer optimizations under the hood. DataFrames can be constructed from a wide array of sources such as: structured data files, tables in Hive, external databases, or existing RDDs.
The DataFrame API is available in Scala, Java, Python, and R.
All of the examples on this page use sample data included in the Spark distribution and can be run in the `spark-shell`, `pyspark` shell, or `sparkR` shell.
## Starting Point: SQLContext

The entry point into all functionality in Spark SQL is the `SQLContext` class, or one of its
descendants. To create a basic `SQLContext`, all you need is a SparkContext.

{% highlight scala %}
val sc: SparkContext // An existing SparkContext.
val sqlContext = new org.apache.spark.sql.SQLContext(sc)

// this is used to implicitly convert an RDD to a DataFrame.
import sqlContext.implicits._
{% endhighlight %}

{% highlight java %}
JavaSparkContext sc = ...; // An existing JavaSparkContext.
SQLContext sqlContext = new org.apache.spark.sql.SQLContext(sc);
{% endhighlight %}

{% highlight python %}
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
{% endhighlight %}

{% highlight r %}
sqlContext <- sparkRSQL.init(sc)
{% endhighlight %}
In addition to the basic `SQLContext`, you can also create a `HiveContext`, which provides a
superset of the functionality provided by the basic `SQLContext`. Additional features include
the ability to write queries using the more complete HiveQL parser, access to Hive UDFs, and the
ability to read data from Hive tables. To use a `HiveContext`, you do not need to have an
existing Hive setup, and all of the data sources available to a `SQLContext` are still available.
`HiveContext` is only packaged separately to avoid including all of Hive's dependencies in the default
Spark build. If these dependencies are not a problem for your application then using `HiveContext`
is recommended for the 1.3 release of Spark. Future releases will focus on bringing `SQLContext`
up to feature parity with a `HiveContext`.
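For instance, a `HiveContext` is constructed the same way as a `SQLContext` (a minimal sketch, assuming the existing SparkContext `sc` from above):

{% highlight scala %}
// A HiveContext does not require an existing Hive installation.
val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
{% endhighlight %}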
The specific variant of SQL that is used to parse queries can also be selected using the
`spark.sql.dialect` option. This parameter can be changed using either the `setConf` method on
a `SQLContext` or by using a `SET key=value` command in SQL. For a `SQLContext`, the only dialect
available is "sql" which uses a simple SQL parser provided by Spark SQL. In a `HiveContext`, the
default is "hiveql", though "sql" is also available. Since the HiveQL parser is much more complete,
this is recommended for most use cases.
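As a sketch of the two mechanisms just described (assuming an existing `sqlContext`):

{% highlight scala %}
// Programmatically, via setConf:
sqlContext.setConf("spark.sql.dialect", "sql")

// Or from SQL, via a SET command (here assuming a HiveContext, where "hiveql" is valid):
sqlContext.sql("SET spark.sql.dialect=hiveql")
{% endhighlight %}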
## Creating DataFrames

With a `SQLContext`, applications can create `DataFrame`s from an existing `RDD`, from a Hive table, or from data sources.

As an example, the following creates a `DataFrame` based on the content of a JSON file:

{% highlight scala %}
val df = sqlContext.read.json("examples/src/main/resources/people.json")

// Displays the content of the DataFrame to stdout
df.show()
{% endhighlight %}

{% highlight java %}
DataFrame df = sqlContext.read().json("examples/src/main/resources/people.json");

// Displays the content of the DataFrame to stdout
df.show();
{% endhighlight %}

{% highlight python %}
df = sqlContext.read.json("examples/src/main/resources/people.json")

# Displays the content of the DataFrame to stdout
df.show()
{% endhighlight %}

{% highlight r %}
df <- jsonFile(sqlContext, "examples/src/main/resources/people.json")

# Displays the content of the DataFrame to stdout
showDF(df)
{% endhighlight %}
## DataFrame Operations

DataFrames provide a domain-specific language for structured data manipulation in Scala, Java, and Python.

Here we include some basic examples of structured data processing using DataFrames:
{% highlight scala %}
// Create the DataFrame
val df = sqlContext.read.json("examples/src/main/resources/people.json")

// Show the content of the DataFrame
df.show()
// age  name
// null Michael
// 30   Andy
// 19   Justin

// Print the schema in a tree format
df.printSchema()
// root
// |-- age: long (nullable = true)
// |-- name: string (nullable = true)

// Select only the "name" column
df.select("name").show()
// name
// Michael
// Andy
// Justin

// Select everybody, but increment the age by 1
df.select(df("name"), df("age") + 1).show()
// name    (age + 1)
// Michael null
// Andy    31
// Justin  20

// Select people older than 21
df.filter(df("age") > 21).show()
// age name
// 30  Andy

// Count people by age
df.groupBy("age").count().show()
// age  count
// null 1
// 19   1
// 30   1
{% endhighlight %}
{% highlight java %}
// Create the DataFrame
DataFrame df = sqlContext.read().json("examples/src/main/resources/people.json");

// Show the content of the DataFrame
df.show();
// age  name
// null Michael
// 30   Andy
// 19   Justin

// Print the schema in a tree format
df.printSchema();
// root
// |-- age: long (nullable = true)
// |-- name: string (nullable = true)

// Select only the "name" column
df.select("name").show();
// name
// Michael
// Andy
// Justin

// Select everybody, but increment the age by 1
df.select(df.col("name"), df.col("age").plus(1)).show();
// name    (age + 1)
// Michael null
// Andy    31
// Justin  20

// Select people older than 21
df.filter(df.col("age").gt(21)).show();
// age name
// 30  Andy

// Count people by age
df.groupBy("age").count().show();
// age  count
// null 1
// 19   1
// 30   1
{% endhighlight %}
{% highlight python %}
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)

# Create the DataFrame
df = sqlContext.read.json("examples/src/main/resources/people.json")

# Show the content of the DataFrame
df.show()
## age  name
## null Michael
## 30   Andy
## 19   Justin

# Print the schema in a tree format
df.printSchema()
## root
## |-- age: long (nullable = true)
## |-- name: string (nullable = true)

# Select only the "name" column
df.select("name").show()
## name
## Michael
## Andy
## Justin

# Select everybody, but increment the age by 1
df.select(df['name'], df['age'] + 1).show()
## name    (age + 1)
## Michael null
## Andy    31
## Justin  20

# Select people older than 21
df.filter(df['age'] > 21).show()
## age name
## 30  Andy

# Count people by age
df.groupBy("age").count().show()
## age  count
## null 1
## 19   1
## 30   1
{% endhighlight %}
{% highlight r %}
# Create the DataFrame
df <- jsonFile(sqlContext, "examples/src/main/resources/people.json")

# Show the content of the DataFrame
showDF(df)
## age  name
## null Michael
## 30   Andy
## 19   Justin

# Print the schema in a tree format
printSchema(df)
## root
## |-- age: long (nullable = true)
## |-- name: string (nullable = true)

# Select only the "name" column
showDF(select(df, "name"))
## name
## Michael
## Andy
## Justin

# Select everybody, but increment the age by 1
showDF(select(df, df$name, df$age + 1))
## name    (age + 1)
## Michael null
## Andy    31
## Justin  20

# Select people older than 21
showDF(where(df, df$age > 21))
## age name
## 30  Andy

# Count people by age
showDF(count(groupBy(df, "age")))
## age  count
## null 1
## 19   1
## 30   1
{% endhighlight %}
For a complete list of the types of operations that can be performed on a DataFrame, refer to the API Documentation.

In addition to simple column references and expressions, DataFrames also have a rich library of functions including string manipulation, date arithmetic, common math operations and more. The complete list is available in the DataFrame Function Reference.
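As a small sketch of that function library (reusing the `df` from the examples above; `upper` is one of the built-ins in `org.apache.spark.sql.functions`):

{% highlight scala %}
import org.apache.spark.sql.functions._

// Combine a built-in string function with simple column arithmetic.
df.select(upper(df("name")), df("age") + 1).show()
{% endhighlight %}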
## Running SQL Queries Programmatically

The `sql` function on a `SQLContext` enables applications to run SQL queries programmatically and returns the result as a `DataFrame`.
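For example (a minimal sketch; the `people` temporary table is registered in the sections that follow):

{% highlight scala %}
val sqlContext = ... // An existing SQLContext
val df = sqlContext.sql("SELECT * FROM people")
{% endhighlight %}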
## Interoperating with RDDs
Spark SQL supports two different methods for converting existing RDDs into DataFrames. The first method uses reflection to infer the schema of an RDD that contains specific types of objects. This reflection based approach leads to more concise code and works well when you already know the schema while writing your Spark application.
The second method for creating DataFrames is through a programmatic interface that allows you to construct a schema and then apply it to an existing RDD. While this method is more verbose, it allows you to construct DataFrames when the columns and their types are not known until runtime.
### Inferring the Schema Using Reflection
The Scala interface for Spark SQL supports automatically converting an RDD containing case classes to a DataFrame. The case class defines the schema of the table. The names of the arguments to the case class are read using reflection and become the names of the columns. Case classes can also be nested or contain complex types such as Sequences or Arrays. This RDD can be implicitly converted to a DataFrame and then be registered as a table. Tables can be used in subsequent SQL statements.
{% highlight scala %}
// sc is an existing SparkContext.
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
// this is used to implicitly convert an RDD to a DataFrame.
import sqlContext.implicits._

// Define the schema using a case class.
// Note: Case classes in Scala 2.10 can support only up to 22 fields. To work around this limit,
// you can use custom classes that implement the Product interface.
case class Person(name: String, age: Int)

// Create an RDD of Person objects and register it as a table.
val people = sc.textFile("examples/src/main/resources/people.txt").map(_.split(",")).map(p => Person(p(0), p(1).trim.toInt)).toDF()
people.registerTempTable("people")

// SQL statements can be run by using the sql methods provided by sqlContext.
val teenagers = sqlContext.sql("SELECT name, age FROM people WHERE age >= 13 AND age <= 19")

// The results of SQL queries are DataFrames and support all the normal RDD operations.
// The columns of a row in the result can be accessed by field index:
teenagers.map(t => "Name: " + t(0)).collect().foreach(println)

// or by field name:
teenagers.map(t => "Name: " + t.getAs[String]("name")).collect().foreach(println)

// row.getValuesMap[T] retrieves multiple columns at once into a Map[String, T]
teenagers.map(_.getValuesMap[Any](List("name", "age"))).collect().foreach(println)
// Map("name" -> "Justin", "age" -> 19)
{% endhighlight %}
Spark SQL supports automatically converting an RDD of JavaBeans into a DataFrame. The BeanInfo, obtained using reflection, defines the schema of the table. Currently, Spark SQL does not support JavaBeans that contain nested fields or complex types such as Lists or Arrays. You can create a JavaBean by creating a class that implements Serializable and has getters and setters for all of its fields.
{% highlight java %}
public static class Person implements Serializable {
  private String name;
  private int age;

  public String getName() { return name; }
  public void setName(String name) { this.name = name; }

  public int getAge() { return age; }
  public void setAge(int age) { this.age = age; }
}
{% endhighlight %}
A schema can be applied to an existing RDD by calling `createDataFrame` and providing the Class object
for the JavaBean.
{% highlight java %}
// sc is an existing JavaSparkContext.
SQLContext sqlContext = new org.apache.spark.sql.SQLContext(sc);

// Load a text file and convert each line to a JavaBean.
JavaRDD<Person> people = sc.textFile("examples/src/main/resources/people.txt").map(
  new Function<String, Person>() {
    public Person call(String line) throws Exception {
      String[] parts = line.split(",");

      Person person = new Person();
      person.setName(parts[0]);
      person.setAge(Integer.parseInt(parts[1].trim()));

      return person;
    }
  });

// Apply a schema to an RDD of JavaBeans and register it as a table.
DataFrame schemaPeople = sqlContext.createDataFrame(people, Person.class);
schemaPeople.registerTempTable("people");

// SQL can be run over RDDs that have been registered as tables.
DataFrame teenagers = sqlContext.sql("SELECT name FROM people WHERE age >= 13 AND age <= 19");

// The results of SQL queries are DataFrames and support all the normal RDD operations.
// The columns of a row in the result can be accessed by ordinal.
List<String> teenagerNames = teenagers.javaRDD().map(new Function<Row, String>() {
  public String call(Row row) {
    return "Name: " + row.getString(0);
  }
}).collect();
{% endhighlight %}
Spark SQL can convert an RDD of Row objects to a DataFrame, inferring the datatypes. Rows are constructed by passing a list of key/value pairs as kwargs to the Row class. The keys of this list define the column names of the table, and the types are inferred by looking at the first row. Since we currently only look at the first row, it is important that there is no missing data in the first row of the RDD. In future versions we plan to more completely infer the schema by looking at more data, similar to the inference that is performed on JSON files.
{% highlight python %}
# sc is an existing SparkContext.
from pyspark.sql import SQLContext, Row
sqlContext = SQLContext(sc)

# Load a text file and convert each line to a Row.
lines = sc.textFile("examples/src/main/resources/people.txt")
parts = lines.map(lambda l: l.split(","))
people = parts.map(lambda p: Row(name=p[0], age=int(p[1])))

# Infer the schema, and register the DataFrame as a table.
schemaPeople = sqlContext.createDataFrame(people)
schemaPeople.registerTempTable("people")

# SQL can be run over DataFrames that have been registered as a table.
teenagers = sqlContext.sql("SELECT name FROM people WHERE age >= 13 AND age <= 19")

# The results of SQL queries are RDDs and support all the normal RDD operations.
teenNames = teenagers.map(lambda p: "Name: " + p.name)
for teenName in teenNames.collect():
  print(teenName)
{% endhighlight %}
### Programmatically Specifying the Schema

When case classes cannot be defined ahead of time (for example,
the structure of records is encoded in a string, or a text dataset will be parsed
and fields will be projected differently for different users),
a `DataFrame` can be created programmatically with three steps.

1. Create an RDD of `Row`s from the original RDD;
2. Create the schema represented by a `StructType` matching the structure of
`Row`s in the RDD created in Step 1.
3. Apply the schema to the RDD of `Row`s via `createDataFrame` method provided
by `SQLContext`.
For example:

{% highlight scala %}
// sc is an existing SparkContext.
val sqlContext = new org.apache.spark.sql.SQLContext(sc)

// Create an RDD
val people = sc.textFile("examples/src/main/resources/people.txt")

// The schema is encoded in a string
val schemaString = "name age"

// Import Row.
import org.apache.spark.sql.Row;

// Import Spark SQL data types
import org.apache.spark.sql.types.{StructType,StructField,StringType};

// Generate the schema based on the string of schema
val schema =
  StructType(
    schemaString.split(" ").map(fieldName => StructField(fieldName, StringType, true)))

// Convert records of the RDD (people) to Rows.
val rowRDD = people.map(_.split(",")).map(p => Row(p(0), p(1).trim))

// Apply the schema to the RDD.
val peopleDataFrame = sqlContext.createDataFrame(rowRDD, schema)

// Register the DataFrames as a table.
peopleDataFrame.registerTempTable("people")

// SQL statements can be run by using the sql methods provided by sqlContext.
val results = sqlContext.sql("SELECT name FROM people")

// The results of SQL queries are DataFrames and support all the normal RDD operations.
// The columns of a row in the result can be accessed by field index or by field name.
results.map(t => "Name: " + t(0)).collect().foreach(println)
{% endhighlight %}
When JavaBean classes cannot be defined ahead of time (for example,
the structure of records is encoded in a string, or a text dataset will be parsed and
fields will be projected differently for different users),
a `DataFrame` can be created programmatically with three steps.

1. Create an RDD of `Row`s from the original RDD;
2. Create the schema represented by a `StructType` matching the structure of
`Row`s in the RDD created in Step 1.
3. Apply the schema to the RDD of `Row`s via `createDataFrame` method provided
by `SQLContext`.
For example:

{% highlight java %}
import org.apache.spark.api.java.function.Function;
// Import factory methods provided by DataTypes.
import org.apache.spark.sql.types.DataTypes;
// Import StructType and StructField
import org.apache.spark.sql.types.StructType;
import org.apache.spark.sql.types.StructField;
// Import Row.
import org.apache.spark.sql.Row;
// Import RowFactory.
import org.apache.spark.sql.RowFactory;

// sc is an existing JavaSparkContext.
SQLContext sqlContext = new org.apache.spark.sql.SQLContext(sc);

// Load a text file.
JavaRDD<String> people = sc.textFile("examples/src/main/resources/people.txt");

// The schema is encoded in a string
String schemaString = "name age";

// Generate the schema based on the string of schema
List<StructField> fields = new ArrayList<StructField>();
for (String fieldName: schemaString.split(" ")) {
  fields.add(DataTypes.createStructField(fieldName, DataTypes.StringType, true));
}
StructType schema = DataTypes.createStructType(fields);

// Convert records of the RDD (people) to Rows.
JavaRDD<Row> rowRDD = people.map(
  new Function<String, Row>() {
    public Row call(String record) throws Exception {
      String[] fields = record.split(",");
      return RowFactory.create(fields[0], fields[1].trim());
    }
  });

// Apply the schema to the RDD.
DataFrame peopleDataFrame = sqlContext.createDataFrame(rowRDD, schema);

// Register the DataFrame as a table.
peopleDataFrame.registerTempTable("people");

// SQL can be run over RDDs that have been registered as tables.
DataFrame results = sqlContext.sql("SELECT name FROM people");

// The results of SQL queries are DataFrames and support all the normal RDD operations.
// The columns of a row in the result can be accessed by ordinal.
List<String> names = results.javaRDD().map(new Function<Row, String>() {
  public String call(Row row) {
    return "Name: " + row.getString(0);
  }
}).collect();
{% endhighlight %}
When a dictionary of kwargs cannot be defined ahead of time (for example,
the structure of records is encoded in a string, or a text dataset will be parsed and
fields will be projected differently for different users),
a `DataFrame` can be created programmatically with three steps.

1. Create an RDD of tuples or lists from the original RDD;
2. Create the schema represented by a `StructType` matching the structure of
tuples or lists in the RDD created in Step 1.
3. Apply the schema to the RDD via `createDataFrame` method provided by `SQLContext`.
For example:

{% highlight python %}
# Import SQLContext and data types
from pyspark.sql import SQLContext
from pyspark.sql.types import *

# sc is an existing SparkContext.
sqlContext = SQLContext(sc)

# Load a text file and convert each line to a tuple.
lines = sc.textFile("examples/src/main/resources/people.txt")
parts = lines.map(lambda l: l.split(","))
people = parts.map(lambda p: (p[0], p[1].strip()))

# The schema is encoded in a string.
schemaString = "name age"

fields = [StructField(field_name, StringType(), True) for field_name in schemaString.split()]
schema = StructType(fields)

# Apply the schema to the RDD.
schemaPeople = sqlContext.createDataFrame(people, schema)

# Register the DataFrame as a table.
schemaPeople.registerTempTable("people")

# SQL can be run over DataFrames that have been registered as a table.
results = sqlContext.sql("SELECT name FROM people")

# The results of SQL queries are RDDs and support all the normal RDD operations.
names = results.map(lambda p: "Name: " + p.name)
for name in names.collect():
  print(name)
{% endhighlight %}
# Data Sources

Spark SQL supports operating on a variety of data sources through the `DataFrame` interface.
A DataFrame can be operated on as a normal RDD and can also be registered as a temporary table.
Registering a DataFrame as a table allows you to run SQL queries over its data. This section
describes the general methods for loading and saving data using the Spark Data Sources and then
goes into specific options that are available for the built-in data sources.
## Generic Load/Save Functions

In the simplest form, the default data source (`parquet` unless otherwise configured by
`spark.sql.sources.default`) will be used for all operations.
{% highlight scala %}
val df = sqlContext.read.load("examples/src/main/resources/users.parquet")
df.select("name", "favorite_color").write.save("namesAndFavColors.parquet")
{% endhighlight %}
{% highlight java %}
DataFrame df = sqlContext.read().load("examples/src/main/resources/users.parquet");
df.select("name", "favorite_color").write().save("namesAndFavColors.parquet");
{% endhighlight %}
{% highlight python %}
df = sqlContext.read.load("examples/src/main/resources/users.parquet") df.select("name", "favorite_color").write.save("namesAndFavColors.parquet")
{% endhighlight %}
{% highlight r %}
df <- loadDF(sqlContext, "people.parquet")
saveDF(select(df, "name", "age"), "namesAndAges.parquet")
{% endhighlight %}
### Manually Specifying Options

You can also manually specify the data source that will be used along with any extra options
that you would like to pass to the data source. Data sources are specified by their fully qualified
name (i.e., `org.apache.spark.sql.parquet`), but for built-in sources you can also use their short
names (`json`, `parquet`, `jdbc`). DataFrames of any type can be converted into other types
using this syntax.
{% highlight scala %}
val df = sqlContext.read.format("json").load("examples/src/main/resources/people.json")
df.select("name", "age").write.format("parquet").save("namesAndAges.parquet")
{% endhighlight %}
{% highlight java %}
DataFrame df = sqlContext.read().format("json").load("examples/src/main/resources/people.json");
df.select("name", "age").write().format("parquet").save("namesAndAges.parquet");
{% endhighlight %}
{% highlight python %}
df = sqlContext.read.load("examples/src/main/resources/people.json", format="json") df.select("name", "age").write.save("namesAndAges.parquet", format="parquet")
{% endhighlight %}
{% highlight r %}
df <- loadDF(sqlContext, "people.json", "json") saveDF(select(df, "name", "age"), "namesAndAges.parquet", "parquet")
{% endhighlight %}
### Save Modes

Save operations can optionally take a `SaveMode`, which specifies how to handle existing data if
present. It is important to realize that these save modes do not utilize any locking and are not
atomic. Additionally, when performing an `Overwrite`, the data will be deleted before writing out the
new data.
| Scala/Java | Any Language | Meaning |
|---|---|---|
| `SaveMode.ErrorIfExists` (default) | `"error"` (default) | When saving a DataFrame to a data source, if data already exists, an exception is expected to be thrown. |
| `SaveMode.Append` | `"append"` | When saving a DataFrame to a data source, if data/table already exists, contents of the DataFrame are expected to be appended to existing data. |
| `SaveMode.Overwrite` | `"overwrite"` | Overwrite mode means that when saving a DataFrame to a data source, if data/table already exists, existing data is expected to be overwritten by the contents of the DataFrame. |
| `SaveMode.Ignore` | `"ignore"` | Ignore mode means that when saving a DataFrame to a data source, if data already exists, the save operation is expected to not save the contents of the DataFrame and to not change the existing data. This is similar to a `CREATE TABLE IF NOT EXISTS` in SQL. |
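For example, the mode can be set on a `DataFrameWriter` (a minimal sketch reusing the `df` from the load/save examples above):

{% highlight scala %}
import org.apache.spark.sql.SaveMode

// Replace any existing data at the target path instead of throwing an error.
df.write.mode(SaveMode.Overwrite).save("namesAndFavColors.parquet")
{% endhighlight %}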
### Saving to Persistent Tables

When working with a `HiveContext`, `DataFrame`s can also be saved as persistent tables using the
`saveAsTable` command. Unlike the `registerTempTable` command, `saveAsTable` will materialize the
contents of the DataFrame and create a pointer to the data in the HiveMetastore. Persistent tables
will still exist even after your Spark program has restarted, as long as you maintain your connection
to the same metastore. A DataFrame for a persistent table can be created by calling the `table`
method on a `SQLContext` with the name of the table.
By default `saveAsTable` will create a "managed table", meaning that the location of the data will
be controlled by the metastore. Managed tables will also have their data deleted automatically
when a table is dropped.
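A minimal sketch of the round trip (assuming `sqlContext` is a `HiveContext` and `df` is an existing DataFrame; the table name is illustrative):

{% highlight scala %}
// Materialize the DataFrame as a persistent table in the Hive metastore.
df.write.saveAsTable("people_table")

// Later, possibly from a different Spark program, look it up by name.
val people = sqlContext.table("people_table")
{% endhighlight %}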
## Parquet Files
Parquet is a columnar format that is supported by many other data processing systems. Spark SQL provides support for both reading and writing Parquet files that automatically preserves the schema of the original data.
### Loading Data Programmatically
Using the data from the above example:
{% highlight scala %}
// sqlContext from the previous example is used in this example.
// This is used to implicitly convert an RDD to a DataFrame.
import sqlContext.implicits._

val people: RDD[Person] = ... // An RDD of case class objects, from the previous example.

// The RDD is implicitly converted to a DataFrame by implicits, allowing it to be stored using Parquet.
people.write.parquet("people.parquet")

// Read in the parquet file created above. Parquet files are self-describing so the schema is preserved.
// The result of loading a Parquet file is also a DataFrame.
val parquetFile = sqlContext.read.parquet("people.parquet")

// Parquet files can also be registered as tables and then used in SQL statements.
parquetFile.registerTempTable("parquetFile")
val teenagers = sqlContext.sql("SELECT name FROM parquetFile WHERE age >= 13 AND age <= 19")
teenagers.map(t => "Name: " + t(0)).collect().foreach(println)
{% endhighlight %}
{% highlight java %}
// sqlContext from the previous example is used in this example.

DataFrame schemaPeople = ... // The DataFrame from the previous example.

// DataFrames can be saved as Parquet files, maintaining the schema information.
schemaPeople.write().parquet("people.parquet");

// Read in the Parquet file created above. Parquet files are self-describing so the schema is preserved.
// The result of loading a parquet file is also a DataFrame.
DataFrame parquetFile = sqlContext.read().parquet("people.parquet");

// Parquet files can also be registered as tables and then used in SQL statements.
parquetFile.registerTempTable("parquetFile");
DataFrame teenagers = sqlContext.sql("SELECT name FROM parquetFile WHERE age >= 13 AND age <= 19");
List<String> teenagerNames = teenagers.javaRDD().map(new Function<Row, String>() {
  public String call(Row row) {
    return "Name: " + row.getString(0);
  }
}).collect();
{% endhighlight %}
{% highlight python %}
# sqlContext from the previous example is used in this example.

schemaPeople # The DataFrame from the previous example.

# DataFrames can be saved as Parquet files, maintaining the schema information.
schemaPeople.write.parquet("people.parquet")

# Read in the Parquet file created above. Parquet files are self-describing so the schema is preserved.
# The result of loading a parquet file is also a DataFrame.
parquetFile = sqlContext.read.parquet("people.parquet")

# Parquet files can also be registered as tables and then used in SQL statements.
parquetFile.registerTempTable("parquetFile")
teenagers = sqlContext.sql("SELECT name FROM parquetFile WHERE age >= 13 AND age <= 19")
teenNames = teenagers.map(lambda p: "Name: " + p.name)
for teenName in teenNames.collect():
  print(teenName)
{% endhighlight %}
{% highlight r %}
# sqlContext from the previous example is used in this example.

schemaPeople # The DataFrame from the previous example.

# DataFrames can be saved as Parquet files, maintaining the schema information.
saveAsParquetFile(schemaPeople, "people.parquet")

# Read in the Parquet file created above. Parquet files are self-describing so the schema is preserved.
# The result of loading a parquet file is also a DataFrame.
parquetFile <- parquetFile(sqlContext, "people.parquet")

# Parquet files can also be registered as tables and then used in SQL statements.
registerTempTable(parquetFile, "parquetFile")
teenagers <- sql(sqlContext, "SELECT name FROM parquetFile WHERE age >= 13 AND age <= 19")
teenNames <- map(teenagers, function(p) { paste("Name:", p$name)})
for (teenName in collect(teenNames)) {
  cat(teenName, "\n")
}
{% endhighlight %}
Spark SQL caches Parquet metadata for better performance. If Parquet tables are updated by Hive or
other external tools, the cached metadata must be refreshed manually to keep it consistent:

{% highlight python %}
# sqlContext is an existing HiveContext
sqlContext.sql("REFRESH TABLE my_table")
{% endhighlight %}
Parquet data sources can also be used directly from SQL by creating a temporary table:

{% highlight sql %}
CREATE TEMPORARY TABLE parquetTable
USING org.apache.spark.sql.parquet
OPTIONS (
  path "examples/src/main/resources/people.parquet"
)

SELECT * FROM parquetTable
{% endhighlight %}