---
layout: global
displayTitle: Spark SQL, DataFrames and Datasets Guide
title: Spark SQL and DataFrames
---

* This will become a table of contents (this text will be scraped).
{:toc}
# Overview
Spark SQL is a Spark module for structured data processing. Unlike the basic Spark RDD API, the interfaces provided by Spark SQL provide Spark with more information about the structure of both the data and the computation being performed. Internally, Spark SQL uses this extra information to perform extra optimizations. There are several ways to interact with Spark SQL, including SQL, the DataFrames API, and the Datasets API. When computing a result, the same execution engine is used, independent of which API/language you are using to express the computation. This unification means that developers can easily switch back and forth between the various APIs based on which provides the most natural way to express a given transformation.
All of the examples on this page use sample data included in the Spark distribution and can be run in the `spark-shell`, `pyspark` shell, or `sparkR` shell.
## SQL
One use of Spark SQL is to execute SQL queries written using either a basic SQL syntax or HiveQL. Spark SQL can also be used to read data from an existing Hive installation. For more on how to configure this feature, please refer to the Hive Tables section. When running SQL from within another programming language, the results will be returned as a DataFrame. You can also interact with the SQL interface using the command line or over JDBC/ODBC.
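As a minimal sketch of this (assuming an existing `sqlContext` and a `DataFrame` named `df`; the table and column names below are hypothetical), a query issued from Scala comes back as a `DataFrame`:

{% highlight scala %}
// df is an existing DataFrame; "people", "name", and "age" are illustrative names.
df.registerTempTable("people")

// The result of a SQL query is itself a DataFrame.
val adults = sqlContext.sql("SELECT name FROM people WHERE age >= 18")
adults.show()
{% endhighlight %}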
## DataFrames
A DataFrame is a distributed collection of data organized into named columns. It is conceptually equivalent to a table in a relational database or a data frame in R/Python, but with richer optimizations under the hood. DataFrames can be constructed from a wide array of sources such as: structured data files, tables in Hive, external databases, or existing RDDs.
The DataFrame API is available in Scala, Java, Python, and R.
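For illustration only, here is one hedged sketch of building a `DataFrame` from an existing RDD in Scala; the `Person` case class and sample data are assumptions, and the Creating DataFrames section below covers the supported approaches in more detail:

{% highlight scala %}
import sqlContext.implicits._

// Hypothetical schema: the case class supplies the column names and types.
case class Person(name: String, age: Int)

// Convert an existing RDD of case classes into a DataFrame with named columns.
val peopleDF = sc.parallelize(Seq(Person("Alice", 29), Person("Bob", 31))).toDF()
peopleDF.printSchema()
{% endhighlight %}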
## Datasets
A Dataset is a new experimental interface added in Spark 1.6 that tries to provide the benefits of RDDs (strong typing, ability to use powerful lambda functions) with the benefits of Spark SQL's optimized execution engine. A Dataset can be constructed from JVM objects and then manipulated using functional transformations (map, flatMap, filter, etc.).
The unified Dataset API can be used both in Scala and Java. Python does not yet have support for the Dataset API, but due to its dynamic nature many of the benefits are already available (i.e. you can access the field of a row by name naturally, `row.columnName`). Full Python support will be added in a future release.
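As a small sketch in Scala (assuming the `import sqlContext.implicits._` shown later in this guide), a Dataset can be created from a local collection and manipulated with typed functional transformations:

{% highlight scala %}
import sqlContext.implicits._

// Create a Dataset[Int] from a local collection.
val ds = Seq(1, 2, 3).toDS()

// map is a typed, functional transformation; collect returns Array(2, 3, 4).
ds.map(_ + 1).collect()
{% endhighlight %}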
# Getting Started

## Starting Point: SQLContext
The entry point into all functionality in Spark SQL is the `SQLContext` class, or one of its descendants. To create a basic `SQLContext`, all you need is a `SparkContext`.
{% highlight scala %}
val sc: SparkContext // An existing SparkContext.
val sqlContext = new org.apache.spark.sql.SQLContext(sc)

// this is used to implicitly convert an RDD to a DataFrame.
import sqlContext.implicits._
{% endhighlight %}
The entry point into all functionality in Spark SQL is the `SQLContext` class, or one of its descendants. To create a basic `SQLContext`, all you need is a `SparkContext`.
{% highlight java %}
JavaSparkContext sc = ...; // An existing JavaSparkContext.
SQLContext sqlContext = new org.apache.spark.sql.SQLContext(sc);
{% endhighlight %}
The entry point into all relational functionality in Spark is the `SQLContext` class, or one of its descendants. To create a basic `SQLContext`, all you need is a `SparkContext`.
{% highlight python %}
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
{% endhighlight %}
The entry point into all relational functionality in Spark is the `SQLContext` class, or one of its descendants. To create a basic `SQLContext`, all you need is a `SparkContext`.
{% highlight r %}
sqlContext <- sparkRSQL.init(sc)
{% endhighlight %}
In addition to the basic `SQLContext`, you can also create a `HiveContext`, which provides a superset of the functionality provided by the basic `SQLContext`. Additional features include the ability to write queries using the more complete HiveQL parser, access to Hive UDFs, and the ability to read data from Hive tables. To use a `HiveContext`, you do not need to have an existing Hive setup, and all of the data sources available to a `SQLContext` are still available. `HiveContext` is only packaged separately to avoid including all of Hive's dependencies in the default Spark build. If these dependencies are not a problem for your application, then using `HiveContext` is recommended for the 1.3 release of Spark. Future releases will focus on bringing `SQLContext` up to feature parity with a `HiveContext`.
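Creating a `HiveContext` looks much like creating a `SQLContext`; the sketch below assumes an existing `SparkContext` and a Spark build that includes Hive:

{% highlight scala %}
// sc is an existing SparkContext; requires a Spark build with Hive support.
val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
{% endhighlight %}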
The specific variant of SQL that is used to parse queries can also be selected using the `spark.sql.dialect` option. This parameter can be changed using either the `setConf` method on a `SQLContext` or a `SET key=value` command in SQL. For a `SQLContext`, the only dialect available is "sql", which uses a simple SQL parser provided by Spark SQL. In a `HiveContext`, the default is "hiveql", though "sql" is also available. Since the HiveQL parser is much more complete, this is recommended for most use cases.
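For example (a sketch only, assuming an existing `sqlContext`), the dialect can be selected either programmatically or from SQL itself:

{% highlight scala %}
// Programmatically, via setConf on a SQLContext or HiveContext.
sqlContext.setConf("spark.sql.dialect", "sql")

// Or from SQL, using a SET key=value command.
sqlContext.sql("SET spark.sql.dialect=sql")
{% endhighlight %}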
## Creating DataFrames
With a `SQLContext`, applications can create `DataFrame`s from an existing `RDD`, from a Hive table, or from data sources.
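As one example of the data sources path, a `DataFrame` can be created by reading a JSON file; the sketch below assumes the `people.json` sample file that ships with the Spark distribution:

{% highlight scala %}
// sqlContext is an existing SQLContext.
val df = sqlContext.read.json("examples/src/main/resources/people.json")

// Displays the content of the DataFrame to stdout.
df.show()
{% endhighlight %}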