Commit aab0a1dd authored by Grega Kespret, committed by Michael Armbrust

Update docs to use jsonRDD instead of wrong jsonRdd.


Author: Grega Kespret <grega.kespret@gmail.com>

Closes #2479 from gregakespret/patch-1 and squashes the following commits:

dd6b90a [Grega Kespret] Update docs to use jsonRDD instead of wrong jsonRdd.

(cherry picked from commit 56dae30c)
Signed-off-by: Michael Armbrust <michael@databricks.com>
parent 32bb97fc
@@ -605,7 +605,7 @@ Spark SQL can automatically infer the schema of a JSON dataset and load it as a
 This conversion can be done using one of two methods in a SQLContext:

 * `jsonFile` - loads data from a directory of JSON files where each line of the files is a JSON object.
-* `jsonRdd` - loads data from an existing RDD where each element of the RDD is a string containing a JSON object.
+* `jsonRDD` - loads data from an existing RDD where each element of the RDD is a string containing a JSON object.

 {% highlight scala %}
 // sc is an existing SparkContext.
@@ -643,7 +643,7 @@ Spark SQL can automatically infer the schema of a JSON dataset and load it as a
 This conversion can be done using one of two methods in a JavaSQLContext :

 * `jsonFile` - loads data from a directory of JSON files where each line of the files is a JSON object.
-* `jsonRdd` - loads data from an existing RDD where each element of the RDD is a string containing a JSON object.
+* `jsonRDD` - loads data from an existing RDD where each element of the RDD is a string containing a JSON object.

 {% highlight java %}
 // sc is an existing JavaSparkContext.
@@ -681,7 +681,7 @@ Spark SQL can automatically infer the schema of a JSON dataset and load it as a
 This conversion can be done using one of two methods in a SQLContext:

 * `jsonFile` - loads data from a directory of JSON files where each line of the files is a JSON object.
-* `jsonRdd` - loads data from an existing RDD where each element of the RDD is a string containing a JSON object.
+* `jsonRDD` - loads data from an existing RDD where each element of the RDD is a string containing a JSON object.

 {% highlight python %}
 # sc is an existing SparkContext.
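The docs being corrected describe `jsonRDD`: each element of the RDD is a string containing one JSON object, and Spark SQL infers a schema by scanning the records. As a rough illustration of that inference idea only — not Spark's actual implementation, and `infer_schema` is a hypothetical helper — a stdlib-only sketch:

```python
import json

def infer_schema(json_lines):
    """Loosely mimic SQLContext.jsonRDD's schema inference: map each
    field name seen across the JSON records to a Python type name.
    (Spark also unifies conflicting types; this sketch just keeps the
    first type observed for each field.)"""
    schema = {}
    for line in json_lines:
        record = json.loads(line)  # each element is one JSON object string
        for key, value in record.items():
            schema.setdefault(key, type(value).__name__)
    return schema

# Records need not share the same fields; the schema is the union.
rows = ['{"name": "Alice", "age": 30}', '{"name": "Bob", "city": "NYC"}']
print(infer_schema(rows))  # {'name': 'str', 'age': 'int', 'city': 'str'}
```

In Spark itself the equivalent call at this doc's vintage was `sqlContext.jsonRDD(anRDD)`, returning a SchemaRDD with the inferred schema.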