  1. Aug 03, 2014
    • [SPARK-2784][SQL] Deprecate hql() method in favor of a config option, 'spark.sql.dialect' · 236dfac6
      Michael Armbrust authored
      Many users have reported being confused by the distinction between the `sql` and `hql` methods.  Specifically, many users think that `sql(...)` cannot be used to read Hive tables.  In this PR I introduce a new configuration option, `spark.sql.dialect`, that picks which dialect will be used for parsing.  For `SQLContext` this must be set to `sql`.  In `HiveContext` it defaults to `hiveql` but can also be set to `sql`.
      
      The `hql` and `hiveql` methods continue to act the same but are now marked as deprecated.
      
      **This is a possibly breaking change for some users unless they set the dialect manually, though this is unlikely.**
      
      For example: `hiveContext.sql("SELECT 1")` will now throw a parsing exception by default, since the default `hiveql` dialect does not accept a SELECT without a FROM clause.
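
      A hedged PySpark sketch of the new behavior (assuming the deprecation applies to the Python API as well; `src` is a placeholder Hive table):

      ```python
      from pyspark.sql import HiveContext

      hiveCtx = HiveContext(sc)
      hiveCtx.hql("SELECT key FROM src")  # still works, but is now deprecated
      hiveCtx.sql("SELECT key FROM src")  # equivalent: HiveContext defaults to the 'hiveql' dialect
      ```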
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #1746 from marmbrus/sqlLanguageConf and squashes the following commits:
      
      ad375cc [Michael Armbrust] Merge remote-tracking branch 'apache/master' into sqlLanguageConf
      20c43f8 [Michael Armbrust] override function instead of just setting the value
      7e4ae93 [Michael Armbrust] Deprecate hql() method in favor of a config option, 'spark.sql.dialect'
      236dfac6
  2. Aug 02, 2014
    • [SPARK-2739][SQL] Rename registerAsTable to registerTempTable · 1a804373
      Michael Armbrust authored
      There have been user complaints that the difference between `registerAsTable` and `saveAsTable` is too subtle.  This PR addresses this by renaming `registerAsTable` to `registerTempTable`, which more clearly reflects what is happening.  `registerAsTable` remains, but will cause a deprecation warning.
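
      A minimal sketch of the rename, assuming an existing SchemaRDD `srdd`:

      ```python
      srdd.registerTempTable("people")  # new, clearer name
      srdd.registerAsTable("people")    # old name still works, but emits a deprecation warning
      ```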
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #1743 from marmbrus/registerTempTable and squashes the following commits:
      
      d031348 [Michael Armbrust] Merge remote-tracking branch 'apache/master' into registerTempTable
      4dff086 [Michael Armbrust] Fix .java files too
      89a2f12 [Michael Armbrust] Merge remote-tracking branch 'apache/master' into registerTempTable
      0b7b71e [Michael Armbrust] Rename registerAsTable to registerTempTable
      1a804373
    • [SPARK-2797] [SQL] SchemaRDDs don't support unpersist() · d210022e
      Yin Huai authored
      The cause is explained in https://issues.apache.org/jira/browse/SPARK-2797.
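
      A minimal sketch of the fix, assuming the PySpark SchemaRDD keeps its Java counterpart in `_jschema_rdd`:

      ```python
      def unpersist(self, blocking=True):
          """Mark this SchemaRDD as non-persistent."""
          # Scala's RDD.unpersist(blocking: Boolean = true) has a default argument,
          # but defaults are invisible through Py4J, so the flag is passed explicitly.
          self._jschema_rdd.unpersist(blocking)
          return self
      ```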
      
      Author: Yin Huai <huai@cse.ohio-state.edu>
      
      Closes #1745 from yhuai/SPARK-2797 and squashes the following commits:
      
      7b1627d [Yin Huai] The unpersist method of the Scala RDD cannot be called without the input parameter (blocking) from PySpark.
      d210022e
    • [SPARK-2097][SQL] UDF Support · 158ad0bb
      Michael Armbrust authored
      This patch adds the ability to register lambda functions written in Python, Java or Scala as UDFs for use in SQL or HiveQL.
      
      Scala:
      ```scala
      registerFunction("strLenScala", (_: String).length)
      sql("SELECT strLenScala('test')")
      ```
      Python:
      ```python
      sqlCtx.registerFunction("strLenPython", lambda x: len(x), IntegerType())
      sqlCtx.sql("SELECT strLenPython('test')")
      ```
      Java:
      ```java
      sqlContext.registerFunction("stringLengthJava", new UDF1<String, Integer>() {
        @Override
        public Integer call(String str) throws Exception {
          return str.length();
        }
      }, DataType.IntegerType);
      
      sqlContext.sql("SELECT stringLengthJava('test')");
      ```
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #1063 from marmbrus/udfs and squashes the following commits:
      
      9eda0fe [Michael Armbrust] newline
      747c05e [Michael Armbrust] Add some scala UDF tests.
      d92727d [Michael Armbrust] Merge remote-tracking branch 'apache/master' into udfs
      005d684 [Michael Armbrust] Fix naming and formatting.
      d14dac8 [Michael Armbrust] Fix last line of autogened java files.
      8135c48 [Michael Armbrust] Move UDF unit tests to pyspark.
      40b0ffd [Michael Armbrust] Merge remote-tracking branch 'apache/master' into udfs
      6a36890 [Michael Armbrust] Switch logging so that SQLContext can be serializable.
      7a83101 [Michael Armbrust] Drop toString
      795fd15 [Michael Armbrust] Try to avoid capturing SQLContext.
      e54fb45 [Michael Armbrust] Docs and tests.
      437cbe3 [Michael Armbrust] Update use of dataTypes, fix some python tests, address review comments.
      01517d6 [Michael Armbrust] Merge remote-tracking branch 'origin/master' into udfs
      8e6c932 [Michael Armbrust] WIP
      3f96a52 [Michael Armbrust] Merge remote-tracking branch 'origin/master' into udfs
      6237c8d [Michael Armbrust] WIP
      2766f0b [Michael Armbrust] Move udfs support to SQL from hive. Add support for Java UDFs.
      0f7d50c [Michael Armbrust] Draft of native Spark SQL UDFs for Scala and Python.
      158ad0bb
  3. Aug 01, 2014
    • [SPARK-2010] [PySpark] [SQL] support nested structure in SchemaRDD · 880eabec
      Davies Liu authored
      Convert each Row in a JavaSchemaRDD into an Array[Any] and unpickle it as a tuple in Python, then convert it into a namedtuple, so users can access fields just like attributes.
      
      This lets nested structures be accessed as objects; it also reduces the size of the serialized data and improves performance.
      
      root
       |-- field1: integer (nullable = true)
       |-- field2: string (nullable = true)
       |-- field3: struct (nullable = true)
       |    |-- field4: integer (nullable = true)
       |    |-- field5: array (nullable = true)
       |    |    |-- element: integer (containsNull = false)
       |-- field6: array (nullable = true)
       |    |-- element: struct (containsNull = false)
       |    |    |-- field7: string (nullable = true)
      
      Then we can access them by `row.field3.field5[0]` or `row.field6[5].field7`.
      
      Schema inference also happens in Python: Row/dict/namedtuple/object values are converted into tuples before serialization, then applySchema is called in the JVM. During inferSchema(), a top-level dict in a row becomes a StructType, but any nested dictionary becomes a MapType.
      
      You can use pyspark.sql.Row to convert an unnamed structure into a Row object, making the RDD's schema inferable. For example:
      
      ctx.inferSchema(rdd.map(lambda x: Row(a=x[0], b=x[1])))
      
      Or you could use Row to create a class just like namedtuple, for example:
      
      Person = Row("name", "age")
      ctx.inferSchema(rdd.map(lambda x: Person(*x)))
      
      Also, you can call applySchema to apply a schema to an RDD of tuples/lists and turn it into a SchemaRDD. The `schema` should be a StructType; see the API docs for details.
      
      schema = StructType([StructField("name", StringType(), True),
                           StructField("age", IntegerType(), True)])
      ctx.applySchema(rdd, schema)
      
      PS: in order to use a namedtuple with inferSchema, the namedtuple must be picklable.
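
      Putting the pieces together, a hedged end-to-end sketch (`sc` is an existing SparkContext; field names are illustrative):

      ```python
      from pyspark.sql import SQLContext, Row

      sqlCtx = SQLContext(sc)
      Person = Row("name", "age")  # Row used as a namedtuple-like class
      rdd = sc.parallelize([("Alice", 1), ("Bob", 2)])
      srdd = sqlCtx.inferSchema(rdd.map(lambda p: Person(*p)))
      row = srdd.collect()[0]
      print(row.name, row.age)     # fields are accessible as attributes
      ```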
      
      Author: Davies Liu <davies.liu@gmail.com>
      
      Closes #1598 from davies/nested and squashes the following commits:
      
      f1d15b6 [Davies Liu] verify schema with the first few rows
      8852aaf [Davies Liu] check type of schema
      abe9e6e [Davies Liu] address comments
      61b2292 [Davies Liu] add @deprecated to pythonToJavaMap
      1e5b801 [Davies Liu] improve cache of classes
      51aa135 [Davies Liu] use Row to infer schema
      e9c0d5c [Davies Liu] remove string typed schema
      353a3f2 [Davies Liu] fix code style
      63de8f8 [Davies Liu] fix typo
      c79ca67 [Davies Liu] fix serialization of nested data
      6b258b5 [Davies Liu] fix pep8
      9d8447c [Davies Liu] apply schema provided by string of names
      f5df97f [Davies Liu] refactor, address comments
      9d9af55 [Davies Liu] use array to applySchema and infer schema in Python
      84679b3 [Davies Liu] Merge branch 'master' of github.com:apache/spark into nested
      0eaaf56 [Davies Liu] fix doc tests
      b3559b4 [Davies Liu] use generated Row instead of namedtuple
      c4ddc30 [Davies Liu] fix conflict between name of fields and variables
      7f6f251 [Davies Liu] address all comments
      d69d397 [Davies Liu] refactor
      2cc2d45 [Davies Liu] refactor
      182fb46 [Davies Liu] refactor
      bc6e9e1 [Davies Liu] switch to new Schema API
      547bf3e [Davies Liu] Merge branch 'master' into nested
      a435b5a [Davies Liu] add docs and code refactor
      2c8debc [Davies Liu] Merge branch 'master' into nested
      644665a [Davies Liu] use tuple and namedtuple for schemardd
      880eabec
  4. Jul 31, 2014
    • [SPARK-2397][SQL] Deprecate LocalHiveContext · 72cfb139
      Michael Armbrust authored
      LocalHiveContext is redundant with HiveContext.  The only difference is it creates `./metastore` instead of `./metastore_db`.
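
      For Python users the migration is a one-line change (a sketch, assuming the PySpark API mirrors the Scala deprecation):

      ```python
      from pyspark.sql import HiveContext  # instead of LocalHiveContext

      # HiveContext creates ./metastore_db rather than ./metastore; otherwise identical.
      hiveCtx = HiveContext(sc)
      ```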
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #1641 from marmbrus/localHiveContext and squashes the following commits:
      
      e5ec497 [Michael Armbrust] Add deprecation version
      626e056 [Michael Armbrust] Don't remove from imports yet
      905cc5f [Michael Armbrust] Merge remote-tracking branch 'apache/master' into localHiveContext
      1c2727e [Michael Armbrust] Deprecate LocalHiveContext
      72cfb139
  5. Jul 30, 2014
    • [SPARK-2179][SQL] Public API for DataTypes and Schema · 7003c163
      Yin Huai authored
      The current PR contains the following changes:
      * Expose `DataType`s in the sql package (internal details are private to sql).
      * Users can create Rows.
      * Introduce `applySchema` to create a `SchemaRDD` by applying a `schema: StructType` to an `RDD[Row]`.
      * Add a function `simpleString` to every `DataType`. Also, the schema represented by a `StructType` can be visualized by `printSchema`.
      * `ScalaReflection.typeOfObject` provides a way to infer the Catalyst data type based on an object. Also, we can compose `typeOfObject` with custom logic to form a new function that infers the data type (for different use cases).
      * `JsonRDD` has been refactored to use changes introduced by this PR.
      * Add a field `containsNull` to `ArrayType`. So, we can explicitly mark if an `ArrayType` can contain null values. The default value of `containsNull` is `false`.
      
      New APIs are introduced in the sql package object and SQLContext. You can find the scaladoc at
      [sql package object](http://yhuai.github.io/site/api/scala/index.html#org.apache.spark.sql.package) and [SQLContext](http://yhuai.github.io/site/api/scala/index.html#org.apache.spark.sql.SQLContext).
      
      An example of using `applySchema` is shown below.
      ```scala
      import org.apache.spark.sql._
      val sqlContext = new org.apache.spark.sql.SQLContext(sc)
      
      val schema =
        StructType(
          StructField("name", StringType, false) ::
          StructField("age", IntegerType, true) :: Nil)
      
      val people = sc.textFile("examples/src/main/resources/people.txt").map(_.split(",")).map(p => Row(p(0), p(1).trim.toInt))
      val peopleSchemaRDD = sqlContext.applySchema(people, schema)
      peopleSchemaRDD.printSchema
      // root
      // |-- name: string (nullable = false)
      // |-- age: integer (nullable = true)
      
      peopleSchemaRDD.registerAsTable("people")
      sqlContext.sql("select name from people").collect.foreach(println)
      ```
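
      Since the PR also adds Python APIs (see the commit list below), a hedged PySpark counterpart of the example above:

      ```python
      from pyspark.sql import SQLContext, StructType, StructField, StringType, IntegerType

      sqlCtx = SQLContext(sc)
      schema = StructType([StructField("name", StringType(), False),
                           StructField("age", IntegerType(), True)])

      lines = sc.textFile("examples/src/main/resources/people.txt")
      people = lines.map(lambda l: l.split(",")).map(lambda p: (p[0], int(p[1].strip())))
      peopleSchemaRDD = sqlCtx.applySchema(people, schema)

      peopleSchemaRDD.registerAsTable("people")
      print(sqlCtx.sql("SELECT name FROM people").collect())
      ```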
      
      I will add new content to the SQL programming guide later.
      
      JIRA: https://issues.apache.org/jira/browse/SPARK-2179
      
      Author: Yin Huai <huai@cse.ohio-state.edu>
      
      Closes #1346 from yhuai/dataTypeAndSchema and squashes the following commits:
      
      1d45977 [Yin Huai] Clean up.
      a6e08b4 [Yin Huai] Merge remote-tracking branch 'upstream/master' into dataTypeAndSchema
      c712fbf [Yin Huai] Converts types of values based on defined schema.
      4ceeb66 [Yin Huai] Merge remote-tracking branch 'upstream/master' into dataTypeAndSchema
      e5f8df5 [Yin Huai] Scaladoc.
      122d1e7 [Yin Huai] Address comments.
      03bfd95 [Yin Huai] Merge remote-tracking branch 'upstream/master' into dataTypeAndSchema
      2476ed0 [Yin Huai] Minor updates.
      ab71f21 [Yin Huai] Format.
      fc2bed1 [Yin Huai] Merge remote-tracking branch 'upstream/master' into dataTypeAndSchema
      bd40a33 [Yin Huai] Address comments.
      991f860 [Yin Huai] Move "asJavaDataType" and "asScalaDataType" to DataTypeConversions.scala.
      1cb35fe [Yin Huai] Add "valueContainsNull" to MapType.
      3edb3ae [Yin Huai] Python doc.
      692c0b9 [Yin Huai] Merge remote-tracking branch 'upstream/master' into dataTypeAndSchema
      1d93395 [Yin Huai] Python APIs.
      246da96 [Yin Huai] Add java data type APIs to javadoc index.
      1db9531 [Yin Huai] Merge remote-tracking branch 'upstream/master' into dataTypeAndSchema
      d48fc7b [Yin Huai] Minor updates.
      33c4fec [Yin Huai] Merge remote-tracking branch 'upstream/master' into dataTypeAndSchema
      b9f3071 [Yin Huai] Java API for applySchema.
      1c9f33c [Yin Huai] Java APIs for DataTypes and Row.
      624765c [Yin Huai] Tests for applySchema.
      aa92e84 [Yin Huai] Update data type tests.
      8da1a17 [Yin Huai] Add Row.fromSeq.
      9c99bc0 [Yin Huai] Several minor updates.
      1d9c13a [Yin Huai] Update applySchema API.
      85e9b51 [Yin Huai] Merge remote-tracking branch 'upstream/master' into dataTypeAndSchema
      e495e4e [Yin Huai] More comments.
      42d47a3 [Yin Huai] Merge remote-tracking branch 'upstream/master' into dataTypeAndSchema
      c3f4a02 [Yin Huai] Merge remote-tracking branch 'upstream/master' into dataTypeAndSchema
      2e58dbd [Yin Huai] Merge remote-tracking branch 'upstream/master' into dataTypeAndSchema
      b8b7db4 [Yin Huai] 1. Move sql package object and package-info to sql-core. 2. Minor updates on APIs. 3. Update scala doc.
      68525a2 [Yin Huai] Update JSON unit test.
      3209108 [Yin Huai] Add unit tests.
      dcaf22f [Yin Huai] Add a field containsNull to ArrayType to indicate if an array can contain null values or not. If an ArrayType is constructed by "ArrayType(elementType)" (the existing constructor), the value of containsNull is false.
      9168b83 [Yin Huai] Update comments.
      fc649d7 [Yin Huai] Merge remote-tracking branch 'upstream/master' into dataTypeAndSchema
      eca7d04 [Yin Huai] Add two apply methods which will be used to extract StructField(s) from a StructType.
      949d6bb [Yin Huai] When creating a SchemaRDD for a JSON dataset, users can apply an existing schema.
      7a6a7e5 [Yin Huai] Fix bug introduced by the change made on SQLContext.inferSchema.
      43a45e1 [Yin Huai] Remove sql.util.package introduced in a previous commit.
      0266761 [Yin Huai] Format
      03eec4c [Yin Huai] Merge remote-tracking branch 'upstream/master' into dataTypeAndSchema
      90460ac [Yin Huai] Infer the Catalyst data type from an object and cast a data value to the expected type.
      3fa0df5 [Yin Huai] Provide easier ways to construct a StructType.
      16be3e5 [Yin Huai] This commit contains three changes: * Expose `DataType`s in the sql package (internal details are private to sql). * Introduce `createSchemaRDD` to create a `SchemaRDD` from an `RDD` with a provided schema (represented by a `StructType`) and a provided function to construct `Row`, * Add a function `simpleString` to every `DataType`. Also, the schema represented by a `StructType` can be visualized by `printSchema`.
      7003c163
  6. Jul 29, 2014
    • [SPARK-2674] [SQL] [PySpark] support datetime type for SchemaRDD · f0d880e2
      Davies Liu authored
      A Python datetime or time value is converted into a java.util.Calendar during serialization; inferSchema() then converts it into a java.sql.Timestamp.
      
      In javaToPython(), a Timestamp is converted back into a Calendar, which becomes a Python datetime after unpickling.
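
      A hedged sketch of the round trip (dict-based schema inference, as was usual at this point):

      ```python
      from datetime import datetime
      from pyspark.sql import SQLContext

      sqlCtx = SQLContext(sc)
      rdd = sc.parallelize([{"event": "login", "time": datetime(2014, 7, 29, 12, 0)}])
      srdd = sqlCtx.inferSchema(rdd)    # 'time' is inferred as a timestamp column
      print(srdd.collect()[0]["time"])  # comes back as a Python datetime
      ```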
      
      Author: Davies Liu <davies.liu@gmail.com>
      
      Closes #1601 from davies/date and squashes the following commits:
      
      f0599b0 [Davies Liu] remove tests for sets and tuple in sql, fix list of list
      c9d607a [Davies Liu] convert datetype for runtime
      709d40d [Davies Liu] remove brackets
      96db384 [Davies Liu] support datetime type for SchemaRDD
      f0d880e2
  7. Jul 22, 2014
    • [SPARK-2470] PEP8 fixes to PySpark · 5d16d5bb
      Nicholas Chammas authored
      This pull request aims to resolve all outstanding PEP8 violations in PySpark.
      
      Author: Nicholas Chammas <nicholas.chammas@gmail.com>
      Author: nchammas <nicholas.chammas@gmail.com>
      
      Closes #1505 from nchammas/master and squashes the following commits:
      
      98171af [Nicholas Chammas] [SPARK-2470] revert PEP 8 fixes to cloudpickle
      cba7768 [Nicholas Chammas] [SPARK-2470] wrap expression list in parentheses
      e178dbe [Nicholas Chammas] [SPARK-2470] style - change position of line break
      9127d2b [Nicholas Chammas] [SPARK-2470] wrap expression lists in parentheses
      22132a4 [Nicholas Chammas] [SPARK-2470] wrap conditionals in parentheses
      24639bc [Nicholas Chammas] [SPARK-2470] fix whitespace for doctest
      7d557b7 [Nicholas Chammas] [SPARK-2470] PEP8 fixes to tests.py
      8f8e4c0 [Nicholas Chammas] [SPARK-2470] PEP8 fixes to storagelevel.py
      b3b96cf [Nicholas Chammas] [SPARK-2470] PEP8 fixes to statcounter.py
      d644477 [Nicholas Chammas] [SPARK-2470] PEP8 fixes to worker.py
      aa3a7b6 [Nicholas Chammas] [SPARK-2470] PEP8 fixes to sql.py
      1916859 [Nicholas Chammas] [SPARK-2470] PEP8 fixes to shell.py
      95d1d95 [Nicholas Chammas] [SPARK-2470] PEP8 fixes to serializers.py
      a0fec2e [Nicholas Chammas] [SPARK-2470] PEP8 fixes to mllib
      c85e1e5 [Nicholas Chammas] [SPARK-2470] PEP8 fixes to join.py
      d14f2f1 [Nicholas Chammas] [SPARK-2470] PEP8 fixes to __init__.py
      81fcb20 [Nicholas Chammas] [SPARK-2470] PEP8 fixes to resultiterable.py
      1bde265 [Nicholas Chammas] [SPARK-2470] PEP8 fixes to java_gateway.py
      7fc849c [Nicholas Chammas] [SPARK-2470] PEP8 fixes to daemon.py
      ca2d28b [Nicholas Chammas] [SPARK-2470] PEP8 fixes to context.py
      f4e0039 [Nicholas Chammas] [SPARK-2470] PEP8 fixes to conf.py
      a6d5e4b [Nicholas Chammas] [SPARK-2470] PEP8 fixes to cloudpickle.py
      f0a7ebf [Nicholas Chammas] [SPARK-2470] PEP8 fixes to rddsampler.py
      4dd148f [nchammas] Merge pull request #5 from apache/master
      f7e4581 [Nicholas Chammas] unrelated pep8 fix
      a36eed0 [Nicholas Chammas] name ec2 instances and security groups consistently
      de7292a [nchammas] Merge pull request #4 from apache/master
      2e4fe00 [nchammas] Merge pull request #3 from apache/master
      89fde08 [nchammas] Merge pull request #2 from apache/master
      69f6e22 [Nicholas Chammas] PEP8 fixes
      2627247 [Nicholas Chammas] broke up lines before they hit 100 chars
      6544b7e [Nicholas Chammas] [SPARK-2065] give launched instances names
      69da6cf [nchammas] Merge pull request #1 from apache/master
      5d16d5bb
  8. Jun 17, 2014
    • [SPARK-2060][SQL] Querying JSON Datasets with SQL and DSL in Spark SQL · d2f4f30b
      Yin Huai authored
      JIRA: https://issues.apache.org/jira/browse/SPARK-2060
      
      Programming guide: http://yhuai.github.io/site/sql-programming-guide.html
      
      Scala doc of SQLContext: http://yhuai.github.io/site/api/scala/index.html#org.apache.spark.sql.SQLContext
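
      A hedged PySpark sketch of the new JSON support (`jsonFile` works the same way on a path):

      ```python
      from pyspark.sql import SQLContext

      sqlCtx = SQLContext(sc)
      people = sqlCtx.jsonRDD(sc.parallelize(['{"name": "Alice", "age": 1}',
                                              '{"name": "Bob", "age": 2}']))
      people.registerAsTable("people")
      print(sqlCtx.sql("SELECT name FROM people WHERE age = 2").collect())
      ```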
      
      Author: Yin Huai <huai@cse.ohio-state.edu>
      
      Closes #999 from yhuai/newJson and squashes the following commits:
      
      227e89e [Yin Huai] Merge remote-tracking branch 'upstream/master' into newJson
      ce8eedd [Yin Huai] rxin's comments.
      bc9ac51 [Yin Huai] Merge remote-tracking branch 'upstream/master' into newJson
      94ffdaa [Yin Huai] Remove "get" from method names.
      ce31c81 [Yin Huai] Merge remote-tracking branch 'upstream/master' into newJson
      e2773a6 [Yin Huai] Merge remote-tracking branch 'upstream/master' into newJson
      79ea9ba [Yin Huai] Fix typos.
      5428451 [Yin Huai] Newline
      1f908ce [Yin Huai] Remove extra line.
      d7a005c [Yin Huai] Merge remote-tracking branch 'upstream/master' into newJson
      7ea750e [Yin Huai] marmbrus's comments.
      6a5f5ef [Yin Huai] Merge remote-tracking branch 'upstream/master' into newJson
      83013fb [Yin Huai] Update Java Example.
      e7a6c19 [Yin Huai] SchemaRDD.javaToPython should convert a field with the StructType to a Map.
      6d20b85 [Yin Huai] Merge remote-tracking branch 'upstream/master' into newJson
      4fbddf0 [Yin Huai] Programming guide.
      9df8c5a [Yin Huai] Python API.
      7027634 [Yin Huai] Java API.
      cff84cc [Yin Huai] Use a SchemaRDD for a JSON dataset.
      d0bd412 [Yin Huai] Merge remote-tracking branch 'upstream/master' into newJson
      ab810b0 [Yin Huai] Make JsonRDD private.
      6df0891 [Yin Huai] Apache header.
      8347f2e [Yin Huai] Merge remote-tracking branch 'upstream/master' into newJson
      66f9e76 [Yin Huai] Update docs and use the entire dataset to infer the schema.
      8ffed79 [Yin Huai] Update the example.
      a5a4b52 [Yin Huai] Merge remote-tracking branch 'upstream/master' into newJson
      4325475 [Yin Huai] If a sampled dataset is used for schema inferring, update the schema of the JsonTable after first execution.
      65b87f0 [Yin Huai] Fix sampling...
      8846af5 [Yin Huai] API doc.
      52a2275 [Yin Huai] Merge remote-tracking branch 'upstream/master' into newJson
      0387523 [Yin Huai] Address PR comments.
      666b957 [Yin Huai] Merge remote-tracking branch 'upstream/master' into newJson
      a2313a6 [Yin Huai] Address PR comments.
      f3ce176 [Yin Huai] After type conflict resolution, if a NullType is found, StringType is used.
      0576406 [Yin Huai] Add Apache license header.
      af91b23 [Yin Huai] Merge remote-tracking branch 'upstream/master' into newJson
      f45583b [Yin Huai] Infer the schema of a JSON dataset (a text file with one JSON object per line or a RDD[String] with one JSON object per string) and returns a SchemaRDD.
      f31065f [Yin Huai] A query plan or a SchemaRDD can print out its schema.
      d2f4f30b
  9. Jun 16, 2014
    • [SPARK-2010] Support for nested data in PySpark SQL · 4fdb4917
      Kan Zhang authored
      JIRA issue https://issues.apache.org/jira/browse/SPARK-2010
      
      This PR adds support for nested collection types in PySpark SQL, including
      array, dict, list, set, and tuple. For example:
      
      ```
      >>> from array import array
      >>> from pyspark.sql import SQLContext
      >>> sqlCtx = SQLContext(sc)
      >>> rdd = sc.parallelize([
      ...         {"f1" : array('i', [1, 2]), "f2" : {"row1" : 1.0}},
      ...         {"f1" : array('i', [2, 3]), "f2" : {"row2" : 2.0}}])
      >>> srdd = sqlCtx.inferSchema(rdd)
      >>> srdd.collect() == [{"f1" : array('i', [1, 2]), "f2" : {"row1" : 1.0}},
      ...                    {"f1" : array('i', [2, 3]), "f2" : {"row2" : 2.0}}]
      True
      >>> rdd = sc.parallelize([
      ...         {"f1" : [[1, 2], [2, 3]], "f2" : set([1, 2]), "f3" : (1, 2)},
      ...         {"f1" : [[2, 3], [3, 4]], "f2" : set([2, 3]), "f3" : (2, 3)}])
      >>> srdd = sqlCtx.inferSchema(rdd)
      >>> srdd.collect() == \
      ... [{"f1" : [[1, 2], [2, 3]], "f2" : set([1, 2]), "f3" : (1, 2)},
      ...  {"f1" : [[2, 3], [3, 4]], "f2" : set([2, 3]), "f3" : (2, 3)}]
      True
      ```
      
      Author: Kan Zhang <kzhang@apache.org>
      
      Closes #1041 from kanzhang/SPARK-2010 and squashes the following commits:
      
      1b2891d [Kan Zhang] [SPARK-2010] minor doc change and adding a TODO
      504f27e [Kan Zhang] [SPARK-2010] Support for nested data in PySpark SQL
      4fdb4917
  10. Jun 14, 2014
    • [SPARK-2079] Support batching when serializing SchemaRDD to Python · 2550533a
      Kan Zhang authored
      Added batching with default batch size 10 in SchemaRDD.javaToPython
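
      A minimal sketch of the idea on the Python side, assuming the standard pyspark.serializers classes:

      ```python
      from pyspark.serializers import BatchedSerializer, PickleSerializer

      # Rows produced by SchemaRDD.javaToPython are pickled in groups of 10,
      # amortizing the per-row serialization overhead:
      ser = BatchedSerializer(PickleSerializer(), 10)
      ```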
      
      Author: Kan Zhang <kzhang@apache.org>
      
      Closes #1023 from kanzhang/SPARK-2079 and squashes the following commits:
      
      2d1915e [Kan Zhang] [SPARK-2079] Add batching in SchemaRDD.javaToPython
      19b0c09 [Kan Zhang] [SPARK-2079] Removing unnecessary wrapping in SchemaRDD.javaToPython
      2550533a
  11. Jun 11, 2014
    • HOTFIX: PySpark tests should be order insensitive. · 14e6dc94
      Patrick Wendell authored
      This has been messing up the SQL PySpark tests on Jenkins.
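
      The usual remedy, presumably the one applied here, is to compare results order-insensitively (`srdd` and `expected` are hypothetical):

      ```python
      # Flaky: row order varies with partitioning.
      assert srdd.collect() == expected
      # Stable: compare sorted results instead.
      assert sorted(srdd.collect()) == sorted(expected)
      ```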
      
      Author: Patrick Wendell <pwendell@gmail.com>
      
      Closes #1054 from pwendell/pyspark and squashes the following commits:
      
      1eb5487 [Patrick Wendell] False change
      06f062d [Patrick Wendell] HOTFIX: PySpark tests should be order insensitive
      14e6dc94
  12. May 25, 2014
    • Python docstring update for sql.py. · 14f0358b
      Reynold Xin authored
      Mostly related to the following two rules in PEP8 and PEP257:
      - Line length < 72 chars.
      - First line should be a concise description of the function/class.
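
      For instance, a docstring following both rules might look like this (a hypothetical illustration):

      ```python
      def count(self):
          """Return the number of rows in this SchemaRDD.

          Unlike the base RDD count(), this is executed through
          the underlying query optimizer.
          """
      ```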
      
      Author: Reynold Xin <rxin@apache.org>
      
      Closes #869 from rxin/docstring-schemardd and squashes the following commits:
      
      7cf0cbc [Reynold Xin] Updated sql.py for pep8 docstring.
      0a4aef9 [Reynold Xin] Merge branch 'master' into docstring-schemardd
      6678937 [Reynold Xin] Python docstring update for sql.py.
      14f0358b
    • SPARK-1822: Some minor cleanup work on SchemaRDD.count() · d66642e3
      Reynold Xin authored
      Minor cleanup following #841.
      
      Author: Reynold Xin <rxin@apache.org>
      
      Closes #868 from rxin/schema-count and squashes the following commits:
      
      5442651 [Reynold Xin] SPARK-1822: Some minor cleanup work on SchemaRDD.count()
      d66642e3
    • [SPARK-1822] SchemaRDD.count() should use query optimizer · 6052db9d
      Kan Zhang authored
      Author: Kan Zhang <kzhang@apache.org>
      
      Closes #841 from kanzhang/SPARK-1822 and squashes the following commits:
      
      2f8072a [Kan Zhang] [SPARK-1822] Minor style update
      cf4baa4 [Kan Zhang] [SPARK-1822] Adding Scaladoc
      e67c910 [Kan Zhang] [SPARK-1822] SchemaRDD.count() should use optimizer
      6052db9d
  13. May 07, 2014
    • [SPARK-1460] Returning SchemaRDD instead of normal RDD on Set operations... · 967635a2
      Kan Zhang authored
      ... that do not change schema
      
      Author: Kan Zhang <kzhang@apache.org>
      
      Closes #448 from kanzhang/SPARK-1460 and squashes the following commits:
      
      111e388 [Kan Zhang] silence MiMa errors in EdgeRDD and VertexRDD
      91dc787 [Kan Zhang] Taking into account newly added Ordering param
      79ed52a [Kan Zhang] [SPARK-1460] Returning SchemaRDD on Set operations that do not change schema
      967635a2
  14. Apr 29, 2014
    • Minor fix to python table caching API. · 497be3ca
      Michael Armbrust authored
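
      The API in question, sketched with an illustrative table name (assuming both methods are exposed on SQLContext):

      ```python
      sqlCtx.cacheTable("people")    # cache the registered table's contents in memory
      sqlCtx.uncacheTable("people")  # release it
      ```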
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #585 from marmbrus/pythonCacheTable and squashes the following commits:
      
      7ec1f91 [Michael Armbrust] Minor fix to python table caching API.
      497be3ca
  15. Apr 19, 2014
    • Add insertInto and saveAsTable to Python API. · 10d04213
      Michael Armbrust authored
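
      A hedged sketch of the two new SchemaRDD methods (Hive support required; names illustrative):

      ```python
      srdd.saveAsTable("people")   # create a table from this SchemaRDD's data
      srdd2.insertInto("people")   # append another SchemaRDD's rows into it
      ```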
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #447 from marmbrus/pythonInsert and squashes the following commits:
      
      c7ab692 [Michael Armbrust] Keep docstrings < 72 chars.
      ff62870 [Michael Armbrust] Add insertInto and saveAsTable to Python API.
      10d04213
  16. Apr 15, 2014
    • [SQL] SPARK-1424 Generalize insertIntoTable functions on SchemaRDDs · 273c2fd0
      Michael Armbrust authored
      This makes it possible to create tables and insert into them using the DSL and SQL for the Scala and Java APIs.
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #354 from marmbrus/insertIntoTable and squashes the following commits:
      
      6c6f227 [Michael Armbrust] Create random temporary files in python parquet unit tests.
      f5e6d5c [Michael Armbrust] Merge remote-tracking branch 'origin/master' into insertIntoTable
      765c506 [Michael Armbrust] Add to JavaAPI.
      77b512c [Michael Armbrust] typos.
      5c3ef95 [Michael Armbrust] use names for boolean args.
      882afdf [Michael Armbrust] Change createTableAs to saveAsTable.  Clean up api annotations.
      d07d94b [Michael Armbrust] Add tests, support for creating parquet files and hive tables.
      fa3fe81 [Michael Armbrust] Make insertInto available on JavaSchemaRDD as well.  Add createTableAs function.
      273c2fd0
    • SPARK-1374: PySpark API for SparkSQL · c99bcb7f
      Ahir Reddy authored
      An initial API that exposes SparkSQL functionality in PySpark. A PythonRDD composed of dictionaries, with string keys and primitive values (boolean, float, int, long, string), can be converted into a SchemaRDD that supports SQL queries.
      
      ```
      from pyspark.sql import SQLContext
      sqlCtx = SQLContext(sc)
      rdd = sc.parallelize([{"field1" : 1, "field2" : "row1"}, {"field1" : 2, "field2": "row2"}, {"field1" : 3, "field2": "row3"}])
      srdd = sqlCtx.applySchema(rdd)
      sqlCtx.registerRDDAsTable(srdd, "table1")
      srdd2 = sqlCtx.sql("SELECT field1 AS f1, field2 as f2 from table1")
      srdd2.collect()
      ```
      The last line yields ```[{"f1" : 1, "f2" : "row1"}, {"f1" : 2, "f2": "row2"}, {"f1" : 3, "f2": "row3"}]```
      
      Author: Ahir Reddy <ahirreddy@gmail.com>
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #363 from ahirreddy/pysql and squashes the following commits:
      
      0294497 [Ahir Reddy] Updated log4j properties to suppress Hive warnings
      307d6e0 [Ahir Reddy] Style fix
      6f7b8f6 [Ahir Reddy] Temporary fix MIMA checker. Since we now assemble Spark jar with Hive, we don't want to check the interfaces of all of our hive dependencies
      3ef074a [Ahir Reddy] Updated documentation because classes moved to sql.py
      29245bf [Ahir Reddy] Cache underlying SchemaRDD instead of generating and caching PythonRDD
      f2312c7 [Ahir Reddy] Moved everything into sql.py
      a19afe4 [Ahir Reddy] Doc fixes
      6d658ba [Ahir Reddy] Remove the metastore directory created by the HiveContext tests in SparkSQL
      521ff6d [Ahir Reddy] Trying to get spark to build with hive
      ab95eba [Ahir Reddy] Set SPARK_HIVE=true on jenkins
      ded03e7 [Ahir Reddy] Added doc test for HiveContext
      22de1d4 [Ahir Reddy] Fixed maven pyrolite dependency
      e4da06c [Ahir Reddy] Display message if hive is not built into spark
      227a0be [Michael Armbrust] Update API links. Fix Hive example.
      58e2aa9 [Michael Armbrust] Build Docs for pyspark SQL Api.  Minor fixes.
      4285340 [Michael Armbrust] Fix building of Hive API Docs.
      38a92b0 [Michael Armbrust] Add note to future non-python developers about python docs.
      337b201 [Ahir Reddy] Changed com.clearspring.analytics stream version from 2.4.0 to 2.5.1 to match SBT build, and added pyrolite to maven build
      40491c9 [Ahir Reddy] PR Changes + Method Visibility
      1836944 [Michael Armbrust] Fix comments.
      e00980f [Michael Armbrust] First draft of python sql programming guide.
      b0192d3 [Ahir Reddy] Added Long, Double and Boolean as usable types + unit test
      f98a422 [Ahir Reddy] HiveContexts
      79621cf [Ahir Reddy] cleaning up cruft
      b406ba0 [Ahir Reddy] doctest formatting
      20936a5 [Ahir Reddy] Added tests and documentation
      e4d21b4 [Ahir Reddy] Added pyrolite dependency
      79f739d [Ahir Reddy] added more tests
      7515ba0 [Ahir Reddy] added more tests :)
      d26ec5e [Ahir Reddy] added test
      e9f5b8d [Ahir Reddy] adding tests
      906d180 [Ahir Reddy] added todo explaining cost of creating Row object in python
      251f99d [Ahir Reddy] for now only allow dictionaries as input
      09b9980 [Ahir Reddy] made jrdd explicitly lazy
      c608947 [Ahir Reddy] SchemaRDD now has all RDD operations
      725c91e [Ahir Reddy] awesome row objects
      55d1c76 [Ahir Reddy] return row objects
      4fe1319 [Ahir Reddy] output dictionaries correctly
      be079de [Ahir Reddy] returning dictionaries works
      cd5f79f [Ahir Reddy] Switched to using Scala SQLContext
      e948bd9 [Ahir Reddy] yippie
      4886052 [Ahir Reddy] even better
      c0fb1c6 [Ahir Reddy] more working
      043ca85 [Ahir Reddy] working
      5496f9f [Ahir Reddy] doesn't crash
      b8b904b [Ahir Reddy] Added schema rdd class
      67ba875 [Ahir Reddy] java to python, and python to java
      bcc0f23 [Ahir Reddy] Java to python
      ab6025d [Ahir Reddy] compiling
      c99bcb7f