  6. Sep 02, 2015
    • [SPARK-10417] [SQL] Iterating through Column results in infinite loop · 6cd98c18
      0x0FFF authored
      The `pyspark.sql.column.Column` object has a `__getitem__` method, which makes it iterable in Python. `__getitem__` exists to handle the case where the column holds a list or dict, so that you can access individual elements of it through the DataFrame API. The ability to iterate over the column is just a side effect, and it can confuse people getting familiar with Spark DataFrames (since you can iterate that way over a Pandas DataFrame, for instance).
      
      Issue reproduction:
      ```python
      df = sqlContext.jsonRDD(sc.parallelize(['{"name": "El Magnifico"}']))
      for i in df["name"]: print i
      ```
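The infinite loop is a consequence of Python's legacy iteration protocol: when a class defines `__getitem__` but not `__iter__`, `iter()` falls back to calling `obj[0]`, `obj[1]`, and so on until `IndexError` is raised. `Column.__getitem__` returns a new Column expression for any index and never raises, so iteration never terminates. A minimal pure-Python sketch of the mechanism (the class name is illustrative, not Spark code):

```python
class FakeColumn:
    """Mimics pyspark's Column: __getitem__ builds a new expression, never raises."""
    def __init__(self, expr):
        self.expr = expr

    def __getitem__(self, key):
        return FakeColumn("%s[%r]" % (self.expr, key))

col = FakeColumn("name")
it = iter(col)        # legal: Python falls back to the __getitem__ protocol
print(next(it).expr)  # → name[0]
print(next(it).expr)  # → name[1]
# next() never raises IndexError here, so `for i in col:` never terminates
```

The fix makes `Column` explicitly non-iterable (e.g. by raising `TypeError` from an `__iter__` override), so a `for` loop fails fast instead of spinning forever.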
      
      Author: 0x0FFF <programmerag@gmail.com>
      
      Closes #8574 from 0x0FFF/SPARK-10417.
      6cd98c18
  7. Sep 01, 2015
    • [SPARK-10392] [SQL] Pyspark - Wrong DateType support on JDBC connection · 00d9af5e
      0x0FFF authored
      This PR addresses issue [SPARK-10392](https://issues.apache.org/jira/browse/SPARK-10392)
      The problem is that for the "start of epoch" date (01 Jan 1970), the PySpark `DateType` class returns 0 instead of a `datetime.date`, due to the implementation of its return statement
      
      Issue reproduction on master:
      ```
      >>> from pyspark.sql.types import *
      >>> a = DateType()
      >>> a.fromInternal(0)
      0
      >>> a.fromInternal(1)
      datetime.date(1970, 1, 2)
      ```
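My reading of the root cause (a paraphrase, not a quote of the actual `DateType.fromInternal` source): the return statement uses the `v and X` truthiness idiom, which short-circuits to `0` when `v == 0`, i.e. exactly the epoch day. A standalone sketch of the bug and the fix:

```python
import datetime

EPOCH_ORDINAL = datetime.date(1970, 1, 1).toordinal()

def from_internal_buggy(v):
    # `v and X` evaluates to 0 when v == 0, skipping the date construction
    return v and datetime.date.fromordinal(v + EPOCH_ORDINAL)

def from_internal_fixed(v):
    # explicit None check keeps 0 (the epoch itself) as a valid day offset
    if v is None:
        return None
    return datetime.date.fromordinal(v + EPOCH_ORDINAL)

print(from_internal_buggy(0))  # → 0 (the bug)
print(from_internal_fixed(0))  # → 1970-01-01
print(from_internal_fixed(1))  # → 1970-01-02
```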
      
      Author: 0x0FFF <programmerag@gmail.com>
      
      Closes #8556 from 0x0FFF/SPARK-10392.
      00d9af5e
    • [SPARK-10162] [SQL] Fix the timezone omitting for PySpark Dataframe filter function · bf550a4b
      0x0FFF authored
      This PR addresses [SPARK-10162](https://issues.apache.org/jira/browse/SPARK-10162)
      The issue is with the DataFrame `filter()` function when a `datetime.datetime` is passed to it:
      * The timezone information of the datetime is ignored
      * The datetime is assumed to be in the local timezone, which depends on the OS timezone setting
      
      The fix includes both a code change and a regression test. Problem reproduction code on master:
      ```python
      import pytz
      from datetime import datetime
      from pyspark.sql import *
      from pyspark.sql.types import *
      sqc = SQLContext(sc)
      df = sqc.createDataFrame([], StructType([StructField("dt", TimestampType())]))
      
      m1 = pytz.timezone('UTC')
      m2 = pytz.timezone('Etc/GMT+3')
      
      df.filter(df.dt > datetime(2000, 01, 01, tzinfo=m1)).explain()
      df.filter(df.dt > datetime(2000, 01, 01, tzinfo=m2)).explain()
      ```
      Both filters produce the same timestamp, ignoring the time zone:
      ```
      >>> df.filter(df.dt > datetime(2000, 01, 01, tzinfo=m1)).explain()
      Filter (dt#0 > 946713600000000)
       Scan PhysicalRDD[dt#0]
      
      >>> df.filter(df.dt > datetime(2000, 01, 01, tzinfo=m2)).explain()
      Filter (dt#0 > 946713600000000)
       Scan PhysicalRDD[dt#0]
      ```
      After the fix:
      ```
      >>> df.filter(df.dt > datetime(2000, 01, 01, tzinfo=m1)).explain()
      Filter (dt#0 > 946684800000000)
       Scan PhysicalRDD[dt#0]
      
      >>> df.filter(df.dt > datetime(2000, 01, 01, tzinfo=m2)).explain()
      Filter (dt#0 > 946695600000000)
       Scan PhysicalRDD[dt#0]
      ```
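For reference, the numbers in these plans are epoch microseconds. Honoring `tzinfo` means computing the offset from the UTC epoch rather than assuming local time; the corrected values above can be reproduced with only the stdlib (`datetime.timezone` standing in for pytz):

```python
from datetime import datetime, timezone, timedelta

def to_internal_micros(dt):
    """Convert a tz-aware datetime to epoch microseconds, honoring tzinfo."""
    epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
    return int((dt - epoch).total_seconds() * 1_000_000)

utc = timezone.utc
gmt_plus3 = timezone(timedelta(hours=-3))  # Etc/GMT+3 is UTC-3 (POSIX sign convention)

print(to_internal_micros(datetime(2000, 1, 1, tzinfo=utc)))        # → 946684800000000
print(to_internal_micros(datetime(2000, 1, 1, tzinfo=gmt_plus3)))  # → 946695600000000
```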
      PR [8536](https://github.com/apache/spark/pull/8536) was accidentally closed when I dropped the repo
      
      Author: 0x0FFF <programmerag@gmail.com>
      
      Closes #8555 from 0x0FFF/SPARK-10162.
      bf550a4b
  11. Aug 08, 2015
    • [SPARK-6902] [SQL] [PYSPARK] Row should be read-only · ac507a03
      Davies Liu authored
      Raise a read-only exception when a user tries to mutate a Row.
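A hedged sketch of the idea (names and the exact exception type are illustrative, not Spark's implementation): since `Row` subclasses `tuple`, positional entries are already immutable, and the remaining hole is attribute assignment, which is closed by overriding `__setattr__`:

```python
class Row(tuple):
    """Minimal read-only Row: field values live in the tuple, names alongside."""
    def __new__(cls, **kwargs):
        names = sorted(kwargs)
        row = tuple.__new__(cls, (kwargs[n] for n in names))
        object.__setattr__(row, "_fields", names)  # bypass the guard below once
        return row

    def __getattr__(self, name):
        try:
            return self[self._fields.index(name)]
        except ValueError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        # any mutation attempt raises instead of silently appearing to work
        raise RuntimeError("Row is read-only")

r = Row(name="Alice", age=1)
print(r.name)   # → Alice
try:
    r.name = "Bob"
except RuntimeError as e:
    print(e)    # → Row is read-only
```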
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #8009 from davies/readonly_row and squashes the following commits:
      
      8722f3f [Davies Liu] add tests
      05a3d36 [Davies Liu] Row should be read-only
      ac507a03
  13. Jul 30, 2015
    • [SPARK-9116] [SQL] [PYSPARK] support Python only UDT in __main__ · e044705b
      Davies Liu authored
      This also lets us create a Python UDT without having a Scala counterpart, which is important for Python users.
      
      cc mengxr JoshRosen
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #7453 from davies/class_in_main and squashes the following commits:
      
      4dfd5e1 [Davies Liu] add tests for Python and Scala UDT
      793d9b2 [Davies Liu] Merge branch 'master' of github.com:apache/spark into class_in_main
      dc65f19 [Davies Liu] address comment
      a9a3c40 [Davies Liu] Merge branch 'master' of github.com:apache/spark into class_in_main
      a86e1fc [Davies Liu] fix serialization
      ad528ba [Davies Liu] Merge branch 'master' of github.com:apache/spark into class_in_main
      63f52ef [Davies Liu] fix pylint check
      655b8a9 [Davies Liu] Merge branch 'master' of github.com:apache/spark into class_in_main
      316a394 [Davies Liu] support Python UDT with UTF
      0bcb3ef [Davies Liu] fix bug in mllib
      de986d6 [Davies Liu] fix test
      83d65ac [Davies Liu] fix bug in StructType
      55bb86e [Davies Liu] support Python UDT in __main__ (without Scala one)
      e044705b
  14. Jul 25, 2015
    • [Spark-8668][SQL] Adding expr to functions · 723db13e
      JD authored
      Author: JD <jd@csh.rit.edu>
      Author: Joseph Batchik <josephbatchik@gmail.com>
      
      Closes #7606 from JDrit/expr and squashes the following commits:
      
      ad7f607 [Joseph Batchik] fixing python linter error
      9d6daea [Joseph Batchik] removed order by per @rxin's comment
      707d5c6 [Joseph Batchik] Added expr to fuctions.py
      79df83c [JD] added example to the docs
      b89eec8 [JD] moved function up as per @rxin's comment
      4960909 [JD] updated per @JoshRosen's comment
      2cb329c [JD] updated per @rxin's comment
      9a9ad0c [JD] removing unused import
      6dc26d0 [JD] removed split
      7f2222c [JD] Adding expr function as per SPARK-8668
      723db13e
  18. Jul 09, 2015
    • [SPARK-7902] [SPARK-6289] [SPARK-8685] [SQL] [PYSPARK] Refactor of... · c9e2ef52
      Davies Liu authored
      [SPARK-7902] [SPARK-6289] [SPARK-8685] [SQL] [PYSPARK] Refactor of serialization for Python DataFrame
      
      This PR fixes the long-standing issue of serialization between Python RDDs and DataFrames. It switches to a customized Pickler for InternalRow to enable customized unpickling (type conversion, especially for UDTs); now we can support UDTs in UDFs, cc mengxr.
      
      There is no generated `Row` anymore.
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #7301 from davies/sql_ser and squashes the following commits:
      
      81bef71 [Davies Liu] address comments
      e9217bd [Davies Liu] add regression tests
      db34167 [Davies Liu] Refactor of serialization for Python DataFrame
      c9e2ef52
  19. Jul 08, 2015
    • [SPARK-8450] [SQL] [PYSPARK] cleanup type converter for Python DataFrame · 74d8d3d9
      Davies Liu authored
      This PR fixes the converter for Python DataFrame, especially for DecimalType
      
      Closes #7106
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #7131 from davies/decimal_python and squashes the following commits:
      
      4d3c234 [Davies Liu] Merge branch 'master' of github.com:apache/spark into decimal_python
      20531d6 [Davies Liu] Merge branch 'master' of github.com:apache/spark into decimal_python
      7d73168 [Davies Liu] fix conflit
      6cdd86a [Davies Liu] Merge branch 'master' of github.com:apache/spark into decimal_python
      7104e97 [Davies Liu] improve type infer
      9cd5a21 [Davies Liu] run python tests with SPARK_PREPEND_CLASSES
      829a05b [Davies Liu] fix UDT in python
      c99e8c5 [Davies Liu] fix mima
      c46814a [Davies Liu] convert decimal for Python DataFrames
      74d8d3d9
  20. Jul 01, 2015
    • [SPARK-8766] support non-ascii character in column names · f958f27e
      Davies Liu authored
      Use UTF-8 to encode column names in Python 2; otherwise they may fail to encode with the default encoding ('ascii').
      
      This PR also fixes a bug that occurred when a Java exception had no error message.
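The underlying failure is the implicit ASCII encode that Python 2 applies to `unicode` column names; the codec behavior itself can be reproduced in modern Python:

```python
name = u"r\u00e9sum\u00e9"  # "résumé"
try:
    name.encode("ascii")    # Python 2's implicit default codec; fails on non-ascii
except UnicodeEncodeError as e:
    print("ascii fails:", e.reason)

encoded = name.encode("utf-8")  # the fix: encode explicitly as UTF-8
print(encoded.decode("utf-8"))  # round-trips cleanly
```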
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #7165 from davies/non_ascii and squashes the following commits:
      
      02cb61a [Davies Liu] fix tests
      3b09d31 [Davies Liu] add encoding in header
      867754a [Davies Liu] support non-ascii character in column names
      f958f27e
  21. Jun 30, 2015
    • [SPARK-8738] [SQL] [PYSPARK] capture SQL AnalysisException in Python API · 58ee2a2e
      Davies Liu authored
      Capture the AnalysisException in SQL, hide the long Java stack trace, and show only the error message.
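A sketch of the capture mechanism as I understand it (illustrative only; the real code lives in `pyspark.sql.utils` and catches py4j's `Py4JJavaError`, for which `RuntimeError` stands in here):

```python
class AnalysisException(Exception):
    """Short Python-side mirror of the JVM AnalysisException."""

def capture_sql_exception(f):
    def deco(*args, **kwargs):
        try:
            return f(*args, **kwargs)
        except RuntimeError as e:  # stand-in for py4j's Py4JJavaError
            msg = str(e)
            if "AnalysisException" in msg:
                # surface only the message, hiding the long Java stack trace
                raise AnalysisException(msg.split(":", 1)[1].strip())
            raise
    return deco

@capture_sql_exception
def analyze():
    raise RuntimeError("AnalysisException: cannot resolve 'col_a'")

try:
    analyze()
except AnalysisException as e:
    print(e)  # → cannot resolve 'col_a'
```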
      
      cc rxin
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #7135 from davies/ananylis and squashes the following commits:
      
      dad7ae7 [Davies Liu] add comment
      ec0c0e8 [Davies Liu] Update utils.py
      cdd7edd [Davies Liu] add doc
      7b044c2 [Davies Liu] fix python 3
      f84d3bd [Davies Liu] capture SQL AnalysisException in Python API
      58ee2a2e
  22. Jun 29, 2015
    • [SPARK-8056][SQL] Design an easier way to construct schema for both Scala and Python · f6fc254e
      Ilya Ganelin authored
      I've added functionality to create new StructType similar to how we add parameters to a new SparkContext.
      
      I've also added tests for this type of creation.
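The "SparkContext-parameter-style" construction amounts to a chainable `add` method; a toy sketch with fields simplified to tuples (not Spark's real `StructField` class):

```python
class StructType:
    def __init__(self, fields=None):
        self.fields = list(fields or [])

    def add(self, name, data_type, nullable=True):
        self.fields.append((name, data_type, nullable))
        return self  # returning self is what enables chained construction

schema = StructType().add("f1", "string").add("f2", "integer", nullable=False)
print([f[0] for f in schema.fields])  # → ['f1', 'f2']
```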
      
      Author: Ilya Ganelin <ilya.ganelin@capitalone.com>
      
      Closes #6686 from ilganeli/SPARK-8056B and squashes the following commits:
      
      27c1de1 [Ilya Ganelin] Rename
      467d836 [Ilya Ganelin] Removed from_string in favor of _parse_Datatype_json_value
      5fef5a4 [Ilya Ganelin] Updates for type parsing
      4085489 [Ilya Ganelin] Style errors
      3670cf5 [Ilya Ganelin] added string to DataType conversion
      8109e00 [Ilya Ganelin] Fixed error in tests
      41ab686 [Ilya Ganelin] Fixed style errors
      e7ba7e0 [Ilya Ganelin] Moved some python tests to tests.py. Added cleaner handling of null data type and added test for correctness of input format
      15868fa [Ilya Ganelin] Fixed python errors
      b79b992 [Ilya Ganelin] Merge remote-tracking branch 'upstream/master' into SPARK-8056B
      a3369fc [Ilya Ganelin] Fixing space errors
      e240040 [Ilya Ganelin] Style
      bab7823 [Ilya Ganelin] Constructor error
      73d4677 [Ilya Ganelin] Style
      4ed00d9 [Ilya Ganelin] Fixed default arg
      67df57a [Ilya Ganelin] Removed Foo
      04cbf0c [Ilya Ganelin] Added comments for single object
      0484d7a [Ilya Ganelin] Restored second method
      6aeb740 [Ilya Ganelin] Style
      689e54d [Ilya Ganelin] Style
      f497e9e [Ilya Ganelin] Got rid of old code
      e3c7a88 [Ilya Ganelin] Fixed doctest failure
      a62ccde [Ilya Ganelin] Style
      966ac06 [Ilya Ganelin] style checks
      dabb7e6 [Ilya Ganelin] Added Python tests
      a3f4152 [Ilya Ganelin] added python bindings and better comments
      e6e536c [Ilya Ganelin] Added extra space
      7529a2e [Ilya Ganelin] Fixed formatting
      d388f86 [Ilya Ganelin] Fixed small bug
      c4e3bf5 [Ilya Ganelin] Reverted to using parse. Updated parse to support long
      d7634b6 [Ilya Ganelin] Reverted to fromString to properly support types
      22c39d5 [Ilya Ganelin] replaced FromString with DataTypeParser.parse. Replaced empty constructor initializing a null to have it instead create a new array to allow appends to it.
      faca398 [Ilya Ganelin] [SPARK-8056] Replaced default argument usage. Updated usage and code for DataType.fromString
      1acf76e [Ilya Ganelin] Scala style
      e31c674 [Ilya Ganelin] Fixed bug in test
      8dc0795 [Ilya Ganelin] Added tests for creation of StructType object with new methods
      fdf7e9f [Ilya Ganelin] [SPARK-8056] Created add methods to facilitate building new StructType objects.
      f6fc254e
    • [SPARK-8355] [SQL] Python DataFrameReader/Writer should mirror Scala · ac2e17b0
      Cheolsoo Park authored
      I compared the PySpark DataFrameReader/Writer against the Scala ones. The `option` function is missing from both the reader and the writer, but the rest seems to match.
      
      I added `Option` to reader and writer and updated the `pyspark-sql` test.
      
      Author: Cheolsoo Park <cheolsoop@netflix.com>
      
      Closes #7078 from piaozhexiu/SPARK-8355 and squashes the following commits:
      
      c63d419 [Cheolsoo Park] Fix version
      524e0aa [Cheolsoo Park] Add option function to df reader and writer
      ac2e17b0
  23. Jun 23, 2015
    • [SPARK-8573] [SPARK-8568] [SQL] [PYSPARK] raise Exception if column is used in boolean expression · 7fb5ae50
      Davies Liu authored
      It's a common mistake for users to put a Column in a boolean expression (together with `and`, `or`), which does not work as expected. We should raise an exception in that case and suggest using `&`, `|` instead.
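The reason an exception is the only option: Python's `and`/`or`/`not` cannot be overloaded (they call `bool()` on the operand), while `&`/`|`/`~` map to overloadable dunder methods. A pure-Python sketch of the guard (mirroring the approach, not Spark's exact code or message):

```python
class Column:
    def __init__(self, expr):
        self.expr = expr

    def __and__(self, other):
        # the `&` operator is overloadable and builds an expression lazily
        return Column("(%s AND %s)" % (self.expr, other.expr))

    def __bool__(self):
        # called by `and`, `or`, `not`, `if`; evaluation cannot be deferred
        raise ValueError("Cannot convert column into bool: please use '&' for 'and', "
                         "'|' for 'or', '~' for 'not' when building expressions")

a, b = Column("a > 1"), Column("b < 2")
print((a & b).expr)  # → (a > 1 AND b < 2)
try:
    a and b          # `and` evaluates bool(a) first, so the guard fires
except ValueError:
    print("raised")
```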
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #6961 from davies/column_bool and squashes the following commits:
      
      9f19beb [Davies Liu] update message
      af74bd6 [Davies Liu] fix tests
      07dff84 [Davies Liu] address comments, fix tests
      f70c08e [Davies Liu] raise Exception if column is used in booelan expression
      7fb5ae50
  24. Jun 22, 2015
    • [SPARK-8532] [SQL] In Python's DataFrameWriter,... · 5ab9fcfb
      Yin Huai authored
      [SPARK-8532] [SQL] In Python's DataFrameWriter, save/saveAsTable/json/parquet/jdbc always override mode
      
      https://issues.apache.org/jira/browse/SPARK-8532
      
      This PR has two changes. First, it fixes the bug that save actions (i.e. `save/saveAsTable/json/parquet/jdbc`) always override mode. Second, it adds input argument `partitionBy` to `save/saveAsTable/parquet`.
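The first fix follows a common builder pattern: default the keyword to `None` and forward it only when the caller set it, so `save()` no longer clobbers a mode chosen earlier via `writer.mode(...)`. A toy sketch (not Spark's actual classes):

```python
class Writer:
    def __init__(self):
        self._mode = "error"  # matches the JVM-side default

    def mode(self, m):
        self._mode = m
        return self

    def save(self, path, mode=None):
        if mode is not None:  # only override when passed explicitly
            self._mode = mode
        return "save(%s, mode=%s)" % (path, self._mode)

w = Writer().mode("overwrite")
print(w.save("/tmp/t"))                 # → save(/tmp/t, mode=overwrite)
print(w.save("/tmp/t", mode="append"))  # → save(/tmp/t, mode=append)
```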
      
      Author: Yin Huai <yhuai@databricks.com>
      
      Closes #6937 from yhuai/SPARK-8532 and squashes the following commits:
      
      f972d5d [Yin Huai] davies's comment.
      d37abd2 [Yin Huai] style.
      d21290a [Yin Huai] Python doc.
      889eb25 [Yin Huai] Minor refactoring and add partitionBy to save, saveAsTable, and parquet.
      7fbc24b [Yin Huai] Use None instead of "error" as the default value of mode since JVM-side already uses "error" as the default value.
      d696dff [Yin Huai] Python style.
      88eb6c4 [Yin Huai] If mode is "error", do not call mode method.
      c40c461 [Yin Huai] Regression test.
      5ab9fcfb
  25. Jun 11, 2015
    • [SPARK-6411] [SQL] [PySpark] support date/datetime with timezone in Python · 424b0075
      Davies Liu authored
      Spark SQL does not support timezones, and Pyrolite does not handle timezones well. This patch converts datetimes into POSIX timestamps (avoiding timezone confusion), which is what SQL uses. If a datetime object has no timezone, it is treated as local time.
      
      The timezone in an RDD will be lost after one round trip; all datetimes coming from SQL will be in local time.
      
      Because of Pyrolite, datetimes from SQL have only millisecond precision.
      
      This PR also drops the timezone in dates, converting them to the number of days since the epoch (as used in SQL).
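The date side of the conversion is plain ordinal arithmetic; a self-contained sketch of the days-since-epoch mapping described above:

```python
import datetime

EPOCH = datetime.date(1970, 1, 1)

def date_to_internal(d):
    # a date carries no timezone: it is just a count of days since the epoch
    return (d - EPOCH).days

def internal_to_date(days):
    return EPOCH + datetime.timedelta(days=days)

print(date_to_internal(datetime.date(1970, 1, 2)))  # → 1
print(internal_to_date(1))                          # → 1970-01-02
```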
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #6250 from davies/tzone and squashes the following commits:
      
      44d8497 [Davies Liu] add timezone support for DateType
      99d9d9c [Davies Liu] use int for timestamp
      10aa7ca [Davies Liu] Merge branch 'master' of github.com:apache/spark into tzone
      6a29aa4 [Davies Liu] support datetime with timezone
      424b0075
  26. Jun 03, 2015
    • [SPARK-7980] [SQL] Support SQLContext.range(end) · d053a31b
      animesh authored
      1. range() overloaded in SQLContext.scala
      2. range() modified in python sql context.py
      3. Tests added accordingly in DataFrameSuite.scala and python sql tests.py
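The overload mirrors Python's builtin `range`: a single argument is interpreted as `end` with an implicit `start=0`. A sketch of the dispatch (plain Python, not the actual SQLContext method):

```python
def sql_range(start, end=None, step=1):
    # single-argument form: sql_range(end) means sql_range(0, end)
    if end is None:
        start, end = 0, start
    return list(range(start, end, step))

print(sql_range(3))        # → [0, 1, 2]
print(sql_range(2, 6, 2))  # → [2, 4]
```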
      
      Author: animesh <animesh@apache.spark>
      
      Closes #6609 from animeshbaranawal/SPARK-7980 and squashes the following commits:
      
      935899c [animesh] SPARK-7980:python+scala changes
      d053a31b
    • [SPARK-8060] Improve DataFrame Python test coverage and documentation. · ce320cb2
      Reynold Xin authored
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #6601 from rxin/python-read-write-test-and-doc and squashes the following commits:
      
      baa8ad5 [Reynold Xin] Code review feedback.
      f081d47 [Reynold Xin] More documentation updates.
      c9902fa [Reynold Xin] [SPARK-8060] Improve DataFrame Python reader/writer interface doc and testing.
      ce320cb2
  28. May 23, 2015
    • [SPARK-7322, SPARK-7836, SPARK-7822][SQL] DataFrame window function related updates · efe3bfdf
      Davies Liu authored
      1. ntile should take an integer as parameter.
      2. Added Python API (based on #6364)
      3. Update documentation of various DataFrame Python functions.
      
      Author: Davies Liu <davies@databricks.com>
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #6374 from rxin/window-final and squashes the following commits:
      
      69004c7 [Reynold Xin] Style fix.
      288cea9 [Reynold Xin] Update documentaiton.
      7cb8985 [Reynold Xin] Merge pull request #6364 from davies/window
      66092b4 [Davies Liu] update docs
      ed73cb4 [Reynold Xin] [SPARK-7322][SQL] Improve DataFrame window function documentation.
      ef55132 [Davies Liu] Merge branch 'master' of github.com:apache/spark into window4
      8936ade [Davies Liu] fix maxint in python 3
      2649358 [Davies Liu] update docs
      778e2c0 [Davies Liu] SPARK-7836 and SPARK-7822: Python API of window functions
      efe3bfdf
  29. May 19, 2015
    • [SPARK-7738] [SQL] [PySpark] add reader and writer API in Python · 4de74d26
      Davies Liu authored
      cc rxin, please take a quick look, I'm working on tests.
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #6238 from davies/readwrite and squashes the following commits:
      
      c7200eb [Davies Liu] update tests
      9cbf01b [Davies Liu] Merge branch 'master' of github.com:apache/spark into readwrite
      f0c5a04 [Davies Liu] use sqlContext.read.load
      5f68bc8 [Davies Liu] update tests
      6437e9a [Davies Liu] Merge branch 'master' of github.com:apache/spark into readwrite
      bcc6668 [Davies Liu] add reader amd writer API in Python
      4de74d26
  30. May 18, 2015
    • [SPARK-7150] SparkContext.range() and SQLContext.range() · c2437de1
      Daoyuan Wang authored
      This PR is based on #6081, thanks adrian-wang.
      
      Closes #6081
      
      Author: Daoyuan Wang <daoyuan.wang@intel.com>
      Author: Davies Liu <davies@databricks.com>
      
      Closes #6230 from davies/range and squashes the following commits:
      
      d3ce5fe [Davies Liu] add tests
      789eda5 [Davies Liu] add range() in Python
      4590208 [Davies Liu] Merge commit 'refs/pull/6081/head' of github.com:apache/spark into range
      cbf5200 [Daoyuan Wang] let's add python support in a separate PR
      f45e3b2 [Daoyuan Wang] remove redundant toLong
      617da76 [Daoyuan Wang] fix safe marge for corner cases
      867c417 [Daoyuan Wang] fix
      13dbe84 [Daoyuan Wang] update
      bd998ba [Daoyuan Wang] update comments
      d3a0c1b [Daoyuan Wang] add range api()
      c2437de1
  31. May 14, 2015
    • [SPARK-7548] [SQL] Add explode function for DataFrames · 6d0633e3
      Michael Armbrust authored
      Add an `explode` function for dataframes and modify the analyzer so that single table generating functions can be present in a select clause along with other expressions.   There are currently the following restrictions:
       - only top level TGFs are allowed (i.e. no `select(explode('list) + 1)`)
       - only one may be present in a single select to avoid potentially confusing implicit Cartesian products.
      
      TODO:
       - [ ] Python
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #6107 from marmbrus/explodeFunction and squashes the following commits:
      
      7ee2c87 [Michael Armbrust] whitespace
      6f80ba3 [Michael Armbrust] Update dataframe.py
      c176c89 [Michael Armbrust] Merge remote-tracking branch 'origin/master' into explodeFunction
      81b5da3 [Michael Armbrust] style
      d3faa05 [Michael Armbrust] fix self join case
      f9e1e3e [Michael Armbrust] fix python, add since
      4f0d0a9 [Michael Armbrust] Merge remote-tracking branch 'origin/master' into explodeFunction
      e710fe4 [Michael Armbrust] add java and python
      52ca0dc [Michael Armbrust] [SPARK-7548][SQL] Add explode function for dataframes.
      6d0633e3
  32. May 12, 2015
    • [SPARK-6876] [PySpark] [SQL] add DataFrame na.replace in pyspark · d86ce845
      Daoyuan Wang authored
      Author: Daoyuan Wang <daoyuan.wang@intel.com>
      
      Closes #6003 from adrian-wang/pynareplace and squashes the following commits:
      
      672efba [Daoyuan Wang] remove py2.7 feature
      4a148f7 [Daoyuan Wang] to_replace support dict, value support single value, and add full tests
      9e232e7 [Daoyuan Wang] rename scala map
      af0268a [Daoyuan Wang] remove na
      63ac579 [Daoyuan Wang] add na.replace in pyspark
      d86ce845
  33. May 08, 2015
    • [SPARK-7133] [SQL] Implement struct, array, and map field accessor · 2d05f325
      Wenchen Fan authored
      It's the first step: generalize UnresolvedGetField to support map, struct, and array.
      TODO: add `apply` in Scala and `__getitem__` in Python, and unify the `getItem` and `getField` methods into one single API (or should we keep both for compatibility?).
      
      Author: Wenchen Fan <cloud0fan@outlook.com>
      
      Closes #5744 from cloud-fan/generalize and squashes the following commits:
      
      715c589 [Wenchen Fan] address comments
      7ea5b31 [Wenchen Fan] fix python test
      4f0833a [Wenchen Fan] add python test
      f515d69 [Wenchen Fan] add apply method and test cases
      8df6199 [Wenchen Fan] fix python test
      239730c [Wenchen Fan] fix test compile
      2a70526 [Wenchen Fan] use _bin_op in dataframe.py
      6bf72bc [Wenchen Fan] address comments
      3f880c3 [Wenchen Fan] add java doc
      ab35ab5 [Wenchen Fan] fix python test
      b5961a9 [Wenchen Fan] fix style
      c9d85f5 [Wenchen Fan] generalize UnresolvedGetField to support all map, struct, and array
      2d05f325
  34. May 07, 2015
    • [SPARK-7295][SQL] bitwise operations for DataFrame DSL · fa8fddff
      Shiti authored
      Author: Shiti <ssaxena.ece@gmail.com>
      
      Closes #5867 from Shiti/spark-7295 and squashes the following commits:
      
      71a9913 [Shiti] implementation for bitwise and,or, not and xor on Column with tests and docs
      fa8fddff