  1. Sep 23, 2016
    • Holden Karau's avatar
      [SPARK-16861][PYSPARK][CORE] Refactor PySpark accumulator API on top of Accumulator V2 · 90d57542
      Holden Karau authored
      ## What changes were proposed in this pull request?
      
Move the internals of the PySpark accumulator API off the old deprecated API and onto the new accumulator API.
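
The user-facing accumulator API is unchanged by this refactoring; purely for context, a minimal usage sketch (assuming a local `SparkContext`) looks like this:

```python
from pyspark import SparkContext

sc = SparkContext("local", "accumulator-example")

# Driver-side accumulator; tasks can only add to it, the driver reads it.
acc = sc.accumulator(0)

sc.parallelize(range(10)).foreach(lambda x: acc.add(x))

print(acc.value)  # 45
```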
      
      ## How was this patch tested?
      
      The existing PySpark accumulator tests (both unit tests and doc tests at the start of accumulator.py).
      
      Author: Holden Karau <holden@us.ibm.com>
      
      Closes #14467 from holdenk/SPARK-16861-refactor-pyspark-accumulator-api.
      90d57542
  2. Sep 22, 2016
  3. Sep 21, 2016
  4. Sep 20, 2016
    • Adrian Petrescu's avatar
      [SPARK-17437] Add uiWebUrl to JavaSparkContext and pyspark.SparkContext · 4a426ff8
      Adrian Petrescu authored
      ## What changes were proposed in this pull request?
      
      The Scala version of `SparkContext` has a handy field called `uiWebUrl` that tells you which URL the SparkUI spawned by that instance lives at. This is often very useful because the value for `spark.ui.port` in the config is only a suggestion; if that port number is taken by another Spark instance on the same machine, Spark will just keep incrementing the port until it finds a free one. So, on a machine with a lot of running PySpark instances, you often have to start trying all of them one-by-one until you find your application name.
      
      Scala users have a way around this with `uiWebUrl` but Java and Python users do not. This pull request fixes this in the most straightforward way possible, simply propagating this field through the `JavaSparkContext` and into pyspark through the Java gateway.
      
      Please let me know if any additional documentation/testing is needed.
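
For illustration only, a minimal sketch of reading the new field from PySpark (assuming a running local `SparkContext`):

```python
from pyspark import SparkContext

sc = SparkContext("local", "ui-url-example")

# The URL the SparkUI actually bound to, even if spark.ui.port was taken
# and Spark had to increment to a free port.
print(sc.uiWebUrl)  # e.g. http://192.168.0.10:4041
```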
      
      ## How was this patch tested?
      
Existing tests were run to make sure there were no regressions, and a binary distribution was created and tested manually for the correct value of `sc.uiWebUrl` in a variety of circumstances.
      
      Author: Adrian Petrescu <apetresc@gmail.com>
      
      Closes #15000 from apetresc/pyspark-uiweburl.
      4a426ff8
  5. Sep 19, 2016
    • Davies Liu's avatar
      [SPARK-17100] [SQL] fix Python udf in filter on top of outer join · d8104158
      Davies Liu authored
      ## What changes were proposed in this pull request?
      
In the optimizer, we try to evaluate the condition to see whether it is nullable or not, but some expressions are not evaluable, so we should check for that before evaluating them.
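
A hedged sketch of the kind of query affected, not the actual regression test (assumes an existing SparkSession `spark`; names are illustrative):

```python
from pyspark.sql.functions import col, udf
from pyspark.sql.types import BooleanType

left = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "v"])
right = spark.createDataFrame([(1, "x")], ["id", "w"])

# A Python UDF used in a filter sitting on top of an outer join; the UDF is
# not evaluable by the optimizer when it checks the condition's nullability.
is_short = udf(lambda s: s is not None and len(s) < 2, BooleanType())

joined = left.join(right, "id", "left_outer")
joined.filter(is_short(col("w"))).show()
```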
      
      ## How was this patch tested?
      
      Added regression tests.
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #15103 from davies/udf_join.
      d8104158
  6. Sep 18, 2016
    • Liwei Lin's avatar
      [SPARK-16462][SPARK-16460][SPARK-15144][SQL] Make CSV cast null values properly · 1dbb725d
      Liwei Lin authored
      ## Problem
      
      CSV in Spark 2.0.0:
- does not read null values back correctly for certain data types such as `Boolean`, `TimestampType`, `DateType` -- this is a regression compared to 1.6;
      - does not read empty values (specified by `options.nullValue`) as `null`s for `StringType` -- this is compatible with 1.6 but leads to problems like SPARK-16903.
      
      ## What changes were proposed in this pull request?
      
      This patch makes changes to read all empty values back as `null`s.
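
For illustration, a minimal read sketch (assumes an existing SparkSession `spark`; the file path is hypothetical):

```python
# Empty fields (the nullValue option defaults to "") now come back as nulls
# for all supported types, including StringType.
df = (spark.read
      .option("header", "true")
      .option("nullValue", "")
      .option("inferSchema", "true")
      .csv("/tmp/people.csv"))  # hypothetical path
df.show()
```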
      
      ## How was this patch tested?
      
      New test cases.
      
      Author: Liwei Lin <lwlin7@gmail.com>
      
      Closes #14118 from lw-lin/csv-cast-null.
      1dbb725d
  7. Sep 17, 2016
    • William Benton's avatar
      [SPARK-17548][MLLIB] Word2VecModel.findSynonyms no longer spuriously rejects... · 25cbbe6c
      William Benton authored
      [SPARK-17548][MLLIB] Word2VecModel.findSynonyms no longer spuriously rejects the best match when invoked with a vector
      
      ## What changes were proposed in this pull request?
      
      This pull request changes the behavior of `Word2VecModel.findSynonyms` so that it will not spuriously reject the best match when invoked with a vector that does not correspond to a word in the model's vocabulary.  Instead of blindly discarding the best match, the changed implementation discards a match that corresponds to the query word (in cases where `findSynonyms` is invoked with a word) or that has an identical angle to the query vector.
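
A rough PySpark sketch of the vector-based call path this fixes (toy corpus; assumes an existing SparkContext `sc`):

```python
from pyspark.mllib.feature import Word2Vec

corpus = sc.parallelize([["hello", "world"], ["hello", "spark"]])
model = Word2Vec().setVectorSize(10).setMinCount(1).setSeed(42).fit(corpus)

# Query with a vector rather than a word: the best match should no longer be
# discarded just because the query vector is not itself a vocabulary word.
query = model.transform("hello")
print(model.findSynonyms(query, 2))
```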
      
      ## How was this patch tested?
      
      I added a test to `Word2VecSuite` to ensure that the word with the most similar vector from a supplied vector would not be spuriously rejected.
      
      Author: William Benton <willb@redhat.com>
      
      Closes #15105 from willb/fix/findSynonyms.
      25cbbe6c
  8. Sep 14, 2016
    • Eric Liang's avatar
      [SPARK-17472] [PYSPARK] Better error message for serialization failures of large objects in Python · dbfc7aa4
      Eric Liang authored
      ## What changes were proposed in this pull request?
      
      For large objects, pickle does not raise useful error messages. However, we can wrap them to be slightly more user friendly:
      
      Example 1:
      ```
      def run():
        import numpy.random as nr
        b = nr.bytes(8 * 1000000000)
        sc.parallelize(range(1000), 1000).map(lambda x: len(b)).count()
      
      run()
      ```
      
      Before:
      ```
      error: 'i' format requires -2147483648 <= number <= 2147483647
      ```
      
      After:
      ```
      pickle.PicklingError: Object too large to serialize: 'i' format requires -2147483648 <= number <= 2147483647
      ```
      
      Example 2:
      ```
      def run():
        import numpy.random as nr
        b = sc.broadcast(nr.bytes(8 * 1000000000))
        sc.parallelize(range(1000), 1000).map(lambda x: len(b.value)).count()
      
      run()
      ```
      
      Before:
      ```
      SystemError: error return without exception set
      ```
      
      After:
      ```
      cPickle.PicklingError: Could not serialize broadcast: SystemError: error return without exception set
      ```
      
      ## How was this patch tested?
      
      Manually tried out these cases
      
      cc davies
      
      Author: Eric Liang <ekl@databricks.com>
      
      Closes #15026 from ericl/spark-17472.
      dbfc7aa4
    • Josh Rosen's avatar
      [SPARK-17514] df.take(1) and df.limit(1).collect() should perform the same in Python · 6d06ff6f
      Josh Rosen authored
      ## What changes were proposed in this pull request?
      
      In PySpark, `df.take(1)` runs a single-stage job which computes only one partition of the DataFrame, while `df.limit(1).collect()` computes all partitions and runs a two-stage job. This difference in performance is confusing.
      
      The reason why `limit(1).collect()` is so much slower is that `collect()` internally maps to `df.rdd.<some-pyspark-conversions>.toLocalIterator`, which causes Spark SQL to build a query where a global limit appears in the middle of the plan; this, in turn, ends up being executed inefficiently because limits in the middle of plans are now implemented by repartitioning to a single task rather than by running a `take()` job on the driver (this was done in #7334, a patch which was a prerequisite to allowing partition-local limits to be pushed beneath unions, etc.).
      
      In order to fix this performance problem I think that we should generalize the fix from SPARK-10731 / #8876 so that `DataFrame.collect()` also delegates to the Scala implementation and shares the same performance properties. This patch modifies `DataFrame.collect()` to first collect all results to the driver and then pass them to Python, allowing this query to be planned using Spark's `CollectLimit` optimizations.
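
For context, a minimal sketch of the two forms being compared (assumes an existing SparkSession `spark`):

```python
df = spark.range(0, 1000000, 1, 100)  # 100 partitions

# After this change both of these are planned with the CollectLimit
# optimization, so neither needs to compute every partition.
print(df.take(1))
print(df.limit(1).collect())
```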
      
      ## How was this patch tested?
      
      Added a regression test in `sql/tests.py` which asserts that the expected number of jobs, stages, and tasks are run for both queries.
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #15068 from JoshRosen/pyspark-collect-limit.
      6d06ff6f
    • Sami Jaktholm's avatar
      [SPARK-17525][PYTHON] Remove SparkContext.clearFiles() from the PySpark API as... · b5bfcddb
      Sami Jaktholm authored
      [SPARK-17525][PYTHON] Remove SparkContext.clearFiles() from the PySpark API as it was removed from the Scala API prior to Spark 2.0.0
      
      ## What changes were proposed in this pull request?
      
      This pull request removes the SparkContext.clearFiles() method from the PySpark API as the method was removed from the Scala API in 8ce645d4. Using that method in PySpark leads to an exception as PySpark tries to call the non-existent method on the JVM side.
      
      ## How was this patch tested?
      
      Existing tests (though none of them tested this particular method).
      
      Author: Sami Jaktholm <sjakthol@outlook.com>
      
      Closes #15081 from sjakthol/pyspark-sc-clearfiles.
      b5bfcddb
  9. Sep 12, 2016
    • Davies Liu's avatar
      [SPARK-17474] [SQL] fix python udf in TakeOrderedAndProjectExec · a91ab705
      Davies Liu authored
      ## What changes were proposed in this pull request?
      
When there is a Python UDF in the Project between Sort and Limit, it is collected into TakeOrderedAndProjectExec, and ExtractPythonUDFs fails to pull the Python UDFs out because QueryPlan.expressions does not include expressions inside Option[Seq[Expression]].
      
Ideally, we should fix `QueryPlan.expressions`, but I tried that with no luck (it always runs into an infinite loop). In this PR, I changed TakeOrderedAndProjectExec to not use Option[Seq[Expression]] as a workaround. cc JoshRosen
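
A hedged sketch of the query shape that is planned as TakeOrderedAndProjectExec (assumes an existing SparkSession `spark`; names are illustrative):

```python
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

df = spark.createDataFrame([(i, "v%d" % i) for i in range(100)], ["id", "v"])

label = udf(lambda s: "label-" + s, StringType())

# Python UDF in the Project between a Sort and a Limit: ExtractPythonUDFs
# must be able to pull it out of TakeOrderedAndProjectExec.
df.orderBy("id").select(label(df["v"]).alias("labelled")).limit(5).show()
```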
      
      ## How was this patch tested?
      
      Added regression test.
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #15030 from davies/all_expr.
      a91ab705
  10. Sep 11, 2016
  11. Sep 06, 2016
    • Yanbo Liang's avatar
      [MINOR][ML] Correct weights doc of MultilayerPerceptronClassificationModel. · 39d538dd
      Yanbo Liang authored
      ## What changes were proposed in this pull request?
```weights``` of ```MultilayerPerceptronClassificationModel``` should be the output weights of the layers rather than the initial weights; this PR corrects it.
      
      ## How was this patch tested?
      Doc change.
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #14967 from yanboliang/mlp-weights.
      39d538dd
  12. Sep 04, 2016
    • Sean Owen's avatar
      [SPARK-17311][MLLIB] Standardize Python-Java MLlib API to accept optional long seeds in all cases · cdeb97a8
      Sean Owen authored
      ## What changes were proposed in this pull request?
      
      Related to https://github.com/apache/spark/pull/14524 -- just the 'fix' rather than a behavior change.
      
      - PythonMLlibAPI methods that take a seed now always take a `java.lang.Long` consistently, allowing the Python API to specify "no seed"
- .mllib's Word2VecModel was something of an odd one out in that it picked its own random seed. It now defaults to None, meaning the Scala implementation picks the seed
- BisectingKMeansModel arguably should not hard-code a seed, for consistency with .mllib, but I have left it as is
      
      ## How was this patch tested?
      
      Existing tests
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #14826 from srowen/SPARK-16832.2.
      cdeb97a8
  13. Sep 02, 2016
    • Srinath Shankar's avatar
      [SPARK-17298][SQL] Require explicit CROSS join for cartesian products · e6132a6c
      Srinath Shankar authored
      ## What changes were proposed in this pull request?
      
      Require the use of CROSS join syntax in SQL (and a new crossJoin
      DataFrame API) to specify explicit cartesian products between relations.
      By cartesian product we mean a join between relations R and S where
      there is no join condition involving columns from both R and S.
      
      If a cartesian product is detected in the absence of an explicit CROSS
      join, an error must be thrown. Turning on the
      "spark.sql.crossJoin.enabled" configuration flag will disable this check
      and allow cartesian products without an explicit CROSS join.
      
      The new crossJoin DataFrame API must be used to specify explicit cross
joins. The existing join(DataFrame) method will produce an INNER join
that will require a subsequent join condition;
that is, df1.join(df2) is equivalent to select * from df1, df2.
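
For illustration, a hedged sketch in PySpark of the explicit forms described above (assumes an existing SparkSession `spark`):

```python
df1 = spark.createDataFrame([(1,), (2,)], ["a"])
df2 = spark.createDataFrame([("x",), ("y",)], ["b"])

# Explicit cartesian product via the new DataFrame API.
df1.crossJoin(df2).show()

# Equivalent SQL using explicit CROSS JOIN syntax.
df1.createOrReplaceTempView("t1")
df2.createOrReplaceTempView("t2")
spark.sql("SELECT * FROM t1 CROSS JOIN t2").show()

# df1.join(df2) with no join condition now fails the cartesian-product check
# unless spark.sql.crossJoin.enabled is set to true.
```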
      
      ## How was this patch tested?
      
      Added cross-join.sql to the SQLQueryTestSuite to test the check for cartesian products. Added a couple of tests to the DataFrameJoinSuite to test the crossJoin API. Modified various other test suites to explicitly specify a cross join where an INNER join or a comma-separated list was previously used.
      
      Author: Srinath Shankar <srinath@databricks.com>
      
      Closes #14866 from srinathshankar/crossjoin.
      e6132a6c
    • Jeff Zhang's avatar
      [SPARK-17261] [PYSPARK] Using HiveContext after re-creating SparkContext in... · ea662286
      Jeff Zhang authored
      [SPARK-17261] [PYSPARK] Using HiveContext after re-creating SparkContext in Spark 2.0 throws "Java.lang.illegalStateException: Cannot call methods on a stopped sparkContext"
      
      ## What changes were proposed in this pull request?
      
Set SparkSession._instantiatedContext to None so that a SparkSession can be created again.
      
      ## How was this patch tested?
      
      Tested manually using the following command in pyspark shell
      ```
      spark.stop()
      spark = SparkSession.builder.enableHiveSupport().getOrCreate()
      spark.sql("show databases").show()
      ```
      
      Author: Jeff Zhang <zjffdu@apache.org>
      
      Closes #14857 from zjffdu/SPARK-17261.
      ea662286
  14. Aug 30, 2016
  15. Aug 27, 2016
    • Sean Owen's avatar
      [SPARK-17001][ML] Enable standardScaler to standardize sparse vectors when withMean=True · e07baf14
      Sean Owen authored
      ## What changes were proposed in this pull request?
      
      Allow centering / mean scaling of sparse vectors in StandardScaler, if requested. This is for compatibility with `VectorAssembler` in common usages.
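
A rough PySpark sketch of the newly allowed usage (toy sparse vectors; assumes an existing SparkSession `spark`):

```python
from pyspark.ml.feature import StandardScaler
from pyspark.ml.linalg import Vectors

df = spark.createDataFrame(
    [(Vectors.sparse(3, [0], [1.0]),), (Vectors.sparse(3, [2], [3.0]),)],
    ["features"])

# withMean=True on sparse input previously raised an error; it now centers
# the vectors as requested.
scaler = StandardScaler(inputCol="features", outputCol="scaled",
                        withMean=True, withStd=True)
scaler.fit(df).transform(df).show(truncate=False)
```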
      
      ## How was this patch tested?
      
Jenkins tests, including new cases to reflect the new behavior.
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #14663 from srowen/SPARK-17001.
      e07baf14
  16. Aug 25, 2016
  17. Aug 24, 2016
    • hyukjinkwon's avatar
      [SPARK-16216][SQL] Read/write timestamps and dates in ISO 8601 and... · 29952ed0
      hyukjinkwon authored
      [SPARK-16216][SQL] Read/write timestamps and dates in ISO 8601 and dateFormat/timestampFormat option for CSV and JSON
      
      ## What changes were proposed in this pull request?
      
      ### Default - ISO 8601
      
      Currently, CSV datasource is writing `Timestamp` and `Date` as numeric form and JSON datasource is writing both as below:
      
      - CSV
        ```
        // TimestampType
        1414459800000000
        // DateType
        16673
        ```
      
      - Json
      
        ```
        // TimestampType
        1970-01-01 11:46:40.0
        // DateType
        1970-01-01
        ```
      
So, for CSV we can't read back what we write, and for JSON it becomes ambiguous because the timezone information is missing.
      
So, this PR makes both **write** `Timestamp` and `Date` as ISO 8601 formatted strings (please refer to the [ISO 8601 specification](https://www.w3.org/TR/NOTE-datetime)).
      
      - For `Timestamp` it becomes as below: (`yyyy-MM-dd'T'HH:mm:ss.SSSZZ`)
      
        ```
        1970-01-01T02:00:01.000-01:00
        ```
      
      - For `Date` it becomes as below (`yyyy-MM-dd`)
      
        ```
        1970-01-01
        ```
      
      ### Custom date format option - `dateFormat`
      
      This PR also adds the support to write and read dates and timestamps in a formatted string as below:
      
      - **DateType**
      
        - With `dateFormat` option (e.g. `yyyy/MM/dd`)
      
          ```
          +----------+
          |      date|
          +----------+
          |2015/08/26|
          |2014/10/27|
          |2016/01/28|
          +----------+
          ```
      
      ### Custom date format option - `timestampFormat`
      
      - **TimestampType**
      
  - With `timestampFormat` option (e.g. `dd/MM/yyyy HH:mm`)
      
          ```
          +----------------+
          |            date|
          +----------------+
          |2015/08/26 18:00|
          |2014/10/27 18:30|
          |2016/01/28 20:00|
          +----------------+
          ```
      
      ## How was this patch tested?
      
      Unit tests were added in `CSVSuite` and `JsonSuite`. For JSON, existing tests cover the default cases.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #14279 from HyukjinKwon/SPARK-16216-json-csv.
      29952ed0
    • Sean Owen's avatar
      [SPARK-16781][PYSPARK] java launched by PySpark as gateway may not be the same... · 0b3a4be9
      Sean Owen authored
      [SPARK-16781][PYSPARK] java launched by PySpark as gateway may not be the same java used in the spark environment
      
      ## What changes were proposed in this pull request?
      
      Update to py4j 0.10.3 to enable JAVA_HOME support
      
      ## How was this patch tested?
      
      Pyspark tests
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #14748 from srowen/SPARK-16781.
      0b3a4be9
  18. Aug 22, 2016
    • Holden Karau's avatar
      [SPARK-15113][PYSPARK][ML] Add missing num features num classes · b264cbb1
      Holden Karau authored
      ## What changes were proposed in this pull request?
      
Add missing `numFeatures` and `numClasses` to the wrapped Java models in PySpark ML pipelines. Also tag `DecisionTreeClassificationModel` as Experimental to match the Scala doc.
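
A minimal sketch of the now-exposed attributes (toy data; assumes an existing SparkSession `spark`):

```python
from pyspark.ml.classification import DecisionTreeClassifier
from pyspark.ml.linalg import Vectors

df = spark.createDataFrame(
    [(0.0, Vectors.dense(0.0, 1.0)), (1.0, Vectors.dense(1.0, 0.0))],
    ["label", "features"])

model = DecisionTreeClassifier().fit(df)
print(model.numFeatures)  # 2
print(model.numClasses)   # 2
```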
      
      ## How was this patch tested?
      
      Extended doctests
      
      Author: Holden Karau <holden@us.ibm.com>
      
      Closes #12889 from holdenk/SPARK-15113-add-missing-numFeatures-numClasses.
      b264cbb1
  19. Aug 20, 2016
    • Bryan Cutler's avatar
      [SPARK-15018][PYSPARK][ML] Improve handling of PySpark Pipeline when used without stages · 39f328ba
      Bryan Cutler authored
      ## What changes were proposed in this pull request?
      
When fitting a PySpark Pipeline without the `stages` param set, a confusing NoneType error is raised as it attempts to iterate over the pipeline stages. A pipeline with no stages should act as an identity transform; however, the `stages` param still needs to be set to an empty list. This change improves the error output when the `stages` param is not set and adds a better description of what the API expects as input. Also minor cleanup of related code.
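
For illustration, a minimal sketch of the identity behaviour with an explicitly empty `stages` list (assumes an existing SparkSession `spark`):

```python
from pyspark.ml import Pipeline

df = spark.createDataFrame([(1, "a")], ["id", "v"])

# stages must still be set, but an empty list is a valid identity pipeline.
identity = Pipeline(stages=[])
identity.fit(df).transform(df).show()
```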
      
      ## How was this patch tested?
      Added new unit tests to verify an empty Pipeline acts as an identity transformer
      
      Author: Bryan Cutler <cutlerb@gmail.com>
      
      Closes #12790 from BryanCutler/pipeline-identity-SPARK-15018.
      39f328ba
  20. Aug 19, 2016
  21. Aug 17, 2016
  22. Aug 16, 2016
    • Dongjoon Hyun's avatar
      [SPARK-17035] [SQL] [PYSPARK] Improve Timestamp not to lose precision for all cases · 12a89e55
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
`PySpark` loses `microsecond` precision for some corner cases when converting `Timestamp` into `Long`. For example, the following `datetime.max` value should be converted to a value whose last 6 digits are '999999'. This PR improves the logic so that precision is not lost in any case.
      
      **Corner case**
      ```python
      >>> datetime.datetime.max
      datetime.datetime(9999, 12, 31, 23, 59, 59, 999999)
      ```
      
      **Before**
      ```python
      >>> from datetime import datetime
      >>> from pyspark.sql import Row
      >>> from pyspark.sql.types import StructType, StructField, TimestampType
      >>> schema = StructType([StructField("dt", TimestampType(), False)])
      >>> [schema.toInternal(row) for row in [{"dt": datetime.max}]]
      [(253402329600000000,)]
      ```
      
      **After**
      ```python
      >>> [schema.toInternal(row) for row in [{"dt": datetime.max}]]
      [(253402329599999999,)]
      ```
      
      ## How was this patch tested?
      
      Pass the Jenkins test with a new test case.
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #14631 from dongjoon-hyun/SPARK-17035.
      12a89e55
  23. Aug 15, 2016
    • Davies Liu's avatar
      [SPARK-16700][PYSPARK][SQL] create DataFrame from dict/Row with schema · fffb0c0d
      Davies Liu authored
      ## What changes were proposed in this pull request?
      
In 2.0, we verify the data type against the schema for every row for safety, but at a performance cost; this PR makes that verification optional.
      
When we verify the data type for StructType, it does not support all the types we support in schema inference (for example, dict); this PR fixes that to make them consistent.
      
For a Row object created using named arguments, the fields are sorted by name, so their order may differ from the order in the provided schema; this PR fixes that by ignoring the field order in this case.
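
A rough sketch of the behaviours touched here, with `verifySchema` naming the optional verification mentioned above (assumes an existing SparkSession `spark`):

```python
from pyspark.sql import Row
from pyspark.sql.types import StructType, StructField, LongType, StringType

schema = StructType([StructField("id", LongType()),
                     StructField("name", StringType())])

# dicts are verified against the schema consistently with schema inference.
spark.createDataFrame([{"id": 1, "name": "a"}], schema).show()

# Rows created from named arguments (whose fields are sorted by name) also
# work with an explicit schema.
spark.createDataFrame([Row(id=2, name="b")], schema).show()

# Per-row type verification can be skipped for speed.
spark.createDataFrame([(3, "c")], schema, verifySchema=False).show()
```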
      
      ## How was this patch tested?
      
      Created regression tests for them.
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #14469 from davies/py_dict.
      fffb0c0d
  24. Aug 12, 2016
    • Yanbo Liang's avatar
      [MINOR][ML] Rename TreeEnsembleModels to TreeEnsembleModel for PySpark · ccc6dc0f
      Yanbo Liang authored
      ## What changes were proposed in this pull request?
Fix the typo of ```TreeEnsembleModels``` for PySpark; it should be ```TreeEnsembleModel```, which is consistent with Scala. What's more, it represents a tree ensemble model, so ```TreeEnsembleModel``` is more reasonable. This is not meant to be used publicly, so it will not involve a breaking change.
      
      ## How was this patch tested?
      No new tests, should pass existing ones.
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #14454 from yanboliang/TreeEnsembleModel.
      ccc6dc0f
  25. Aug 10, 2016
  26. Aug 09, 2016
    • Mariusz Strzelecki's avatar
      [SPARK-16950] [PYSPARK] fromOffsets parameter support in KafkaUtils.createDirectStream for python3 · 29081b58
      Mariusz Strzelecki authored
      ## What changes were proposed in this pull request?
      
Enables the use of KafkaUtils.createDirectStream with starting offsets in Python 3 by using java.lang.Number instead of Long during param mapping in the Scala helper. This allows py4j to pass an Integer or Long to the map and resolves the ClassCastException problems.
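
A hedged sketch of the call that now works under Python 3 (broker and topic names are hypothetical; assumes an existing SparkContext `sc`):

```python
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils, TopicAndPartition

ssc = StreamingContext(sc, batchDuration=5)

# Start reading partition 0 of "events" from offset 0; the offsets are plain
# Python ints, which py4j may hand over as Integer or Long.
from_offsets = {TopicAndPartition("events", 0): 0}

stream = KafkaUtils.createDirectStream(
    ssc, ["events"], {"metadata.broker.list": "broker:9092"},
    fromOffsets=from_offsets)
```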
      
      ## How was this patch tested?
      
      unit tests
      
      jerryshao  - could you please look at this PR?
      
      Author: Mariusz Strzelecki <mariusz.strzelecki@allegrogroup.com>
      
      Closes #14540 from szczeles/kafka_pyspark.
      29081b58
  27. Aug 07, 2016
    • Sean Owen's avatar
      [SPARK-16409][SQL] regexp_extract with optional groups causes NPE · 8d872520
      Sean Owen authored
      ## What changes were proposed in this pull request?
      
regexp_extract returns null when it shouldn't: when the regex matches but the requested optional group does not. This change makes it return an empty string instead, as apparently designed.
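
For illustration, the kind of expression affected (assumes an existing SparkSession `spark`):

```python
# Group 2 is optional and does not participate in this match; the result is
# now an empty string, as designed, rather than a null that could surface
# as an NPE.
spark.sql("SELECT regexp_extract('aaaac', '(a+)(b)?(c)', 2)").show()
```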
      
      ## How was this patch tested?
      
      Additional unit test
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #14504 from srowen/SPARK-16409.
      8d872520
  28. Aug 05, 2016
  29. Aug 03, 2016
  30. Aug 02, 2016
    • Liang-Chi Hsieh's avatar
      [SPARK-16062] [SPARK-15989] [SQL] Fix two bugs of Python-only UDTs · 146001a9
      Liang-Chi Hsieh authored
      ## What changes were proposed in this pull request?
      
There are two related bugs of Python-only UDTs. Because the test case of the second one needs the first fix too, I put them into one PR. If that is not appropriate, please let me know.
      
      ### First bug: When MapObjects works on Python-only UDTs
      
`RowEncoder` will use `PythonUserDefinedType.sqlType` for its deserializer expression. If the sql type is `ArrayType`, we will have `MapObjects` working on it. But `MapObjects` doesn't consider `PythonUserDefinedType` as its input data type. This causes an error like:
      
          import pyspark.sql.group
          from pyspark.sql.tests import PythonOnlyPoint, PythonOnlyUDT
          from pyspark.sql.types import *
      
          schema = StructType().add("key", LongType()).add("val", PythonOnlyUDT())
          df = spark.createDataFrame([(i % 3, PythonOnlyPoint(float(i), float(i))) for i in range(10)], schema=schema)
          df.show()
      
          File "/home/spark/python/lib/py4j-0.10.1-src.zip/py4j/protocol.py", line 312, in get_return_value py4j.protocol.Py4JJavaError: An error occurred while calling o36.showString.
          : java.lang.RuntimeException: Error while decoding: scala.MatchError: org.apache.spark.sql.types.PythonUserDefinedTypef4ceede8 (of class org.apache.spark.sql.types.PythonUserDefinedType)
          ...
      
      ### Second bug: When Python-only UDTs is the element type of ArrayType
      
          import pyspark.sql.group
          from pyspark.sql.tests import PythonOnlyPoint, PythonOnlyUDT
          from pyspark.sql.types import *
      
          schema = StructType().add("key", LongType()).add("val", ArrayType(PythonOnlyUDT()))
          df = spark.createDataFrame([(i % 3, [PythonOnlyPoint(float(i), float(i))]) for i in range(10)], schema=schema)
          df.show()
      
      ## How was this patch tested?
      PySpark's sql tests.
      
      Author: Liang-Chi Hsieh <simonh@tw.ibm.com>
      
      Closes #13778 from viirya/fix-pyudt.
      146001a9
  31. Jul 29, 2016