  1. Nov 10, 2015
    • [SPARK-10827][CORE] AppClient should not use `askWithReply` in `receiveAndReply` · a3989058
      Bryan Cutler authored
      Changed AppClient to be non-blocking in `receiveAndReply` by using a separate thread to wait for the response and reply to the context. The threads are managed by a thread pool. Also added unit tests for the AppClient interface.
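
      A minimal sketch of the pattern described above, assuming a simplified stand-in for Spark's internal RPC callback type (`RpcCallContext` below is hypothetical, not the real class): the blocking wait moves onto a pool thread, so `receiveAndReply` itself returns immediately.
      ```scala
      import java.util.concurrent.Executors
      import scala.concurrent.{ExecutionContext, Future}
      import scala.util.{Failure, Success}

      // Hypothetical stand-in for the RPC callback context.
      trait RpcCallContext {
        def reply(response: Any): Unit
        def sendFailure(e: Throwable): Unit
      }

      object NonBlockingReply {
        // Pool that owns the blocking waits, keeping the RPC event loop free.
        private val askThreadPool = Executors.newCachedThreadPool()
        private implicit val ec: ExecutionContext =
          ExecutionContext.fromExecutor(askThreadPool)

        // Run the blocking ask on a pool thread; reply (or fail) when it finishes.
        def handle(context: RpcCallContext)(blockingAsk: => Any): Unit =
          Future(blockingAsk).onComplete {
            case Success(response) => context.reply(response)
            case Failure(e)        => context.sendFailure(e)
          }
      }
      ```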
      
      Author: Bryan Cutler <bjcutler@us.ibm.com>
      
      Closes #9317 from BryanCutler/appClient-receiveAndReply-SPARK-10827.
      a3989058
    • [SPARK-9241][SQL] Supporting multiple DISTINCT columns - follow-up (3) · 21c562fa
      Herman van Hovell authored
      This PR is a 2nd follow-up for [SPARK-9241](https://issues.apache.org/jira/browse/SPARK-9241). It contains the following improvements:
      * Fix for a potential bug in distinct child expression and attribute alignment.
      * Improved handling of duplicate distinct child expressions.
      * Added test for distinct UDAF with multiple children.
      
      cc yhuai
      
      Author: Herman van Hovell <hvanhovell@questtec.nl>
      
      Closes #9566 from hvanhovell/SPARK-9241-followup-2.
      21c562fa
    • [SPARK-9830][SPARK-11641][SQL][FOLLOW-UP] Remove AggregateExpression1 and update toString of Exchange · 3121e781
      Yin Huai authored
      
      https://issues.apache.org/jira/browse/SPARK-9830
      
      This is the follow-up PR for https://github.com/apache/spark/pull/9556 to address davies' comments.
      
      Author: Yin Huai <yhuai@databricks.com>
      
      Closes #9607 from yhuai/removeAgg1-followup.
      3121e781
    • [SPARK-5565][ML] LDA wrapper for Pipelines API · e281b873
      Joseph K. Bradley authored
      This adds LDA to spark.ml, the Pipelines API.  It follows the design doc in the JIRA: [https://issues.apache.org/jira/browse/SPARK-5565], with one major change:
      * I eliminated doc IDs.  These are not necessary with DataFrames since the user can add an ID column as needed.
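
      For example, a spark-shell sketch of attaching an ID column (names and data are illustrative; `monotonically_increasing_id` assumes a release that provides it, older ones use `monotonicallyIncreasingId`):
      ```scala
      import org.apache.spark.sql.functions.monotonically_increasing_id

      // `sqlContext` is provided by the shell; the documents are illustrative.
      val docs = sqlContext.createDataFrame(Seq(
        Tuple1("spark mllib lda"), Tuple1("pipelines api"))).toDF("text")

      // Users attach their own document IDs only when they need them.
      val docsWithIds = docs.withColumn("docId", monotonically_increasing_id())
      ```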
      
      Note: This will conflict with [https://github.com/apache/spark/pull/9484], but I'll try to merge [https://github.com/apache/spark/pull/9484] first and then rebase this PR.
      
      CC: hhbyyh feynmanliang  If you have a chance to make a pass, that'd be really helpful--thanks!  Now that I'm done traveling & this PR is almost ready, I'll see about reviewing other PRs critical for 1.6.
      
      CC: mengxr
      
      Author: Joseph K. Bradley <joseph@databricks.com>
      
      Closes #9513 from jkbradley/lda-pipelines.
      e281b873
    • [SPARK-9818] Re-enable Docker tests for JDBC data source · 1dde39d7
      Josh Rosen authored
      This patch re-enables tests for the Docker JDBC data source. These tests were reverted in #4872 due to transitive dependency conflicts introduced by the `docker-client` library. This patch should avoid those problems by using a version of `docker-client` which shades its transitive dependencies and by performing some build-magic to work around problems with that shaded JAR.
      
      In addition, I significantly refactored the tests to simplify the setup and teardown code and to fix several Docker networking issues which caused problems when running in `boot2docker`.
      
      Closes #8101.
      
      Author: Josh Rosen <joshrosen@databricks.com>
      Author: Yijie Shen <henry.yijieshen@gmail.com>
      
      Closes #9503 from JoshRosen/docker-jdbc-tests.
      1dde39d7
    • [SPARK-11567] [PYTHON] Add Python API for corr Aggregate function · 32790fe7
      felixcheung authored
      like `df.agg(corr("col1", "col2"))`
      
      davies
      
      Author: felixcheung <felixcheung_m@hotmail.com>
      
      Closes #9536 from felixcheung/pyfunc.
      32790fe7
    • [SPARK-11550][DOCS] Replace example code in mllib-optimization.md using include_example · 638c51d9
      Pravin Gadakh authored
      Author: Pravin Gadakh <pravingadakh177@gmail.com>
      
      Closes #9516 from pravingadakh/SPARK-11550.
      638c51d9
    • [SPARK-11616][SQL] Improve toString for Dataset · 724cf7a3
      Michael Armbrust authored
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #9586 from marmbrus/dataset-toString.
      724cf7a3
    • [SPARK-7316][MLLIB] RDD sliding window with step · dba1a62c
      unknown authored
      Implementation of step capability for sliding window function in MLlib's RDD.
      
      Though one can use current sliding window with step 1 and then filter every Nth window, it will take more time and space (N*data.count times more than needed). For example, below are the results for various windows and steps on 10M data points:
      
      Window | Step | Time (s) | Windows produced
      ------------ | ------------- | ---------- | ----------
      128 | 1 |  6.38 | 9999873
      128 | 10 | 0.9 | 999988
      128 | 100 | 0.41 | 99999
      1024 | 1 | 44.67 | 9998977
      1024 | 10 | 4.74 | 999898
      1024 | 100 | 0.78 | 99990
      ```
      import org.apache.spark.mllib.rdd.RDDFunctions._
      val rdd = sc.parallelize(1 to 10000000, 10)
      rdd.count
      val window = 1024
      val step = 1
      val t = System.nanoTime(); val windows = rdd.sliding(window, step); println(windows.count); println((System.nanoTime() - t) / 1e9)
      ```
      
      Author: unknown <ulanov@ULANOV3.americas.hpqcorp.net>
      Author: Alexander Ulanov <nashb@yandex.ru>
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #5855 from avulanov/SPARK-7316-sliding.
      dba1a62c
    • [SPARK-11618][ML] Minor refactoring of basic ML import/export · 18350a57
      Joseph K. Bradley authored
      Refactoring
      * separated overwrite and param save logic in DefaultParamsWriter
      * added sparkVersion to DefaultParamsWriter
      
      CC: mengxr
      
      Author: Joseph K. Bradley <joseph@databricks.com>
      
      Closes #9587 from jkbradley/logreg-io.
      18350a57
    • [ML][R] SparkR::glm summary result to compare with native R · f14e9511
      Yanbo Liang authored
      Follow-up to #9561. Since [SPARK-11587](https://issues.apache.org/jira/browse/SPARK-11587) has been fixed, we should compare the SparkR::glm summary result with native R output rather than with a hard-coded one. mengxr
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #9590 from yanboliang/glm-r-test.
      f14e9511
    • [SPARK-10371][SQL] Implement subexpr elimination for UnsafeProjections · 87aedc48
      Nong Li authored
      This patch adds the building blocks for codegening subexpr elimination and implements
      it end to end for UnsafeProjection. The building blocks can be used to do the same thing
      for other operators.
      
      It introduces some utilities to compute common subexpressions. Expressions can be added to
      this data structure. The expr and its children will be recursively matched against existing
      expressions (ones previously added) and grouped into common groups. This is built using
      the existing `semanticEquals`. It does not understand things like commutative or associative
      expressions. This can be done as future work.
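
      A self-contained toy sketch of that grouping idea (not Catalyst's actual classes; case-class equality stands in for `semanticEquals`):
      ```scala
      import scala.collection.mutable

      // Toy expression tree.
      sealed trait Expr { def children: Seq[Expr] }
      case class Col(name: String) extends Expr { val children = Nil }
      case class Add(l: Expr, r: Expr) extends Expr { val children = Seq(l, r) }

      class EquivalentExpressions {
        private val classes = mutable.Map.empty[Expr, mutable.Buffer[Expr]]

        // Recursively add an expression and its children, matching each against
        // previously added expressions and grouping equal ones together.
        def addExprTree(e: Expr): Unit = {
          classes.getOrElseUpdate(e, mutable.Buffer.empty) += e
          e.children.foreach(addExprTree)
        }

        // Common subexpressions: equivalence classes with two or more occurrences.
        def common: Seq[Expr] = classes.collect { case (k, v) if v.size > 1 => k }.toSeq
      }

      val ab = Add(Col("a"), Col("b"))
      val ee = new EquivalentExpressions
      ee.addExprTree(Add(ab, Col("c")))
      ee.addExprTree(Add(ab, Col("d")))
      // ee.common now contains Add(Col(a), Col(b)) plus the shared leaves,
      // so codegen can compute the shared sum once in a helper function.
      ```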
      
      After building this data structure, the codegen process takes advantage of it by:
        1. Generating a helper function in the generated class that computes the common
           subexpression. This is done for all common subexpressions that have at least
           two occurrences and whose expression tree is sufficiently complex.
        2. When generating the apply() function, if the helper function exists, call that
           instead of regenerating the expression tree. Repeated calls to the helper function
           short-circuit the evaluation logic.
      
      Author: Nong Li <nong@databricks.com>
      Author: Nong Li <nongli@gmail.com>
      
      This patch had conflicts when merged, resolved by
      Committer: Michael Armbrust <michael@databricks.com>
      
      Closes #9480 from nongli/spark-10371.
      87aedc48
    • [SPARK-11590][SQL] use native json_tuple in lateral view · 53600854
      Wenchen Fan authored
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #9562 from cloud-fan/json-tuple.
      53600854
    • [SPARK-11578][SQL][FOLLOW-UP] complete the user facing api for typed aggregation · dfcfcbcc
      Wenchen Fan authored
      Currently the user facing api for typed aggregation has some limitations:
      
      * the customized typed aggregation must come first in the aggregation list
      * the customized typed aggregation can only use long as buffer type
      * the customized typed aggregation can only use flat type as result type
      
      This PR tries to remove these limitations.
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #9599 from cloud-fan/agg.
      dfcfcbcc
    • [SPARK-10863][SPARKR] Method coltypes() (New version) · 47735cdc
      Oscar D. Lara Yejas authored
      This is a follow-up on PR #8984, as the corresponding branch for that PR was damaged.
      
      Author: Oscar D. Lara Yejas <olarayej@mail.usf.edu>
      
      Closes #9579 from olarayej/SPARK-10863_NEW14.
      47735cdc
    • [SPARK-9830][SQL] Remove AggregateExpression1 and Aggregate Operator used to evaluate AggregateExpression1s · e0701c75
      Yin Huai authored
      
      https://issues.apache.org/jira/browse/SPARK-9830
      
      This PR contains the following main changes.
      * Removing `AggregateExpression1`.
      * Removing `Aggregate` operator, which is used to evaluate `AggregateExpression1`.
      * Removing planner rule used to plan `Aggregate`.
      * Linking `MultipleDistinctRewriter` to analyzer.
      * Renaming `AggregateExpression2` to `AggregateExpression` and `AggregateFunction2` to `AggregateFunction`.
      * Updating places where we create aggregate expression. The way to create aggregate expressions is `AggregateExpression(aggregateFunction, mode, isDistinct)`.
      * Changing `val`s in `DeclarativeAggregate`s that touch children of this function to `lazy val`s (when we create aggregate expression in DataFrame API, children of an aggregate function can be unresolved).
      
      Author: Yin Huai <yhuai@databricks.com>
      
      Closes #9556 from yhuai/removeAgg1.
      e0701c75
    • [SPARK-11252][NETWORK] ShuffleClient should release connection after fetching blocks had been completed for external shuffle · 6e5fc378
      Lianhui Wang authored
      
      With YARN's external shuffle service, the ExternalShuffleClient in each executor keeps its connections to YARN's NodeManager open until the application completes, so the NodeManager and the executors hold many socket connections.
      In order to reduce the network pressure on the NodeManager's shuffle service, the connection to it should be closed once registerWithShuffleServer or fetchBlocks has completed in ExternalShuffleClient. andrewor14 rxin vanzin
      
      Author: Lianhui Wang <lianhuiwang09@gmail.com>
      
      Closes #9227 from lianhuiwang/spark-11252.
      6e5fc378
    • [SPARK-7841][BUILD] Stop using retrieveManaged to retrieve dependencies in SBT · 689386b1
      Josh Rosen authored
      This patch modifies Spark's SBT build so that it no longer uses `retrieveManaged` / `lib_managed` to store its dependencies. The motivations for this change are nicely described on the JIRA ticket ([SPARK-7841](https://issues.apache.org/jira/browse/SPARK-7841)); my personal interest in doing this stems from the fact that `lib_managed` has caused me some pain while debugging dependency issues in another PR of mine.
      
      Removing our use of `lib_managed` would be trivial except for one snag: the Datanucleus JARs, required by Spark SQL's Hive integration, cannot be included in assembly JARs due to problems with merging OSGI `plugin.xml` files. As a result, several places in the packaging and deployment pipeline assume that these Datanucleus JARs are copied to `lib_managed/jars`. In the interest of maintaining compatibility, I have chosen to retain the `lib_managed/jars` directory _only_ for these Datanucleus JARs and have added custom code to `SparkBuild.scala` to automatically copy those JARs to that folder as part of the `assembly` task.
      
      `dev/mima` also depended on `lib_managed` in a hacky way in order to set classpaths when generating MiMa excludes; I've updated this to obtain the classpaths directly from SBT instead.
      
      /cc dragos marmbrus pwendell srowen
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #9575 from JoshRosen/SPARK-7841.
      689386b1
    • [SPARK-11382] Replace example code in mllib-decision-tree.md using include_example · a81f47ff
      Xusen Yin authored
      https://issues.apache.org/jira/browse/SPARK-11382
      
      B.T.W., I fixed an error in naive_bayes_example.py.
      
      Author: Xusen Yin <yinxusen@gmail.com>
      
      Closes #9596 from yinxusen/SPARK-11382.
      a81f47ff
    • Fix typo in driver page · 5507a9d0
      Paul Chandler authored
      "Comamnd property" => "Command property"
      
      Author: Paul Chandler <pestilence669@users.noreply.github.com>
      
      Closes #9578 from pestilence669/fix_spelling.
      5507a9d0
    • [SPARK-11598] [SQL] enable tests for ShuffledHashOuterJoin · 521b3cae
      Davies Liu authored
      Author: Davies Liu <davies@databricks.com>
      
      Closes #9573 from davies/join_condition.
      521b3cae
    • [SPARK-11599] [SQL] fix NPE when resolve Hive UDF in SQLParser · d6cd3a18
      Davies Liu authored
      The DataFrame APIs that take a SQL expression always use SQLParser, so the HiveFunctionRegistry is called outside of the Hive state, causing an NPE when there is no active Session State for the current thread (as in PySpark).
      
      cc rxin yhuai
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #9576 from davies/hive_udf.
      d6cd3a18
  2. Nov 09, 2015
    • [SPARK-11587][SPARKR] Fix the summary generic to match base R · c4e19b38
      Shivaram Venkataraman authored
      The signature is summary(object, ...) as defined in
      https://stat.ethz.ch/R-manual/R-devel/library/base/html/summary.html
      
      Author: Shivaram Venkataraman <shivaram@cs.berkeley.edu>
      
      Closes #9582 from shivaram/summary-fix.
      c4e19b38
    • Add mockito as an explicit test dependency to spark-streaming · 1431319e
      Burak Yavuz authored
      While sbt compiles successfully because it properly pulls in the mockito dependency, Maven builds are broken. We need this ASAP.
      tdas
      
      Author: Burak Yavuz <brkyvz@gmail.com>
      
      Closes #9584 from brkyvz/fix-master.
      1431319e
    • [SPARK-11333][STREAMING] Add executorId to ReceiverInfo and display it in UI · 6502944f
      Shixiong Zhu authored
      Expose executorId in `ReceiverInfo` and the UI, since it's helpful when multiple executors are running on the same host. Screenshot:
      
      ![screen shot 2015-11-02 at 10 52 19 am](https://cloud.githubusercontent.com/assets/1000778/10890968/2e2f5512-8150-11e5-8d9d-746e826b69e8.png)
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      Author: zsxwing <zsxwing@gmail.com>
      
      Closes #9418 from zsxwing/SPARK-11333.
      6502944f
    • [SPARK-11462][STREAMING] Add JavaStreamingListener · 1f0f14ef
      zsxwing authored
      Currently, StreamingListener is not Java friendly because it directly exposes Scala types such as Option and Map to Java users.
      
      This PR added a Java version of StreamingListener and a bunch of Java friendly classes for Java users.
      
      Author: zsxwing <zsxwing@gmail.com>
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #9420 from zsxwing/java-streaming-listener.
      1f0f14ef
    • [SPARK-11141][STREAMING] Batch ReceivedBlockTrackerLogEvents for WAL writes · 0ce6f9b2
      Burak Yavuz authored
      When using S3 as a directory for WALs, the writes take too long. The driver gets very easily bottlenecked when multiple receivers send AddBlock events to the ReceiverTracker. This PR adds batching of events in the ReceivedBlockTracker so that receivers don't get blocked by the driver for too long.
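
      An illustrative, self-contained sketch of the batching pattern with hypothetical names (not the actual ReceivedBlockTracker code): producers enqueue events and block on a promise, while a single writer thread drains the queue and performs one write per batch.
      ```scala
      import java.util.concurrent.LinkedBlockingQueue
      import scala.collection.JavaConverters._
      import scala.concurrent.{Await, Promise}
      import scala.concurrent.duration._

      case class Record(event: String, completion: Promise[Boolean])

      object BatchedWriter {
        private val queue = new LinkedBlockingQueue[Record]()

        private val writer = new Thread(new Runnable {
          def run(): Unit = while (true) {
            val batch = new java.util.ArrayList[Record]()
            batch.add(queue.take())  // wait for at least one event
            queue.drainTo(batch)     // then grab everything else already queued
            // One write to the slow (e.g. S3-backed) log covers the whole batch:
            // writeToLog(batch.asScala.map(_.event))  -- hypothetical call
            batch.asScala.foreach(_.completion.success(true))
          }
        })
        writer.setDaemon(true)
        writer.start()

        // Called by receivers: enqueue the event and wait until it is durable.
        def addEvent(event: String): Boolean = {
          val p = Promise[Boolean]()
          queue.put(Record(event, p))
          Await.result(p.future, 30.seconds)
        }
      }
      ```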
      
      cc zsxwing tdas
      
      Author: Burak Yavuz <brkyvz@gmail.com>
      
      Closes #9143 from brkyvz/batch-wal-writes.
      0ce6f9b2
    • [SPARK-11198][STREAMING][KINESIS] Support de-aggregation of records during recovery · 26062d22
      Burak Yavuz authored
      While the KCL handles de-aggregation during regular operation, during recovery we use the lower-level API and therefore need to de-aggregate the records ourselves.
      
      tdas Testing is an issue; we need protobuf magic to construct the aggregated records. Maybe we could depend on KPL for tests?
      
      Author: Burak Yavuz <brkyvz@gmail.com>
      
      Closes #9403 from brkyvz/kinesis-deaggregation.
      26062d22
    • [SPARK-11069][ML] Add RegexTokenizer option to convert to lowercase · 61f9c871
      Yuhao Yang authored
      jira: https://issues.apache.org/jira/browse/SPARK-11069
      Quoting from the JIRA:
      Tokenizer converts strings to lowercase automatically, but RegexTokenizer does not. It would be nice to add an option to RegexTokenizer to convert to lowercase. Proposal:
      * call the Boolean Param "toLowercase"
      * set default to false (so behavior does not change)
      
      Actually sklearn converts to lowercase before tokenizing too
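
      A hedged usage sketch of the new option (column names are illustrative):
      ```scala
      import org.apache.spark.ml.feature.RegexTokenizer

      val tokenizer = new RegexTokenizer()
        .setInputCol("text")
        .setOutputCol("tokens")
        .setToLowercase(true)  // opt in to lowercasing before tokenization
      ```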
      
      Author: Yuhao Yang <hhbyyh@gmail.com>
      
      Closes #9092 from hhbyyh/tokenLower.
      61f9c871
    • [SPARK-11610][MLLIB][PYTHON][DOCS] Make the docs of LDAModel.describeTopics in Python more specific · 7dc9d8db
      Yu ISHIKAWA authored
      cc jkbradley
      
      Author: Yu ISHIKAWA <yuu.ishikawa@gmail.com>
      
      Closes #9577 from yu-iskw/SPARK-11610.
      7dc9d8db
    • [SPARK-11564][SQL] Fix documentation for DataFrame.take/collect · 675c7e72
      Reynold Xin authored
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #9557 from rxin/SPARK-11564-1.
      675c7e72
    • [SPARK-11578][SQL] User API for Typed Aggregation · 9c740a9d
      Michael Armbrust authored
      This PR adds a new interface for user-defined aggregations, that can be used in `DataFrame` and `Dataset` operations to take all of the elements of a group and reduce them to a single value.
      
      For example, the following aggregator extracts an `int` from a specific class and adds them up:
      
      ```scala
        case class Data(i: Int)
      
        val customSummer =  new Aggregator[Data, Int, Int] {
          def prepare(d: Data) = d.i
          def reduce(l: Int, r: Int) = l + r
          def present(r: Int) = r
        }.toColumn()
      
        val ds: Dataset[Data] = ...
        val aggregated = ds.select(customSummer)
      ```
      
      By using helper functions, users can make a generic `Aggregator` that works on any input type:
      
      ```scala
      /** An `Aggregator` that adds up any numeric type returned by the given function. */
      class SumOf[I, N : Numeric](f: I => N) extends Aggregator[I, N, N] with Serializable {
        val numeric = implicitly[Numeric[N]]
        override def zero: N = numeric.zero
        override def reduce(b: N, a: I): N = numeric.plus(b, f(a))
        override def present(reduction: N): N = reduction
      }
      
      def sum[I, N : Numeric : Encoder](f: I => N): TypedColumn[I, N] = new SumOf(f).toColumn
      ```
      
      These aggregators can then be used alongside other built-in SQL aggregations.
      
      ```scala
      val ds = Seq(("a", 10), ("a", 20), ("b", 1), ("b", 2), ("c", 1)).toDS()
      ds
        .groupBy(_._1)
        .agg(
          sum(_._2),                // The aggregator defined above.
          expr("sum(_2)").as[Int],  // A built-in dynamically typed aggregation.
          count("*"))               // A built-in statically typed aggregation.
        .collect()
      
      res0: ("a", 30, 30, 2L), ("b", 3, 3, 2L), ("c", 1, 1, 1L)
      ```
      
      The current implementation focuses on integrating this into the typed API, but currently only supports running aggregations that return a single long value as explained in `TypedAggregateExpression`.  This will be improved in a followup PR.
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #9555 from marmbrus/dataset-useragg.
      9c740a9d
    • [SPARK-11360][DOC] Loss of nullability when writing parquet files · 2f383788
      gatorsmile authored
      This fix adds one line to explain the current behavior of Spark SQL when writing Parquet files: all columns are forced to be nullable for compatibility reasons.
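
      A minimal spark-shell sketch of that behavior (the path is illustrative; `sc` and `sqlContext` are the shell-provided contexts):
      ```scala
      import org.apache.spark.sql.Row
      import org.apache.spark.sql.types._

      // A schema that declares the column non-nullable.
      val schema = StructType(Seq(StructField("id", IntegerType, nullable = false)))
      val df = sqlContext.createDataFrame(sc.parallelize(Seq(Row(1), Row(2))), schema)

      df.write.parquet("/tmp/nullability-demo")
      // Reading it back: the column comes back nullable, per the behavior documented above.
      sqlContext.read.parquet("/tmp/nullability-demo").schema("id").nullable  // true
      ```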
      
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #9314 from gatorsmile/lossNull.
      2f383788
    • [SPARK-9557][SQL] Refactor ParquetFilterSuite and remove old ParquetFilters code · 9565c246
      hyukjinkwon authored
      Actually this was resolved by https://github.com/apache/spark/pull/8275.
      
      But I found that the JIRA issue for this was not marked as resolved, since the PR above was made for another issue but resolved both.
      
      I commented that this is resolved by the PR above; however, I opened this PR because I would like to add a few small corrections.
      
      In the previous PR, I refactored the test to collect filters without reducing them; however, this does not properly test the `And` filter (which is never given to the tests). I unintentionally changed this from the original approach (before the refactoring).
      
      In this PR, I restored the original way of collecting filters by reducing them.
      
      I would be happy to close this if it is inappropriate and somebody would like to deal with it in a separate PR.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #9554 from HyukjinKwon/SPARK-9557.
      9565c246
    • [SPARK-11564][SQL][FOLLOW-UP] improve java api for GroupedDataset · fcb57e9c
      Wenchen Fan authored
      created `MapGroupFunction`, `FlatMapGroupFunction`, `CoGroupFunction`
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #9564 from cloud-fan/map.
      fcb57e9c
    • [SPARK-6517][MLLIB] Implement the Algorithm of Hierarchical Clustering · 8a233689
      Yu ISHIKAWA authored
      I re-implemented the hierarchical clustering algorithm. This PR doesn't include examples, documentation, or spark.ml APIs; I am going to send other PRs later.
      https://issues.apache.org/jira/browse/SPARK-6517
      
      - This implementation is based on bisecting K-means clustering.
          - It derives from freeman-lab's implementation.
      - The basic idea is not changed from the previous version. (#2906)
          - However, it is 1000x faster than the previous version through parallel processing.
      
      Thank you for your great cooperation, RJ Nowling (rnowling), Jeremy Freeman (freeman-lab), Xiangrui Meng (mengxr) and Sean Owen (srowen).
      
      Author: Yu ISHIKAWA <yuu.ishikawa@gmail.com>
      Author: Xiangrui Meng <meng@databricks.com>
      Author: Yu ISHIKAWA <yu-iskw@users.noreply.github.com>
      
      Closes #5267 from yu-iskw/new-hierarchical-clustering.
      8a233689
    • [SPARK-11359][STREAMING][KINESIS] Checkpoint to DynamoDB even when new data doesn't come in · a3a7c910
      Burak Yavuz authored
      Currently, the checkpoints to DynamoDB occur only when new data comes in, as we update the clock for the checkpointState. This PR makes the checkpoint a scheduled execution based on the `checkpointInterval`.
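
      A small sketch of the scheduling pattern with hypothetical names (`checkpoint` stands in for the call into the KCL checkpointer):
      ```scala
      import java.util.concurrent.{Executors, TimeUnit}

      val scheduler = Executors.newSingleThreadScheduledExecutor()

      // Fire the checkpoint on a fixed schedule, whether or not new data arrived.
      def startCheckpointing(checkpoint: () => Unit, intervalMillis: Long): Unit = {
        scheduler.scheduleAtFixedRate(
          new Runnable { def run(): Unit = checkpoint() },
          intervalMillis, intervalMillis, TimeUnit.MILLISECONDS)
      }
      ```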
      
      Author: Burak Yavuz <brkyvz@gmail.com>
      
      Closes #9421 from brkyvz/kinesis-checkpoint.
      a3a7c910
    • [SPARK-11595] [SQL] Fixes ADD JAR when the input path contains URL scheme · 150f6a89
      Cheng Lian authored
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #9569 from liancheng/spark-11595.fix-add-jar.
      150f6a89
    • [SPARK-9301][SQL] Add collect_set and collect_list aggregate functions · f138cb87
      Nick Buroojy authored
      For now they are thin wrappers around the corresponding Hive UDAFs.
      
      One limitation with these in Hive 0.13.0 is they only support aggregating primitive types.
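
      A hedged usage sketch in a spark-shell with a HiveContext as `sqlContext` (these wrap Hive UDAFs; column names and data are illustrative):
      ```scala
      import org.apache.spark.sql.functions.{collect_list, collect_set}

      val df = sqlContext.createDataFrame(Seq(
        ("a", 1), ("a", 1), ("a", 2), ("b", 3))).toDF("key", "value")

      val grouped = df.groupBy("key").agg(
        collect_list("value"),  // keeps duplicates, e.g. [1, 1, 2] for "a"
        collect_set("value"))   // de-duplicated, e.g. [1, 2] for "a"
      grouped.show()
      ```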
      
      I chose snake_case here instead of camelCase because it seems to be used in the majority of the multi-word fns.
      
      Do we also want to add these to `functions.py`?
      
      This approach was recommended here: https://github.com/apache/spark/pull/8592#issuecomment-154247089
      
      
      
      marmbrus rxin
      
      Author: Nick Buroojy <nick.buroojy@civitaslearning.com>
      
      Closes #9526 from nburoojy/nick/udaf-alias.
      
      (cherry picked from commit a6ee4f98)
      Signed-off-by: Michael Armbrust <michael@databricks.com>
      f138cb87
    • [SPARK-11548][DOCS] Replaced example code in mllib-collaborative-filtering.md using include_example · b7720fa4
      Rishabh Bhardwaj authored
      Kindly review the changes.
      
      Author: Rishabh Bhardwaj <rbnext29@gmail.com>
      
      Closes #9519 from rishabhbhardwaj/SPARK-11337.
      b7720fa4