  1. May 22, 2017
    • [SPARK-20687][MLLIB] mllib.Matrices.fromBreeze may crash when converting from Breeze sparse matrix · c3a986b1
      Ignacio Bermudez authored
      ## What changes were proposed in this pull request?
      
      When two Breeze SparseMatrices are operated on, the result matrix may contain extra provisional 0 values in its rowIndices and data arrays. This makes them inconsistent with the colPtrs data, but Breeze gets away with the inconsistency by keeping a counter of the valid entries.
      
      In Spark, when these matrices are converted to SparseMatrix, Spark relies solely on rowIndices, data, and colPtrs, but these might be incorrect because of Breeze's internal hacks. Therefore, we need to slice both rowIndices and data using Breeze's counter of active entries.
      
      This method is called, among other places, by BlockMatrix when performing distributed block operations, causing exceptions on valid operations.
      
      See http://stackoverflow.com/questions/33528555/error-thrown-when-using-blockmatrix-add
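
      A minimal sketch of the described slicing (the helper name `toSparkSparse` is mine, not Spark's API; the actual fix lives in `Matrices.fromBreeze`):

      ```scala
      import breeze.linalg.{CSCMatrix => BSM}
      import org.apache.spark.mllib.linalg.SparseMatrix

      // Hypothetical helper: convert a Breeze CSC matrix to a Spark SparseMatrix,
      // keeping only the active entries so stale provisional zeros are dropped.
      def toSparkSparse(sm: BSM[Double]): SparseMatrix = {
        // Breeze may keep rowIndices/data arrays larger than the number of valid
        // entries; activeSize says how many of them are actually in use.
        val rowIndices =
          if (sm.rowIndices.length > sm.activeSize) sm.rowIndices.slice(0, sm.activeSize)
          else sm.rowIndices
        val data =
          if (sm.data.length > sm.activeSize) sm.data.slice(0, sm.activeSize)
          else sm.data
        new SparseMatrix(sm.rows, sm.cols, sm.colPtrs, rowIndices, data)
      }
      ```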
      
      ## How was this patch tested?
      
      Added a test to MatricesSuite that verifies that the conversions are valid and that the code doesn't crash; the same code previously crashed on Spark.
      
      Bugfix for https://issues.apache.org/jira/browse/SPARK-20687
      
      
      
      Author: Ignacio Bermudez <ignaciobermudez@gmail.com>
      Author: Ignacio Bermudez Corrales <icorrales@splunk.com>
      
      Closes #17940 from ghoto/bug-fix/SPARK-20687.
      
      (cherry picked from commit 06dda1d5)
      Signed-off-by: Sean Owen <sowen@cloudera.com>
      c3a986b1
  2. May 19, 2017
  3. May 18, 2017
  4. May 17, 2017
    • [SPARK-20769][DOC] Incorrect documentation for using Jupyter notebook · ba35c6bd
      Andrew Ray authored
      
      ## What changes were proposed in this pull request?
      
      SPARK-13973 incorrectly removed the required PYSPARK_DRIVER_PYTHON_OPTS=notebook from the documentation for using pyspark with a Jupyter notebook. This patch corrects the documentation error.
      
      ## How was this patch tested?
      
      Tested invocation locally with
      ```bash
      PYSPARK_DRIVER_PYTHON=jupyter PYSPARK_DRIVER_PYTHON_OPTS=notebook ./bin/pyspark
      ```
      
      Author: Andrew Ray <ray.andrew@gmail.com>
      
      Closes #18001 from aray/patch-1.
      
      (cherry picked from commit 19954176)
      Signed-off-by: Sean Owen <sowen@cloudera.com>
      ba35c6bd
  5. May 15, 2017
  6. May 12, 2017
    • [SPARK-17424] Fix unsound substitution bug in ScalaReflection. · 95de4672
      Ryan Blue authored
      
      ## What changes were proposed in this pull request?
      
      This method gets a type's primary constructor and fills in type parameters with concrete types. For example, `MapPartitions[T, U] -> MapPartitions[Int, String]`. This substitution fails when the actual type args are empty because they are still unknown. Instead, when there are no resolved types to substitute, this returns the original args with unresolved type parameters.
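
      As a rough sketch of the guarded substitution (my own simplified helper using Scala runtime reflection, not the actual `ScalaReflection` code):

      ```scala
      import scala.reflect.runtime.universe._

      // Simplified illustration: resolve a case class's constructor parameter types,
      // substituting formal type parameters only when the actual type args are known.
      def constructorParamTypes(tpe: Type): Seq[(String, Type)] = {
        val formalTypeArgs = tpe.typeSymbol.asClass.typeParams
        val actualTypeArgs = tpe.typeArgs
        val params = tpe.decl(termNames.CONSTRUCTOR).asMethod.paramLists.flatten
        params.map { p =>
          val sig = p.typeSignature
          // If the actual type args are still unknown (empty), substituting would be
          // unsound, so keep the original signature with its unresolved parameters.
          val resolved =
            if (actualTypeArgs.nonEmpty) sig.substituteTypes(formalTypeArgs, actualTypeArgs)
            else sig
          p.name.toString -> resolved
        }
      }
      ```
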
      ## How was this patch tested?
      
      This doesn't affect substitutions where the type args are determined. With this fix, our case where the actual type args are empty now runs successfully.
      
      Author: Ryan Blue <blue@apache.org>
      
      Closes #15062 from rdblue/SPARK-17424-fix-unsound-reflect-substitution.
      
      (cherry picked from commit b2369339)
      Signed-off-by: Wenchen Fan <wenchen@databricks.com>
      95de4672
  7. May 11, 2017
    • [SPARK-20665][SQL] "Bround" and "Round" functions return NULL · 6e89d574
      liuxian authored
      
         spark-sql> select bround(12.3, 2);
         NULL
      For this case, the expected result is 12.3, but it is NULL.
      So, when the second parameter is bigger than the decimal's scale, the result is not what we expected.
      The "round" function has the same problem. This PR solves the problem for both of them.
      
      unit test cases in MathExpressionsSuite and MathFunctionsSuite
      
      Author: liuxian <liu.xian3@zte.com.cn>
      
      Closes #17906 from 10110346/wip_lx_0509.
      
      (cherry picked from commit 2b36eb69)
      Signed-off-by: Wenchen Fan <wenchen@databricks.com>
      6e89d574
  8. May 10, 2017
    • [SPARK-20685] Fix BatchPythonEvaluation bug in case of single UDF w/ repeated arg. · 92a71a66
      Josh Rosen authored
      
      ## What changes were proposed in this pull request?
      
      There's a latent corner-case bug in PySpark UDF evaluation where executing a `BatchPythonEvaluation` with a single multi-argument UDF where _at least one argument value is repeated_ will crash at execution with a confusing error.
      
      This problem was introduced in #12057: the code there has a fast path for handling a "batch UDF evaluation consisting of a single Python UDF", but that branch incorrectly assumes that a single UDF won't have repeated arguments and therefore skips the code for unpacking arguments from the input row (whose schema may not necessarily match the UDF inputs due to de-duplication of repeated arguments which occurred in the JVM before sending UDF inputs to Python).
      
      The fix here is simply to remove this special-casing: it turns out that the code in the "multiple UDFs" branch just so happens to work for the single-UDF case because Python treats `(x)` as equivalent to `x`, not as a single-argument tuple.
      
      ## How was this patch tested?
      
      New regression test in `pyspark.python.sql.tests` module (tested and confirmed that it fails before my fix).
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #17927 from JoshRosen/SPARK-20685.
      
      (cherry picked from commit 8ddbc431)
      Signed-off-by: Xiao Li <gatorsmile@gmail.com>
      92a71a66
    • [SPARK-20688][SQL] correctly check analysis for scalar sub-queries · bdc08ab6
      Wenchen Fan authored
      
      In `CheckAnalysis`, we should call `checkAnalysis` for `ScalarSubquery` at the beginning, as later we will call `plan.output` which is invalid if `plan` is not resolved.
      
      new regression test
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #17930 from cloud-fan/tmp.
      
      (cherry picked from commit 789bdbe3)
      Signed-off-by: Wenchen Fan <wenchen@databricks.com>
      bdc08ab6
    • [SPARK-20631][PYTHON][ML] LogisticRegression._checkThresholdConsistency should use values not Params · 69786ea3
      zero323 authored
      
      ## What changes were proposed in this pull request?
      
      - Replace `getParam` calls with `getOrDefault` calls.
      - Fix exception message to avoid unintended `TypeError`.
      - Add unit tests
      
      ## How was this patch tested?
      
      New unit tests.
      
      Author: zero323 <zero323@users.noreply.github.com>
      
      Closes #17891 from zero323/SPARK-20631.
      
      (cherry picked from commit 804949c6)
      Signed-off-by: Yanbo Liang <ybliang8@gmail.com>
      69786ea3
    • [SPARK-20686][SQL] PropagateEmptyRelation incorrectly handles aggregate without grouping · 8e097890
      Josh Rosen authored
      
      The query
      
      ```
      SELECT 1 FROM (SELECT COUNT(*) WHERE FALSE) t1
      ```
      
      should return a single row of output because the subquery is an aggregate without a group-by and thus should return a single row. However, Spark incorrectly returns zero rows.
      
      This is caused by SPARK-16208 / #13906, a patch which added an optimizer rule to propagate EmptyRelation through operators. The logic for handling aggregates is wrong: it checks whether aggregate expressions are non-empty for deciding whether the output should be empty, whereas it should be checking grouping expressions instead:
      
      An aggregate with non-empty grouping expressions will return one output row per group. If the input to the grouped aggregate is empty then all groups will be empty and thus the output will be empty. It doesn't matter whether the aggregation output columns include aggregate expressions since that won't affect the number of output rows.
      
      If the grouping expressions are empty, however, then the aggregate will always produce a single output row and thus we cannot propagate the EmptyRelation.
      
      The current implementation is incorrect and also misses an optimization opportunity by not propagating EmptyRelation in the case where a grouped aggregate has aggregate expressions (in other words, `SELECT COUNT(*) from emptyRelation GROUP BY x` would _not_ be optimized to `EmptyRelation` in the old code, even though it safely could be).
      
      This patch resolves this issue by modifying `PropagateEmptyRelation` to consider only the presence/absence of grouping expressions, not the aggregate functions themselves, when deciding whether to propagate EmptyRelation.
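
      A self-contained toy model of that decision (my own miniature plan classes, not Spark's actual `PropagateEmptyRelation` rule) might look like:

      ```scala
      // Toy plan nodes, only to illustrate the rule; Spark's real classes differ.
      sealed trait Plan
      case object EmptyRelation extends Plan
      case class Aggregate(groupingExprs: Seq[String], aggregateExprs: Seq[String], child: Plan) extends Plan

      // An aggregate over empty input can be collapsed only when it has grouping
      // expressions; a global aggregate still returns exactly one row (e.g. COUNT(*) = 0).
      def propagateEmpty(plan: Plan): Plan = plan match {
        case Aggregate(grouping, _, EmptyRelation) if grouping.nonEmpty => EmptyRelation
        case other => other
      }

      // propagateEmpty(Aggregate(Seq("x"), Seq("count(*)"), EmptyRelation))   => EmptyRelation
      // propagateEmpty(Aggregate(Seq.empty, Seq("count(*)"), EmptyRelation))  => unchanged
      ```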
      
      - Added end-to-end regression tests in `SQLQueryTest`'s `group-by.sql` file.
      - Updated unit tests in `PropagateEmptyRelationSuite`.
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #17929 from JoshRosen/fix-PropagateEmptyRelation.
      
      (cherry picked from commit a90c5cd8)
      Signed-off-by: Wenchen Fan <wenchen@databricks.com>
      8e097890
  9. May 09, 2017
    • [SPARK-17685][SQL] Make SortMergeJoinExec's currentVars is null when calling createJoinKey · 50f28dfe
      Yuming Wang authored
      
      ## What changes were proposed in this pull request?
      
      The following SQL query causes an `IndexOutOfBoundsException` when `LIMIT > 1310720`:
      ```sql
      CREATE TABLE tab1(int int, int2 int, str string);
      CREATE TABLE tab2(int int, int2 int, str string);
      INSERT INTO tab1 values(1,1,'str');
      INSERT INTO tab1 values(2,2,'str');
      INSERT INTO tab2 values(1,1,'str');
      INSERT INTO tab2 values(2,3,'str');
      
      SELECT
        count(*)
      FROM
        (
          SELECT t1.int, t2.int2
          FROM (SELECT * FROM tab1 LIMIT 1310721) t1
          INNER JOIN (SELECT * FROM tab2 LIMIT 1310721) t2
          ON (t1.int = t2.int AND t1.int2 = t2.int2)
        ) t;
      ```
      
      This pull request fixes this issue.
      
      ## How was this patch tested?
      
      unit tests
      
      Author: Yuming Wang <wgyumg@gmail.com>
      
      Closes #17920 from wangyum/SPARK-17685.
      
      (cherry picked from commit 771abeb4)
      Signed-off-by: Herman van Hovell <hvanhovell@databricks.com>
      50f28dfe
    • [SPARK-20627][PYSPARK] Drop the hadoop distribution name from the Python version · 12c937ed
      Holden Karau authored
      ## What changes were proposed in this pull request?
      
      Drop the hadoop distribution name from the Python version (PEP 440 - https://www.python.org/dev/peps/pep-0440/). We've been using the local version string to disambiguate between different hadoop versions packaged with PySpark, but PEP 440 states that local versions should not be used when publishing upstream. Since we no longer make PySpark pip packages for different hadoop versions, we can simply drop the hadoop information. If at a later point we need to start publishing different hadoop versions, we can look at making different packages or similar.
      
      ## How was this patch tested?
      
      Ran `make-distribution` locally
      
      Author: Holden Karau <holden@us.ibm.com>
      
      Closes #17885 from holdenk/SPARK-20627-remove-pip-local-version-string.
      
      (cherry picked from commit 1b85bcd9)
      Signed-off-by: Holden Karau <holden@us.ibm.com>
      12c937ed
    • [SPARK-20615][ML][TEST] SparseVector.argmax throws IndexOutOfBoundsException · f7a91a17
      Jon McLean authored
      
      ## What changes were proposed in this pull request?
      
      Added a check for the number of defined values. Previously the argmax function assumed that at least one value was defined if the vector size was greater than zero.
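
      A hedged sketch of the guarded logic (a standalone function with field names mirroring a sparse vector's `size`/`indices`/`values`, not the actual MLlib code):

      ```scala
      // Guarded argmax over a sparse vector's stored entries. The key point: do not
      // index into `values` before checking that any values are stored at all.
      def argmax(size: Int, indices: Array[Int], values: Array[Double]): Int = {
        if (size == 0) {
          -1                      // empty vector: no valid index
        } else if (values.isEmpty) {
          0                       // non-empty vector with no stored entries: everything is 0.0
        } else {
          var maxIdx = indices(0)
          var maxVal = values(0)
          var i = 1
          while (i < values.length) {
            if (values(i) > maxVal) { maxVal = values(i); maxIdx = indices(i) }
            i += 1
          }
          // Note: the real implementation also handles the case where the largest
          // stored value is <= 0 and unstored (implicitly zero) positions exist.
          maxIdx
        }
      }
      ```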
      
      ## How was this patch tested?
      
      Tests were added to the existing VectorsSuite to cover this case.
      
      Author: Jon McLean <jon.mclean@atsid.com>
      
      Closes #17877 from jonmclean/vectorArgmaxIndexBug.
      
      (cherry picked from commit be53a783)
      Signed-off-by: Sean Owen <sowen@cloudera.com>
      f7a91a17
  10. May 05, 2017
  11. May 02, 2017
    • [SPARK-20558][CORE] clear InheritableThreadLocal variables in SparkContext when stopping it · d10b0f65
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      To better understand this problem, let's take a look at an example first:
      ```
      object Main {
        def main(args: Array[String]): Unit = {
          var t = new Test
          new Thread(new Runnable {
            override def run() = {}
          }).start()
          println("first thread finished")
      
          t.a = null
          t = new Test
          new Thread(new Runnable {
            override def run() = {}
          }).start()
        }
      
      }
      
      class Test {
        var a = new InheritableThreadLocal[String] {
          override protected def childValue(parent: String): String = {
            println("parent value is: " + parent)
            parent
          }
        }
        a.set("hello")
      }
      ```
      The result is:
      ```
      parent value is: hello
      first thread finished
      parent value is: hello
      parent value is: hello
      ```
      
      Once an `InheritableThreadLocal` has been set a value, child threads will inherit its value as long as it has not been GCed, so setting the variable which holds the `InheritableThreadLocal` to `null` doesn't work as we expected.
      
      In `SparkContext`, we have an `InheritableThreadLocal` for local properties; we should clear it when stopping `SparkContext`, or all future child threads will still inherit it, copy the properties, and waste memory.
      
      This is the root cause of https://issues.apache.org/jira/browse/SPARK-20548, which creates/stops `SparkContext` many times, ends up with a lot of `InheritableThreadLocal` values alive, and causes OOM when starting new threads in the internal thread pools.
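
      A minimal illustration of the idea (not the actual SparkContext code): clear the thread-local explicitly during shutdown so later threads no longer inherit, and thereby retain, its contents.

      ```scala
      import java.util.Properties

      // Toy component holding an InheritableThreadLocal, mirroring the pattern above.
      class Component {
        private val localProperties = new InheritableThreadLocal[Properties] {
          override protected def initialValue(): Properties = new Properties()
        }

        def setProperty(k: String, v: String): Unit = localProperties.get().setProperty(k, v)

        def stop(): Unit = {
          // Remove the value for the current thread; threads started after this point
          // no longer copy (and keep alive) the old Properties object.
          localProperties.remove()
        }
      }
      ```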
      
      ## How was this patch tested?
      
      N/A
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #17833 from cloud-fan/core.
      
      (cherry picked from commit b946f316)
      Signed-off-by: Wenchen Fan <wenchen@databricks.com>
      d10b0f65
  12. May 01, 2017
    • [SPARK-20540][CORE] Fix unstable executor requests. · 5915588a
      Ryan Blue authored
      
      There are two problems fixed in this commit. First, the
      ExecutorAllocationManager sets a timeout to avoid requesting executors
      too often. However, the timeout is always updated based on its value and
      a timeout, not the current time. If the call is delayed by locking for
      more than the ongoing scheduler timeout, the manager will request more
      executors on every run. This seems to be the main cause of SPARK-20540.
      
      The second problem is that the total number of requested executors is
      not tracked by the CoarseGrainedSchedulerBackend. Instead, it calculates
      the value based on the current status of 3 variables: the number of
      known executors, the number of executors that have been killed, and the
      number of pending executors. But, the number of pending executors is
      never less than 0, even though there may be more known than requested.
      When executors are killed and not replaced, this can cause the request
      sent to YARN to be incorrect because there were too many executors due
      to the scheduler's state being slightly out of date. This is fixed by tracking
      the currently requested size explicitly.
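
      A very rough sketch of the two ideas combined (illustrative names only, nothing like the real `ExecutorAllocationManager`/`CoarseGrainedSchedulerBackend` code):

      ```scala
      // Track what we actually asked the cluster manager for, and compute the next
      // allowed request time from the current clock, not from the previous deadline.
      class ExecutorRequestTracker(clock: () => Long, intervalMs: Long) {
        private var requestedTotal = 0
        private var nextRequestTime = 0L

        def maybeRequest(desiredTotal: Int)(send: Int => Unit): Unit = {
          val now = clock()
          if (desiredTotal != requestedTotal && now >= nextRequestTime) {
            send(desiredTotal)
            requestedTotal = desiredTotal        // remember the requested size explicitly
            nextRequestTime = now + intervalMs   // reset from `now`, so a delayed call
                                                 // cannot trigger a request on every run
          }
        }
      }
      ```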
      
      ## How was this patch tested?
      
      Existing tests.
      
      Author: Ryan Blue <blue@apache.org>
      
      Closes #17813 from rdblue/SPARK-20540-fix-dynamic-allocation.
      
      (cherry picked from commit 2b2dd08e)
      Signed-off-by: Marcelo Vanzin <vanzin@cloudera.com>
      5915588a
    • [SPARK-20517][UI] Fix broken history UI download link · 868b4a1a
      jerryshao authored
      
      The download link in the history server UI is constructed as:
      
      ```
       <td><a href="{{uiroot}}/api/v1/applications/{{id}}/{{num}}/logs" class="btn btn-info btn-mini">Download</a></td>
      ```
      
      Here the `num` field represents the number of attempts, which is not what the REST API expects. In the REST API, if the attempt id does not exist the URL should be `api/v1/applications/<id>/logs`; otherwise it should be `api/v1/applications/<id>/<attemptId>/logs`. Using `<num>` as `<attemptId>` leads to a "no such app" error.
      
      Manual verification.
      
      CC ajbozarth, can you please review this change, since you added this feature before? Thanks!
      
      Author: jerryshao <sshao@hortonworks.com>
      
      Closes #17795 from jerryshao/SPARK-20517.
      
      (cherry picked from commit ab30590f)
      Signed-off-by: Marcelo Vanzin <vanzin@cloudera.com>
      868b4a1a
  13. Apr 28, 2017
  14. Apr 25, 2017
    • [SPARK-20439][SQL][BACKPORT-2.1] Fix Catalog API listTables and getTable when failed to fetch table metadata · 6696ad0e
      Xiao Li authored
      
      ### What changes were proposed in this pull request?
      
      This PR is to backport https://github.com/apache/spark/pull/17730 to Spark 2.1
      `spark.catalog.listTables` and `spark.catalog.getTable` do not work if we are unable to retrieve table metadata for any reason (e.g., the table serde class is not accessible or the table type is not accepted by Spark SQL). After this PR, the APIs still return the corresponding Table, just without the description and tableType.
      
      ### How was this patch tested?
      Added a test case
      
      Author: Xiao Li <gatorsmile@gmail.com>
      
      Closes #17760 from gatorsmile/backport-17730.
      6696ad0e
    • Patrick Wendell · 8460b090
    • Patrick Wendell
    • [SPARK-20239][CORE][2.1-BACKPORT] Improve HistoryServer's ACL mechanism · 359382c0
      jerryshao authored
      Current SHS (Spark History Server) has two different ACLs:
      
      * ACL of the base URL. It is controlled by "spark.acls.enabled" or "spark.ui.acls.enabled"; with this enabled, only the users configured in "spark.admin.acls" (or group) or "spark.ui.view.acls" (or group), or the user who started the SHS, can list all the applications, otherwise none can be listed. This also affects the REST APIs that list the summary of all apps and of a single app.
      * Per-application ACL. This is controlled by "spark.history.ui.acls.enabled". With this enabled, only the history admin user and the user/group who ran the app can access its details.
      
      With these two ACLs, we may encounter several unexpected behaviors:
      
      1. If the base URL's ACL (`spark.acls.enable`) is enabled but user "A" has no view permission, user "A" cannot see the app list but can still access the details of its own app.
      2. If the base URL's ACL (`spark.acls.enable`) is disabled, then user "A" can download any application's event log, even one not run by user "A".
      3. Changes to the Live UI's ACL also affect the History UI's ACL, since they share the same conf file.
      
      These unexpected behaviors occur mainly because we have two different ACLs; ideally we should have only one to manage everything.
      
      So to improve SHS's ACL mechanism, here in this PR proposed to:
      
      1. Disable "spark.acls.enable" and only use "spark.history.ui.acls.enable" for history server.
      2. Check permission for event-log download REST API.
      
      With this PR:
      
      1. An admin user can see and download the list of all applications, as well as application details.
      2. A normal user can see the list of all applications, but can only download and check the details of applications accessible to them.
      
      New UTs are added; also verified on a real cluster.
      
      CC tgravescs vanzin, please help review this change, since it modifies the semantics you implemented previously. Thanks a lot.
      
      Author: jerryshao <sshao@hortonworks.com>
      
      Closes #17755 from jerryshao/SPARK-20239-2.1-backport.
      359382c0
    • [SPARK-20404][CORE] Using Option(name) instead of Some(name) · 2d47e1aa
      Sergey Zhemzhitsky authored
      
      Use Option(name) instead of Some(name) to prevent runtime failures when using accumulators created like the following:
      ```
      sparkContext.accumulator(0, null)
      ```
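
      For context, a tiny example of the difference (standard Scala behavior):

      ```scala
      // Option(x) collapses a null into None, while Some(x) happily wraps the null,
      // which can then blow up later when the wrapped value is used.
      val name: String = null
      assert(Option(name).isEmpty)       // None
      assert(Some(name).isDefined)       // Some(null) -- the null is still inside
      ```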
      
      Author: Sergey Zhemzhitsky <szhemzhitski@gmail.com>
      
      Closes #17740 from szhem/SPARK-20404-null-acc-names.
      
      (cherry picked from commit 0bc7a902)
      Signed-off-by: Sean Owen <sowen@cloudera.com>
      2d47e1aa
    • [SPARK-20455][DOCS] Fix Broken Docker IT Docs · 65990fc5
      Armin Braun authored
      
      ## What changes were proposed in this pull request?
      
      Just added the Maven `test` goal.
      
      ## How was this patch tested?
      
      No test needed, just a trivial documentation fix.
      
      Author: Armin Braun <me@obrown.io>
      
      Closes #17756 from original-brownbear/SPARK-20455.
      
      (cherry picked from commit c8f12195)
      Signed-off-by: Sean Owen <sowen@cloudera.com>
      65990fc5
    • [SPARK-20451] Filter out nested mapType datatypes from sort order in randomSplit · 42796659
      Sameer Agarwal authored
      
      ## What changes were proposed in this pull request?
      
      In `randomSplit`, it is possible that the underlying dataset doesn't guarantee the ordering of rows in its constituent partitions each time a split is materialized, which could result in overlapping splits.
      
      To prevent this, as part of SPARK-12662, we explicitly sort each input partition to make the ordering deterministic. Given that `MapTypes` cannot be sorted, this patch explicitly prunes them out from the sort order. Additionally, if the resulting sort order is empty, this patch then materializes the dataset to guarantee determinism.
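
      A sketch of the pruning step under assumptions (the helper names are mine, built on Spark SQL's public type classes):

      ```scala
      import org.apache.spark.sql.types.{ArrayType, DataType, MapType, StructType}

      // True if a type contains a MapType anywhere; maps have no ordering and so
      // cannot take part in a sort key.
      def containsMap(dt: DataType): Boolean = dt match {
        case _: MapType         => true
        case ArrayType(et, _)   => containsMap(et)
        case StructType(fields) => fields.exists(f => containsMap(f.dataType))
        case _                  => false
      }

      // Columns that can safely be used to build a deterministic sort order.
      def sortableColumns(schema: StructType): Seq[String] =
        schema.fields.filterNot(f => containsMap(f.dataType)).map(_.name).toSeq
      ```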
      
      ## How was this patch tested?
      
      Extended `randomSplit on reordered partitions` in `DataFrameStatSuite` to also test for dataframes with mapTypes and nested mapTypes.
      
      Author: Sameer Agarwal <sameerag@cs.berkeley.edu>
      
      Closes #17751 from sameeragarwal/randomsplit2.
      
      (cherry picked from commit 31345fde)
      Signed-off-by: Wenchen Fan <wenchen@databricks.com>
      42796659
  15. Apr 24, 2017
    • [SPARK-20450][SQL] Unexpected first-query schema inference cost with 2.1.1 · d99b49b1
      Eric Liang authored
      ## What changes were proposed in this pull request?
      
      https://issues.apache.org/jira/browse/SPARK-19611 fixes a regression from 2.0 where Spark silently fails to read case-sensitive fields missing a case-sensitive schema in the table properties. The fix is to detect this situation, infer the schema, and write the case-sensitive schema into the metastore.
      
      However this can incur an unexpected performance hit the first time such a problematic table is queried (and there is a high false-positive rate here since most tables don't actually have case-sensitive fields).
      
      This PR changes the default to NEVER_INFER (same behavior as 2.1.0). In 2.2, we can consider leaving the default as INFER_AND_SAVE.
      
      ## How was this patch tested?
      
      Unit tests.
      
      Author: Eric Liang <ekl@databricks.com>
      
      Closes #17749 from ericl/spark-20450.
      d99b49b1
  16. Apr 22, 2017
    • [SPARK-20407][TESTS][BACKPORT-2.1] ParquetQuerySuite 'Enabling/disabling ignoreCorruptFiles' flaky test · ba505805
      Bogdan Raducanu authored
      
      ## What changes were proposed in this pull request?
      
      SharedSQLContext.afterEach now calls DebugFilesystem.assertNoOpenStreams inside eventually.
      SQLTestUtils withTempDir calls waitForTasksToFinish before deleting the directory.
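
      For reference, a hedged sketch of what wrapping the check in ScalaTest's `eventually` looks like (the timeout and interval values here are illustrative, not the ones used in the patch):

      ```scala
      import org.apache.spark.DebugFilesystem
      import org.scalatest.concurrent.Eventually._
      import org.scalatest.time.SpanSugar._

      // Retry the open-stream assertion for a while instead of checking it once, so
      // streams closed asynchronously by finishing tasks don't make the suite flaky.
      def assertNoLeakedStreams(): Unit = {
        eventually(timeout(10.seconds), interval(200.milliseconds)) {
          DebugFilesystem.assertNoOpenStreams()
        }
      }
      ```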
      
      ## How was this patch tested?
      New test but marked as ignored because it takes 30s. Can be unignored for review.
      
      Author: Bogdan Raducanu <bogdan@databricks.com>
      
      Closes #17720 from bogdanrdc/SPARK-20407-BACKPORT2.1.
      ba505805
  17. Apr 21, 2017
  18. Apr 20, 2017
    • [SPARK-20409][SQL] fail early if aggregate function in GROUP BY · 66e7a8f1
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      It's illegal to have an aggregate function in GROUP BY, and we should fail at the analysis phase if this happens.
      
      ## How was this patch tested?
      
      new regression test
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #17704 from cloud-fan/minor.
      66e7a8f1
  19. Apr 19, 2017
  20. Apr 18, 2017
  21. Apr 17, 2017
    • [SPARK-20349][SQL][REVERT-BRANCH2.1] ListFunctions returns duplicate functions after using persistent functions · 3808b472
      Xiao Li authored
      
      Revert the changes of https://github.com/apache/spark/pull/17646 made in Branch 2.1, because it breaks the build. It needs the parser interface, but SessionCatalog in branch 2.1 does not have it.
      
      ### What changes were proposed in this pull request?
      
      The session catalog caches some persistent functions in the `FunctionRegistry`, so there can be duplicates. Our Catalog API `listFunctions` does not handle it.
      
      It would be better if the `SessionCatalog` API could de-duplicate the records, instead of leaving it to each API caller. In `FunctionRegistry`, our functions are identified by the unquoted string. Thus, this PR tries to parse it using our parser interface and then de-duplicate the names.
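
      A toy illustration of the de-duplication idea (this is not Spark's parser, just the shape of the fix; names and helpers are mine):

      ```scala
      // Normalize each function identifier into its parts (stripping backquotes and
      // case) and de-duplicate on the normalized form rather than on the raw string.
      def normalize(name: String): Seq[String] =
        name.split('.').map(_.stripPrefix("`").stripSuffix("`").toLowerCase).toSeq

      val listed  = Seq("db1.myFunc", "`db1`.`myfunc`", "db1.other")
      val deduped = listed.groupBy(normalize).map(_._2.head).toSeq
      // deduped has 2 entries: one "myfunc" spelling and "db1.other"
      ```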
      
      ### How was this patch tested?
      Added test cases.
      
      Author: Xiao Li <gatorsmile@gmail.com>
      
      Closes #17661 from gatorsmile/compilationFix17646.
      3808b472