  1. Dec 01, 2015
      [SPARK-12004] Preserve the RDD partitioner through RDD checkpointing · 60b541ee
      Tathagata Das authored
      The solution is to save the RDD partitioner in a separate file in the RDD checkpoint directory, i.e. `<checkpoint dir>/_partitioner`. In most cases, whether or not the partitioner is recovered does not affect correctness; it only affects performance. So this solution makes a best-effort attempt to save and recover the partitioner; if either step fails, the checkpointing itself is unaffected. This makes the patch safe and backward compatible.
      
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      
      Closes #9983 from tdas/SPARK-12004.
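The best-effort pattern this commit describes — try to save and recover the partitioner, and let either step fail without affecting the checkpoint — can be sketched in plain Scala. The `_partitioner` file name comes from the commit message; the helper names and the use of `java.nio.file` are illustrative, not Spark's actual code:

```scala
import java.nio.file.{Files, Path, Paths}

// Hypothetical sketch: write serialized partitioner bytes to
// <checkpoint dir>/_partitioner, swallowing any failure so the
// checkpoint itself is never affected.
def savePartitionerBestEffort(checkpointDir: Path, partitionerBytes: Array[Byte]): Boolean =
  try {
    Files.write(checkpointDir.resolve("_partitioner"), partitionerBytes)
    true
  } catch {
    case _: Exception => false // best effort: a failed save is not an error
  }

// Recovery is symmetric: return None instead of failing the read.
def readPartitionerBestEffort(checkpointDir: Path): Option[Array[Byte]] =
  try Some(Files.readAllBytes(checkpointDir.resolve("_partitioner")))
  catch { case _: Exception => None }
```

Either helper can fail silently; the caller treats a missing partitioner as a performance loss, not an error.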
      [SPARK-12030] Fix Platform.copyMemory to handle overlapping regions. · 2cef1cdf
      Nong Li authored
      This bug was exposed as memory corruption in Timsort, which uses copyMemory to copy
      large regions that can overlap. The prior implementation did not handle the overlapping
      case and always copied forward, corrupting the data when the regions overlapped.
      
      Author: Nong Li <nong@databricks.com>
      
      Closes #10068 from nongli/spark-12030.
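The overlap hazard is easy to reproduce with plain arrays: copying forward while the destination sits ahead of the source re-reads bytes that were already overwritten. A minimal overlap-safe copy — illustrative only, not Spark's actual `Platform.copyMemory`, which works on raw addresses via `Unsafe` — chooses the copy direction from the relative offsets:

```scala
// Illustrative overlap-safe copy within one array. The fix this commit
// makes is the same idea: pick the direction so no byte is read after
// it has been overwritten.
def copyOverlapSafe(a: Array[Byte], src: Int, dst: Int, len: Int): Unit =
  if (dst <= src) {
    // Forward copy is safe when the destination precedes the source.
    var i = 0
    while (i < len) { a(dst + i) = a(src + i); i += 1 }
  } else {
    // Destination overlaps ahead of the source: copy backward so each
    // byte is read before its slot is overwritten.
    var i = len - 1
    while (i >= 0) { a(dst + i) = a(src + i); i -= 1 }
  }
```

With an always-forward copy, the first branch applied to the `dst > src` case would duplicate the leading bytes instead of shifting the region.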
      [SPARK-12065] Upgrade Tachyon from 0.8.1 to 0.8.2 · 34e7093c
      Josh Rosen authored
      This commit upgrades the Tachyon dependency from 0.8.1 to 0.8.2.
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #10054 from JoshRosen/upgrade-to-tachyon-0.8.2.
      [SPARK-11821] Propagate Kerberos keytab for all environments · 6a8cf80c
      woj-i authored
      andrewor14: this is the same PR as in branch 1.5. cc harishreedharan
      
      Author: woj-i <wojciechindyk@gmail.com>
      
      Closes #9859 from woj-i/master.
      [SPARK-11905][SQL] Support Persist/Cache and Unpersist in Dataset APIs · 0a7bca2d
      gatorsmile authored
      Persist and Unpersist exist in both the RDD and DataFrame APIs. I think they are still very important in the Dataset API, but I'm not sure my understanding is correct. If so, could you help me check whether the implementation is acceptable?
      
      Please provide your opinions. marmbrus rxin cloud-fan
      
      Thank you very much!
      
      Author: gatorsmile <gatorsmile@gmail.com>
      Author: xiaoli <lixiao1983@gmail.com>
      Author: Xiao Li <xiaoli@Xiaos-MacBook-Pro.local>
      
      Closes #9889 from gatorsmile/persistDS.
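The semantics being asked for — compute once, serve from cache until explicitly released — can be illustrated with a tiny memoizing wrapper. This is a toy sketch of the persist/unpersist contract, not Spark's Dataset implementation (which caches the underlying query plan's results):

```scala
// Toy illustration of persist/unpersist semantics: the computation
// runs once per "persisted" period and reruns after unpersist().
final class Cacheable[T](compute: () => T) {
  private var cached: Option[T] = None
  var computeCount = 0 // exposed only so the behavior is observable

  def get(): T = cached match {
    case Some(v) => v // served from the "cache"
    case None =>
      computeCount += 1
      val v = compute()
      cached = Some(v)
      v
  }

  def unpersist(): Unit = cached = None // next get() recomputes
}
```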
      [SPARK-11954][SQL] Encoder for JavaBeans · fd95eeaf
      Wenchen Fan authored
      Create Java versions of `constructorFor` and `extractorFor` in `JavaTypeInference`.
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      This patch had conflicts when merged, resolved by
      Committer: Michael Armbrust <michael@databricks.com>
      
      Closes #9937 from cloud-fan/pojo.
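A bean encoder rests on standard JavaBeans conventions: getter/setter pairs discovered by reflection. The discovery step can be sketched with the JDK's own `java.beans.Introspector` — a simplification of what `JavaTypeInference` does, with the `Person` bean and `beanProperties` helper invented here for illustration:

```scala
import java.beans.Introspector

// A plain JavaBean-style class: no-arg constructor plus getter/setter pairs.
class Person {
  private var name: String = _
  private var age: Int = 0
  def getName: String = name
  def setName(n: String): Unit = name = n
  def getAge: Int = age
  def setAge(a: Int): Unit = age = a
}

// Discover the readable-and-writable properties, i.e. the fields a bean
// encoder would map to columns. Read-only properties such as "class"
// (from getClass) are filtered out.
def beanProperties(cls: Class[_]): Seq[(String, Class[_])] =
  Introspector.getBeanInfo(cls).getPropertyDescriptors.toSeq
    .filter(p => p.getReadMethod != null && p.getWriteMethod != null)
    .map(p => (p.getName, p.getPropertyType))
```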
      [SPARK-11856][SQL] add type cast if the real type is different but compatible with encoder schema · 9df24624
      Wenchen Fan authored
      When we build the `fromRowExpression` for an encoder, we set up a lot of "unresolved" stuff and lose the required data type, which may lead to a runtime error if the real type doesn't match the encoder's schema.
      For example, if we build an encoder for `case class Data(a: Int, b: String)` and the real type is `[a: int, b: long]`, we hit a runtime error saying that we can't construct class `Data` with an int and a long, because we lost the information that `b` should be a string.
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #9840 from cloud-fan/err-msg.
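The general idea — when the runtime type differs from but is compatible with the declared type, insert an explicit cast rather than fail — can be illustrated without Spark's analyzer. The helper below is hypothetical; real Spark inserts analogous cast expressions into the expression tree during analysis:

```scala
// Illustrative casts between "compatible" runtime and declared types.
// Anything neither listed nor already an instance of the target is an error,
// mirroring the runtime failure described in the commit message.
def castIfCompatible(value: Any, target: Class[_]): Any = (value, target) match {
  case (i: Int, t) if t == classOf[Long]    => i.toLong   // widening
  case (i: Int, t) if t == classOf[Double]  => i.toDouble // widening
  case (l: Long, t) if t == classOf[String] => l.toString // explicit conversion
  case (v, t) if t.isInstance(v)            => v          // already compatible
  case (v, t) =>
    throw new IllegalArgumentException(s"cannot cast ${v.getClass} to $t")
}
```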
      [SPARK-12068][SQL] use a single column in Dataset.groupBy and count will fail · 8ddc55f1
      Wenchen Fan authored
      The reason is that, for a single-column `RowEncoder` (or a single-field product encoder) used as the encoder for the grouping key, we should still combine the grouping attributes, even though there is only one grouping attribute.
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #10059 from cloud-fan/bug.
      [SPARK-12046][DOC] Fixes various ScalaDoc/JavaDoc issues · 69dbe6b4
      Cheng Lian authored
      This PR backports PR #10039 to master
      
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #10063 from liancheng/spark-12046.doc-fix.master.
      [SPARK-12060][CORE] Avoid memory copy in JavaSerializerInstance.serialize · 14011665
      Shixiong Zhu authored
      `JavaSerializerInstance.serialize` uses `ByteArrayOutputStream.toByteArray` to get the serialized data. `toByteArray` copies the content of the internal array into a new array. However, since the array is immediately converted to a `ByteBuffer`, we can avoid that copy.
      
      This PR adds `ByteBufferOutputStream` to access the protected `buf` field and convert it to a `ByteBuffer` directly.
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #10051 from zsxwing/SPARK-12060.
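Because `ByteArrayOutputStream` exposes its internal `buf` array and `count` field as `protected`, a subclass can wrap them in a `ByteBuffer` without copying. A minimal sketch of that trick (Spark's actual class lives in `org.apache.spark.util.io` and has a few more conveniences):

```scala
import java.io.ByteArrayOutputStream
import java.nio.ByteBuffer

// Wraps the internal buffer directly instead of copying it with
// toByteArray; the ByteBuffer's limit marks the number of valid bytes,
// so trailing unused capacity in buf is never exposed.
class ByteBufferOutputStream extends ByteArrayOutputStream {
  def toByteBuffer: ByteBuffer = ByteBuffer.wrap(buf, 0, count)
}
```

The trade-off of sharing rather than copying: the returned buffer aliases the stream's storage, so the stream must not be written to afterwards.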
      [SPARK-11949][SQL] Set field nullable property for GroupingSets to get correct results for null values · c87531b7
      Liang-Chi Hsieh authored
      
      JIRA: https://issues.apache.org/jira/browse/SPARK-11949
      
      The result of a cube plan uses an incorrect schema. The schema of the cube result should set the nullable property to true, because the grouping expressions will produce null values.
      
      Author: Liang-Chi Hsieh <viirya@appier.com>
      
      Closes #10038 from viirya/fix-cube.
      [SPARK-11898][MLLIB] Use broadcast for the global tables in Word2Vec · a0af0e35
      Yuhao Yang authored
      jira: https://issues.apache.org/jira/browse/SPARK-11898
      syn0Global and syn1Global in word2vec are quite large objects, each of size (vocab * vectorSize * 8), yet they are passed to workers using basic task serialization.
      
      Using broadcast can greatly improve performance. My benchmark shows that, for a 1M vocabulary and the default vectorSize of 100, changing to broadcast can:
      
      1. decrease worker memory consumption by 45%;
      2. decrease running time by 40%.
      
      This will also help extend the upper limit for Word2Vec.
      
      Author: Yuhao Yang <hhbyyh@gmail.com>
      
      Closes #9878 from hhbyyh/w2vBC.
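The size claim in the message is simple arithmetic. Using the message's own formula of vocab × vectorSize × 8 bytes per table, a 1M-word vocabulary at the default vectorSize of 100 gives 800 MB per table, which without broadcast is serialized into every task:

```scala
// Size of one global table per the formula in the commit message:
// vocab * vectorSize * 8 bytes.
val vocab         = 1000000L
val vectorSize    = 100L
val bytesPerTable = vocab * vectorSize * 8 // 800,000,000 bytes ≈ 800 MB
```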