fb0db772
[SPARK-2871] [PySpark] add zipWithIndex() and zipWithUniqueId()
    Davies Liu authored
    RDD.zipWithIndex()
    
            Zips this RDD with its element indices.
    
            The ordering is first based on the partition index and then the
            ordering of items within each partition. So the first item in
            the first partition gets index 0, and the last item in the last
            partition receives the largest index.
    
        This method needs to trigger a Spark job when this RDD contains
        more than one partition.
    
            >>> sc.parallelize(range(4), 2).zipWithIndex().collect()
            [(0, 0), (1, 1), (2, 2), (3, 3)]
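
        The extra Spark job exists because assigning global indices
        requires knowing how many items precede each partition. A minimal
        sketch of the technique (not the actual PySpark implementation),
        built on mapPartitionsWithIndex:

            def zip_with_index(rdd):
                # Counting each partition is the extra job noted above.
                sizes = rdd.mapPartitions(
                    lambda it: [sum(1 for _ in it)]).collect()
                # Partition k starts after the items of partitions 0..k-1.
                starts = [0]
                for size in sizes[:-1]:
                    starts.append(starts[-1] + size)
                def assign(k, it):
                    for i, item in enumerate(it):
                        yield (item, starts[k] + i)
                return rdd.mapPartitionsWithIndex(assign)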
    
    RDD.zipWithUniqueId()
    
            Zips this RDD with generated unique Long ids.
    
        Items in the kth partition will get ids k, n+k, 2*n+k, ..., where
        n is the number of partitions. So the assigned ids may have gaps,
        but this method won't trigger a Spark job, unlike L{zipWithIndex}.
    
            >>> sc.parallelize(range(4), 2).zipWithUniqueId().collect()
            [(0, 0), (2, 1), (1, 2), (3, 3)]
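
        No job is needed here because an item's id depends only on its
        position within its own partition and the partition count. A
        minimal sketch of the assignment (not the PySpark source):

            def zip_with_unique_id(rdd):
                n = rdd.getNumPartitions()
                def assign(k, it):
                    # Item i of partition k gets id i*n + k.
                    for i, item in enumerate(it):
                        yield (item, i * n + k)
                return rdd.mapPartitionsWithIndex(assign)

        In the doctest above n is 2: item 2 sits at position 0 of
        partition 1, so it gets id 0*2 + 1 = 1, while item 1 at position
        1 of partition 0 gets id 1*2 + 0 = 2.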
    
    Author: Davies Liu <davies.liu@gmail.com>
    
    Closes #2092 from davies/zipWith and squashes the following commits:
    
    cebe5bf [Davies Liu] improve test cases, reverse the order of index
    0d2a128 [Davies Liu] add zipWithIndex() and zipWithUniqueId()