Commit 34fc48fb authored by asmith26, committed by Sean Owen

[MINOR] Issue: Change "slice" vs "partition" in exception messages (and code?)

## What changes were proposed in this pull request?

I came across the term "slice" while running some Spark Scala code, and a Google search indicated that "slices" and "partitions" refer to the same thing; see:

- [This issue](https://issues.apache.org/jira/browse/SPARK-1701)
- [This pull request](https://github.com/apache/spark/pull/2305)
- [This StackOverflow answer](http://stackoverflow.com/questions/23436640/what-is-the-difference-between-an-rdd-partition-and-a-slice) and [this one](http://stackoverflow.com/questions/24269495/what-are-the-differences-between-slices-and-partitions-of-rdds)

This pull request therefore fixes the occurrences of "slice" I came across. Nonetheless, [it would appear](https://github.com/apache/spark/search?utf8=%E2%9C%93&q=slice&type=) there are still many references to "slice"/"slices", so I thought I'd raise this pull request to address the issue (sorry if this is the wrong place; I'm not too familiar with raising Apache issues).
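
As a minimal sketch of the equivalence (not part of the patch; the app name and local master are illustrative), the `numSlices` argument of `SparkContext.parallelize` simply sets the number of partitions of the resulting RDD:

```scala
import org.apache.spark.sql.SparkSession

object SlicesArePartitions {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .appName("SlicesArePartitions")
      .master("local[*]")
      .getOrCreate()

    // "numSlices" is the older name; it sets the partition count directly.
    val rdd = spark.sparkContext.parallelize(1 to 100, numSlices = 4)
    println(rdd.getNumPartitions) // prints 4

    spark.stop()
  }
}
```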

## How was this patch tested?

(Not tested locally; this is only a minor exception-message change.)

Please review http://spark.apache.org/contributing.html before opening a pull request.

Author: asmith26 <asmith26@users.noreply.github.com>

Closes #17565 from asmith26/master.
parent e1afc4dc
```diff
@@ -116,7 +116,7 @@ private object ParallelCollectionRDD {
    */
   def slice[T: ClassTag](seq: Seq[T], numSlices: Int): Seq[Seq[T]] = {
     if (numSlices < 1) {
-      throw new IllegalArgumentException("Positive number of slices required")
+      throw new IllegalArgumentException("Positive number of partitions required")
     }
     // Sequences need to be sliced at the same set of index positions for operations
     // like RDD.zip() to behave as expected
```
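
For context, a hedged sketch of how this message surfaces to a user (the local-mode session and the `count()` trigger are illustrative assumptions; `ParallelCollectionRDD.slice` runs when the RDD's partitions are first computed):

```scala
import org.apache.spark.sql.SparkSession

object PartitionMessageDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .appName("PartitionMessageDemo")
      .master("local[*]")
      .getOrCreate()
    try {
      // numSlices < 1 fails the validation above; the action forces
      // the partitions to be computed, which raises the exception.
      spark.sparkContext.parallelize(1 to 10, numSlices = 0).count()
    } catch {
      case e: IllegalArgumentException =>
        // After this patch: "Positive number of partitions required"
        println(e.getMessage)
    }
    spark.stop()
  }
}
```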
```diff
@@ -26,7 +26,7 @@ import java.util.List;
 /**
  * Computes an approximation to pi
- * Usage: JavaSparkPi [slices]
+ * Usage: JavaSparkPi [partitions]
  */
 public final class JavaSparkPi {
```
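
To make the renamed usage string concrete, here is a simplified Scala sketch (not the patched Java source) of how an optional `[partitions]` argument typically flows into `parallelize` in the pi example:

```scala
import scala.util.Random
import org.apache.spark.sql.SparkSession

object SparkPiSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .appName("SparkPiSketch")
      .master("local[*]")
      .getOrCreate()

    // The single optional CLI argument is the partition count (formerly "slices").
    val partitions = if (args.length > 0) args(0).toInt else 2
    val n = 100000 * partitions

    // Sample points in the unit square across the requested partitions.
    val inside = spark.sparkContext.parallelize(1 until n, partitions).map { _ =>
      val x = Random.nextDouble() * 2 - 1
      val y = Random.nextDouble() * 2 - 1
      if (x * x + y * y <= 1) 1 else 0
    }.reduce(_ + _)

    println(s"Pi is roughly ${4.0 * inside / (n - 1)}")
    spark.stop()
  }
}
```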
```diff
@@ -32,7 +32,7 @@ import org.apache.spark.sql.SparkSession;
 /**
  * Transitive closure on a graph, implemented in Java.
- * Usage: JavaTC [slices]
+ * Usage: JavaTC [partitions]
  */
 public final class JavaTC {
```
```diff
@@ -21,7 +21,7 @@ package org.apache.spark.examples
 import org.apache.spark.sql.SparkSession
 /**
- * Usage: BroadcastTest [slices] [numElem] [blockSize]
+ * Usage: BroadcastTest [partitions] [numElem] [blockSize]
  */
 object BroadcastTest {
   def main(args: Array[String]) {
```
```diff
@@ -23,7 +23,7 @@ import org.apache.spark.sql.SparkSession
 /**
- * Usage: MultiBroadcastTest [slices] [numElem]
+ * Usage: MultiBroadcastTest [partitions] [numElem]
  */
 object MultiBroadcastTest {
   def main(args: Array[String]) {
```
```diff
@@ -100,7 +100,7 @@ object SparkALS {
         ITERATIONS = iters.getOrElse("5").toInt
         slices = slices_.getOrElse("2").toInt
       case _ =>
-        System.err.println("Usage: SparkALS [M] [U] [F] [iters] [slices]")
+        System.err.println("Usage: SparkALS [M] [U] [F] [iters] [partitions]")
         System.exit(1)
     }
```
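
The `getOrElse` calls above belong to an optional-positional-argument pattern; the following is a hedged, self-contained reconstruction of the surrounding match (the variable names come from the hunk; the padding logic is an assumption):

```scala
object SparkALSArgsSketch {
  // Defaults mirror the getOrElse fallbacks visible in the hunk.
  var M = 0; var U = 0; var F = 0; var ITERATIONS = 0; var slices = 0

  def main(args: Array[String]): Unit = {
    // Pad to five optional positional arguments, then default any omitted one.
    val options = (0 to 4).map(i => if (i < args.length) Some(args(i)) else None)
    options.toArray match {
      case Array(m, u, f, iters, slices_) =>
        M = m.getOrElse("100").toInt
        U = u.getOrElse("500").toInt
        F = f.getOrElse("10").toInt
        ITERATIONS = iters.getOrElse("5").toInt
        slices = slices_.getOrElse("2").toInt
      case _ =>
        System.err.println("Usage: SparkALS [M] [U] [F] [iters] [partitions]")
        System.exit(1)
    }
    println(s"M=$M U=$U F=$F iterations=$ITERATIONS partitions=$slices")
  }
}
```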
```diff
@@ -28,7 +28,7 @@ import org.apache.spark.sql.SparkSession
 /**
  * Logistic regression based classification.
- * Usage: SparkLR [slices]
+ * Usage: SparkLR [partitions]
  *
  * This is an example implementation for learning how to use Spark. For more conventional use,
  * please refer to org.apache.spark.ml.classification.LogisticRegression.
```