Jan 14, 2014
- Ankur Dave authored
- Ankur Dave authored
- Ankur Dave authored
- Ankur Dave authored
Jan 13, 2014
- Ankur Dave authored
- Ankur Dave authored
- Ankur Dave authored
- Ankur Dave authored
- Ankur Dave authored
- Ankur Dave authored
  The loop occurred when numEdges < numVertices. This commit fixes it by allowing generateRandomEdges to generate a multigraph.
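  Why allowing a multigraph removes the loop can be sketched in plain Scala. This is an illustrative stand-in, not the actual GraphX implementation; the function name matches the commit but its signature and body here are assumptions:

  ```scala
  import scala.util.Random

  // Illustrative sketch only: the real generateRandomEdges lives in GraphX
  // and has a different signature. The point is that sampling targets
  // *with* replacement (i.e. allowing a multigraph) always terminates,
  // whereas insisting on distinct target vertices can never satisfy a
  // request for more edges than there are distinct vertices.
  case class Edge(src: Long, dst: Long)

  def generateRandomEdges(src: Long, numEdges: Int, numVertices: Int,
                          rng: Random): Seq[Edge] =
    Seq.fill(numEdges)(Edge(src, rng.nextInt(numVertices).toLong))

  val edges = generateRandomEdges(src = 0L, numEdges = 5, numVertices = 3,
                                  rng = new Random(42))
  // Five edges are produced even though only three distinct targets exist,
  // so some (src, dst) pairs necessarily repeat -- a multigraph.
  ```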
- Ankur Dave authored
- Ankur Dave authored
- Joseph E. Gonzalez authored
- Reynold Xin authored
- Reynold Xin authored
- Reynold Xin authored
- Reynold Xin authored
- Joseph E. Gonzalez authored
- Joseph E. Gonzalez authored
- Reynold Xin authored
- Reynold Xin authored
- Reynold Xin authored
- Reynold Xin authored
  Conflicts: graphx/src/main/scala/org/apache/spark/graphx/Pregel.scala
- Reynold Xin authored
- Joseph E. Gonzalez authored
  Improves the example code in the programming guide and adds serialization support for GraphImpl to address issues with failed closure capture.
- Ankur Dave authored
  The bug was due to a misunderstanding of the activeSetOpt parameter to Graph.mapReduceTriplets. Passing EdgeDirection.Both causes mapReduceTriplets to run only on edges with *both* vertices in the active set. This commit adds EdgeDirection.Either, which causes mapReduceTriplets to run on edges with *either* vertex in the active set. This is what connected components needed.
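  The Both-vs-Either distinction can be sketched in plain Scala. The types below are stand-ins for illustration, not the GraphX API:

  ```scala
  // Stand-in types; the real GraphX EdgeDirection and mapReduceTriplets differ.
  sealed trait ActiveDirection
  case object BothActive extends ActiveDirection   // EdgeDirection.Both semantics
  case object EitherActive extends ActiveDirection // EdgeDirection.Either semantics

  case class Edge(src: Long, dst: Long)

  // Which edges would mapReduceTriplets visit, given the active vertex set?
  def activeEdges(edges: Seq[Edge], active: Set[Long],
                  dir: ActiveDirection): Seq[Edge] = dir match {
    case BothActive   => edges.filter(e => active(e.src) && active(e.dst))
    case EitherActive => edges.filter(e => active(e.src) || active(e.dst))
  }

  val edges  = Seq(Edge(1L, 2L), Edge(2L, 3L), Edge(3L, 4L))
  val active = Set(1L, 2L)
  // Under Both, edge (2,3) is skipped because vertex 3 is inactive, so a
  // component ID could never propagate from vertex 2 to vertex 3.
  val both   = activeEdges(edges, active, BothActive)   // Seq(Edge(1,2))
  val either = activeEdges(edges, active, EitherActive) // Seq(Edge(1,2), Edge(2,3))
  ```

  Connected components needs the Either semantics precisely because labels must flow across edges from active vertices to their not-yet-active neighbors.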
- Ankur Dave authored
- Reynold Xin authored
- Reynold Xin authored
- Reynold Xin authored
  `sbt/sbt doc` used to fail. This fixed it.
- Ankur Dave authored
  Improves documentation and identifies a potential bug in the connected components (CC) calculation.
- Ankur Dave authored
- Ankur Dave authored
- Joseph E. Gonzalez authored
- Ankur Dave authored
- Ankur Dave authored
- Patrick Wendell authored
  Moved DStream and PairDStream to org.apache.spark.streaming.dstream. Similar to the package location of `org.apache.spark.rdd.RDD`, `DStream` has been moved from `org.apache.spark.streaming.DStream` to `org.apache.spark.streaming.dstream.DStream`. I know that the package name is a little long, but I think it's better to keep it consistent with Spark's structure. Also fixed persistence of windowed DStreams: the RDDs generated by a windowed DStream are essentially unions of underlying RDDs, and persisting these union RDDs would store numerous copies of the underlying data. Instead, setting the persistence level on the windowed DStream now sets the persistence level of the underlying DStream.
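  The persistence change can be sketched with stand-in classes in plain Scala (not the real streaming API): persist() on a windowed stream is forwarded to its parent, so the shared underlying RDDs are cached once rather than once per overlapping window union.

  ```scala
  // Stand-in classes for illustration; the real DStream API differs.
  class MockDStream {
    var persisted = false
    def persist(): this.type = { persisted = true; this }
  }

  // A windowed stream's RDDs are unions over the parent's RDDs, so
  // persisting the unions would duplicate the underlying data. Instead,
  // persist() delegates to the parent stream.
  class WindowedMockDStream(val parent: MockDStream) extends MockDStream {
    override def persist(): this.type = { parent.persist(); this }
  }

  val parent   = new MockDStream
  val windowed = new WindowedMockDStream(parent)
  windowed.persist()
  // parent.persisted == true; windowed.persisted == false
  ```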
- Ankur Dave authored
- Reynold Xin authored
  Remove the now-unneeded hostPort option. I noticed this was logging some scary error messages in various places. After I looked into it, this is no longer really used. I removed the option and rewrote the one remaining use case (it was unnecessary there anyway).
- Tathagata Das authored