Commit 2bc5e061 authored by alyaxey's avatar alyaxey Committed by Shivaram Venkataraman

[SPARK-6246] [EC2] fixed support for more than 100 nodes

This is a small fix, but it is important for Amazon EC2 users because, as the ticket states, spark-ec2 currently "can't handle clusters with > 100 nodes".

Author: alyaxey <oleksii.sliusarenko@grammarly.com>

Closes #6267 from alyaxey/ec2_100_nodes_fix and squashes the following commits:

1e0d747 [alyaxey] [SPARK-6246] fixed support for more than 100 nodes
parent bcb1ff81
@@ -864,7 +864,11 @@ def wait_for_cluster_state(conn, opts, cluster_instances, cluster_state):
     for i in cluster_instances:
         i.update()
-    statuses = conn.get_all_instance_status(instance_ids=[i.id for i in cluster_instances])
+    max_batch = 100
+    statuses = []
+    for j in xrange(0, len(cluster_instances), max_batch):
+        batch = [i.id for i in cluster_instances[j:j + max_batch]]
+        statuses.extend(conn.get_all_instance_status(instance_ids=batch))
     if cluster_state == 'ssh-ready':
         if all(i.state == 'running' for i in cluster_instances) and \
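The batching pattern in the patch can be sketched in isolation. This is a minimal, self-contained sketch: `fake_describe_status` is a hypothetical stand-in for boto's `conn.get_all_instance_status` (which EC2 rejects when given more than 100 instance IDs), and `range` is used here where the Python 2 original uses `xrange`:

```python
# Sketch of the fix: split instance IDs into batches of at most 100
# and concatenate the per-batch results.
MAX_BATCH = 100

def fake_describe_status(instance_ids):
    # Hypothetical stand-in for conn.get_all_instance_status: enforces
    # the 100-ID limit and echoes one status record per requested ID.
    if len(instance_ids) > MAX_BATCH:
        raise ValueError("cannot describe more than %d instances per call" % MAX_BATCH)
    return [{"id": i, "state": "running"} for i in instance_ids]

def get_all_statuses(instance_ids, max_batch=MAX_BATCH):
    # Same loop shape as the patch: step through the ID list in
    # max_batch-sized slices and extend one combined result list.
    statuses = []
    for j in range(0, len(instance_ids), max_batch):
        batch = instance_ids[j:j + max_batch]
        statuses.extend(fake_describe_status(batch))
    return statuses

ids = ["i-%04d" % n for n in range(250)]
statuses = get_all_statuses(ids)
print(len(statuses))  # 250: three calls covering 100, 100, and 50 IDs
```

With 250 IDs the loop issues three calls instead of one oversized request, which is exactly why clusters larger than 100 nodes previously failed.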