How do I start a cluster with slaves that each have a 100 GB drive?
./spark-ec2 -k xx -i xx.pem -s 1 --hadoop-major-version=yarn --region=us-east-1 \
--zone=us-east-1b --spark-version=1.6.1 \
--vpc-id=vpc-xx --subnet-id=subnet-xx --ami=ami-yyyyyy \
launch cluster-test
I used an AMI whose root volume was 100 GB, yet spark-ec2 launched the instances with only an 8 GB drive. How do I increase that limit to 100 GB?
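In case it helps narrow things down: spark-ec2 does expose --ebs-vol-size (in GB), along with --ebs-vol-num and --ebs-vol-type, but I'm not sure whether these resize the root device or just attach additional volumes. A variant of my launch command using them would look like this:

# Sketch only: same launch as above, but asking spark-ec2 for a 100 GB
# EBS volume per slave. Whether this replaces the 8 GB root drive or
# attaches a second volume is exactly what I'm unsure about.
./spark-ec2 -k xx -i xx.pem -s 1 --hadoop-major-version=yarn --region=us-east-1 \
  --zone=us-east-1b --spark-version=1.6.1 \
  --vpc-id=vpc-xx --subnet-id=subnet-xx --ami=ami-yyyyyy \
  --ebs-vol-size=100 --ebs-vol-num=1 --ebs-vol-type=gp2 \
  launch cluster-test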