As already suggested, there's no limit on the number of elements an RDD can hold.
There is, however, a limit on how much memory a single RDD record can occupy: Spark caps the size of a partition at 2GB (see SPARK-6235). Since each partition is a collection of records, the theoretical upper bound for a single record is also 2GB, reached only when a partition contains exactly one record.
In practice, records larger than a few megabytes are discouraged, because the 2GB cap will likely force you to increase the number of partitions well beyond what would otherwise be optimal. Spark's optimizations are designed around handling as many records as you like (given sufficient resources), not records that are as large as you like.
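If you do end up with large records, one way to stay clear of that cap is to estimate partition sizes and repartition when they get too big. Here's a minimal Scala sketch; the RDD `largeRecords` and the 256MB per-partition target are just assumptions for illustration:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.util.SizeEstimator

object PartitionSizing {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("partition-sizing")
      .master("local[*]")
      .getOrCreate()
    val sc = spark.sparkContext

    // Hypothetical RDD of large records (~8MB byte arrays), standing in for real data.
    val largeRecords = sc.parallelize(1 to 64).map(_ => new Array[Byte](8 * 1024 * 1024))

    // Roughly estimate the in-memory size of each partition by summing its records.
    val partitionSizes: Array[Long] = largeRecords
      .mapPartitions(iter => Iterator(iter.map(r => SizeEstimator.estimate(r)).sum))
      .collect()

    // Assumed target: keep every partition far below the 2GB limit.
    val targetBytes = 256L * 1024 * 1024
    val resized =
      if (partitionSizes.exists(_ > targetBytes)) {
        // Spread the same records over enough partitions to meet the target.
        val numPartitions = math.ceil(partitionSizes.sum.toDouble / targetBytes).toInt.max(1)
        largeRecords.repartition(numPartitions)
      } else largeRecords

    println(s"partition sizes (bytes): ${partitionSizes.mkString(", ")}")
    println(s"partitions after resizing: ${resized.getNumPartitions}")
    spark.stop()
  }
}
```

Keep in mind that `repartition` triggers a full shuffle, which is exactly the kind of extra cost the answer above warns about: with smaller records you'd usually never need to do this by hand.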