We can use a technique commonly used in machine learning to partition data into training, validation, and test datasets.
The steps are:
- Use random.shuffle to create a random ordering of the data
- Partition the shuffled data based upon the sizes of the desired sublists
Code
import random

def partition_list(a):
    """Partition a list into sublists with 80%/10%/10% splits."""
    b = a[:]              # shallow copy so the caller's list is left untouched
    random.shuffle(b)     # in-place shuffle of the copy
    n = len(b)
    # Disjoint slices that together cover all of the elements
    a1 = b[:int(0.8*n)]
    a2 = b[int(0.8*n):int(0.9*n)]
    a3 = b[int(0.9*n):]
    return a1, a2, a3
Test Code
A = list(range(285))  # test using a list of the numbers 0 to 284
a1, a2, a3 = partition_list(A)
print('a1:', len(a1))
print('a2:', len(a2))
print('a3:', len(a3))
Output
a1: 228
a2: 28
a3: 29
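The same shuffle-then-slice idea extends to any number of splits. Below is a small self-contained sketch (the `split_fractions` name and `fractions` parameter are hypothetical, not part of the code above) that generalizes the fixed 80%/10%/10% split and asserts that the sublists are disjoint and together cover the input:

```python
import random

def split_fractions(data, fractions=(0.8, 0.1, 0.1)):
    """Shuffle a copy of data and split it into len(fractions) sublists.

    Cut points are cumulative fractions of len(data); the last split
    takes the remainder so no element is lost to float rounding.
    """
    b = data[:]            # shallow copy so the caller's list is untouched
    random.shuffle(b)      # in-place shuffle of the copy
    n = len(b)
    cuts = [0]
    running = 0.0
    for f in fractions[:-1]:
        running += f
        cuts.append(int(running * n))
    cuts.append(n)         # final cut is exactly n
    return [b[cuts[i]:cuts[i + 1]] for i in range(len(fractions))]

parts = split_fractions(list(range(285)))
print([len(p) for p in parts])                     # → [228, 28, 29]
# Disjoint and covering: the sorted union equals the original data
assert sorted(sum(parts, [])) == list(range(285))
```

The sizes match the output above because the cut points `int(0.8*n)` and `int(0.9*n)` are computed the same way.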