You can obtain an O(N log X) solution (where N is the sample size and X is the number of exclusions) by building a list of offsets and reducing the sample range by the number of excluded values. Then use a binary search to map the sampled values back to the full range by applying the offsets to increase them (i.e. performing the skips):
import random
from bisect import bisect_right
minValue = 111
maxValue = 999
exclusions = [121, 131, 141, 151] # must be sorted
offsets = [n-i for i,n in enumerate(exclusions)]
sampleSize = 15
def getSample():
    S = random.sample(range(minValue, maxValue + 1 - len(exclusions)), sampleSize)
    return [n + bisect_right(offsets, n) for n in S]
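To see why the offset mapping works, note that offsets[i] is the value of the i-th exclusion minus the number of exclusions before it, so bisect_right(offsets, n) counts how many excluded values a reduced sample n must skip over. A small sketch (using the same exclusions as above):

```python
from bisect import bisect_right

exclusions = [121, 131, 141, 151]
offsets = [n - i for i, n in enumerate(exclusions)]   # [121, 130, 139, 148]

# reduced value 120 precedes all exclusions: no shift needed
print(120 + bisect_right(offsets, 120))   # 120

# reduced value 121 must skip over the excluded 121: shifted to 122
print(121 + bisect_right(offsets, 121))   # 122

# reduced value 130 must skip over 121 and 131: shifted to 132
print(130 + bisect_right(offsets, 130))   # 132
```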
output:
print(getSample())
[574, 891, 953, 210, 331, 253, 117, 845, 572, 885, 699, 799, 563, 857, 532]
checking the validity of the output:
excludedSet = set(exclusions)
for _ in range(10000):
    s = getSample()
    if excludedSet.intersection(s):
        print("failed exclusions", s)
        break
else:
    print("all samples are compliant")
--> all samples are compliant
Note that this assumes the number of excluded values is reasonably small. If you are excluding whole subsets or patterns of values, a different approach would probably be better, but you would have to be more specific in your question.
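For example, if the exclusions are defined by a predicate rather than a short list (the predicate below is hypothetical, not from the question), one simple alternative is to materialize the valid values once and sample from that list directly, at the cost of O(range size) memory:

```python
import random

minValue, maxValue = 111, 999

# hypothetical exclusion rule: drop every value ending in 1
def isExcluded(n):
    return n % 10 == 1

# build the valid values once, then sample without any adjustment step
validValues = [n for n in range(minValue, maxValue + 1) if not isExcluded(n)]
sample = random.sample(validValues, 15)
print(sample)
```

This trades the O(N log X) per-sample cost for a one-time setup, which pays off when the exclusions are numerous or pattern-based.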