I have a Django view function that handles redeeming a prize. I'm trying to eliminate a possible race condition when enforcing a redemption limit based on the count of a linked model (Redemption). At a high level, my view looks like this:
    def redeem(request, prize_id):
        # Get the prize model, which contains the limit attribute
        prize = Prize.objects.get(id=prize_id)
        # Compare the count of related redemption objects against the limit
        if prize.redemption_set.count() >= prize.redemption_limit:
            return error_page("Reached redemption limit")
        else:
            # Run some API calls that redeem the prize
            # Create a redemption object in the DB
            redemption = Redemption(prize=prize)
            redemption.save()
My main concern is that with concurrent requests, extra redemptions could occur if the count hasn't been updated by the time another request reads it. I've been looking at select_for_update() and atomic transactions, but I don't want errors to be raised if the row is locked. I mainly want the redeem calls to queue up and be rejected cleanly once the redemption limit is reached. I'll be running multiple web workers and my DB is MySQL.
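For reference, here's roughly the select_for_update() approach I was considering. This is just a sketch of my current understanding, and success_page() is a placeholder for whatever I'd actually return on success:

    from django.db import transaction

    def redeem(request, prize_id):
        with transaction.atomic():
            # Lock the Prize row so a concurrent request for the same prize
            # blocks here until this transaction commits or rolls back
            prize = Prize.objects.select_for_update().get(id=prize_id)
            if prize.redemption_set.count() >= prize.redemption_limit:
                return error_page("Reached redemption limit")
            # Run the API calls that redeem the prize
            # Record the redemption while the lock is still held
            Redemption.objects.create(prize=prize)
        return success_page()

From what I've read, on MySQL/InnoDB a second request would just wait on the row lock (up to the lock wait timeout) rather than error out, which sounds like the queueing behaviour I want, but I'm not sure whether this is the right pattern with multiple web workers.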
Thanks for any tips!