Consider the following Django ORM example:
```python
class A(models.Model):
    pass

class B(models.Model):
    a = models.ForeignKey('A', on_delete=models.CASCADE, null=True)
    b_key = models.SomeOtherField(doesnt_really_matter=True)

    class Meta:
        unique_together = (('a', 'b_key'),)
```
Now let's say I delete an instance of `A` that's linked to an instance of `B`. Normally this is no big deal: Django can delete the `A` object by first setting `B.a = NULL`, then deleting `B` after the fact. This is usually fine because most databases don't consider `NULL` values in unique constraints, so even if you have `b1 = B(a=a1, b_key='non-unique')` and `b2 = B(a=a2, b_key='non-unique')`, and you delete both `a1` and `a2`, that's not a problem: `(NULL, 'non-unique') != (NULL, 'non-unique')` because `NULL != NULL`.
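As a quick illustration of that behavior (using SQLite from the Python standard library, which, like most databases, treats `NULL`s as distinct for unique constraints; the table and column names are just stand-ins for what Django would generate):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Rough analogue of model B: a nullable FK column plus the unique_together pair.
conn.execute("CREATE TABLE b (a_id INTEGER, b_key TEXT, UNIQUE (a_id, b_key))")

# Two rows with a_id = NULL and the same b_key both insert cleanly,
# because NULL != NULL as far as the unique constraint is concerned.
conn.execute("INSERT INTO b (a_id, b_key) VALUES (NULL, 'non-unique')")
conn.execute("INSERT INTO b (a_id, b_key) VALUES (NULL, 'non-unique')")

print(conn.execute("SELECT COUNT(*) FROM b").fetchone()[0])  # 2
```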
However, that's not the case with SQL Server. SQL Server brilliantly defines `NULL == NULL` for the purposes of unique constraints, which breaks this logic. The workaround, if you're writing raw SQL, is to add `WHERE key IS NOT NULL` when defining the unique constraint (a filtered index), but this DDL is generated for me by Django. While I could manually create the `RunSQL` migrations I'd need to drop all the original unique constraints and add the new filtered ones, it's definitely a shortcoming of the ORM or the driver (I'm using pyodbc + django-pyodbc-azure). Is there some way to coax Django into generating filtered unique constraints in the first place, to force it to delete rows in a certain order to circumvent this issue altogether, or some other general fix I could apply on the SQL Server side?
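For reference, the filtered-constraint workaround itself is easy to sketch. SQLite happens to support the same `WHERE ... IS NOT NULL` syntax for partial unique indexes, so the idea can be demonstrated with the standard library (on SQL Server the equivalent `CREATE UNIQUE INDEX ... WHERE ... IS NOT NULL` statement is what would go into a hand-written `RunSQL` migration; the table and index names below are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE b (a_id INTEGER, b_key TEXT)")

# Filtered (partial) unique index: rows with a NULL a_id are exempt.
conn.execute(
    "CREATE UNIQUE INDEX b_a_key_uniq ON b (a_id, b_key) "
    "WHERE a_id IS NOT NULL"
)

# Duplicate (NULL, 'x') rows are fine -- the index ignores them.
conn.execute("INSERT INTO b VALUES (NULL, 'x')")
conn.execute("INSERT INTO b VALUES (NULL, 'x')")

# Duplicate non-NULL pairs are still rejected.
conn.execute("INSERT INTO b VALUES (1, 'x')")
try:
    conn.execute("INSERT INTO b VALUES (1, 'x')")
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False

print(duplicate_allowed)  # False
```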