
Consider the following Django ORM example:

from django.db import models

class A(models.Model):
    pass

class B(models.Model):
    a = models.ForeignKey('A', on_delete=models.CASCADE, null=True)
    # The concrete field type doesn't really matter here.
    b_key = models.IntegerField()

    class Meta:
        unique_together = (('a', 'b_key'),)

Now let's say I delete an instance of A that's linked to an instance of B. Normally, this is no big deal: Django can delete the A object, setting B.a = NULL, then delete B after the fact. This is usually fine because most databases don't consider NULL values in unique constraints, so even if you have b1 = B(a=a1, b_key='non-unique') and b2 = B(a=a2, b_key='non-unique'), and you delete both a1 and a2, that's not a problem because (NULL, 'non-unique') != (NULL, 'non-unique') because NULL != NULL.
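The standard behavior described above is easy to verify with SQLite (used here purely as a stand-in for "most databases"): two rows with a NULL foreign key don't violate a composite unique constraint.

```python
# Two rows with a NULL in the unique tuple coexist, because the
# constraint treats NULL != NULL (the standard behavior).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE b (a_id INTEGER, b_key TEXT, UNIQUE (a_id, b_key))")
conn.execute("INSERT INTO b VALUES (NULL, 'non-unique')")
conn.execute("INSERT INTO b VALUES (NULL, 'non-unique')")  # no IntegrityError
rows = conn.execute("SELECT COUNT(*) FROM b").fetchone()[0]
print(rows)  # 2
```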

However, that's not the case with SQL Server. SQL Server brilliantly treats NULL == NULL in unique indexes, which breaks this logic. The workaround, if you're writing raw SQL, is to add WHERE key IS NOT NULL when defining the unique constraint, but this DDL is generated for me by Django. While I can manually create the RunSQL migrations I'd need to drop all the original unique constraints and add the new filtered ones, it's definitely a shortcoming of the ORM or driver (I'm using pyodbc + django-pyodbc-azure). Is there some way to either coax Django into generating filtered unique constraints in the first place, or force it to delete rows in a certain order to circumvent this issue altogether, or some other general fix I could apply to SQL Server?
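For reference, the manual RunSQL route could be sketched like this. The table and constraint names below are hypothetical placeholders; the real ones generated by django-pyodbc-azure would need to be looked up first.

```python
# Build T-SQL that replaces a generated unique constraint with a
# filtered unique index that skips rows where the FK is NULL.
def filtered_unique_index_sql(table, columns, nullable_columns, name):
    cols = ", ".join(f"[{c}]" for c in columns)
    where = " AND ".join(f"[{c}] IS NOT NULL" for c in nullable_columns)
    drop = f"ALTER TABLE [{table}] DROP CONSTRAINT [{name}];"
    create = f"CREATE UNIQUE INDEX [{name}] ON [{table}] ({cols}) WHERE {where};"
    return drop + "\n" + create

# Placeholder names -- substitute the actual table/constraint names.
sql = filtered_unique_index_sql(
    "myapp_b", ["a_id", "b_key"], ["a_id"], "myapp_b_a_id_b_key_uniq"
)
print(sql)
```

The resulting string can then be dropped into a hand-written migration's `migrations.RunSQL(...)` operation, one per affected constraint.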

scnerd
  • Encountering terrible designs like these makes me want to flip all the tables... None of the Django-specific pyodbc variants seem to be maintained either. You can listen to the [pre_delete signal](https://docs.djangoproject.com/en/2.1/ref/signals/#pre-delete) to delete the related records in `B` before `A`, essentially reversing the cascade. – CoffeeBasedLifeform Nov 06 '18 at 21:08
  • That seems like a good solution... I've always used Postgres before, never had issues like this, then I join a new team that uses SQL Server and it's endlessly one issue after the next. MS should've stopped doing DB's after MS Access. – scnerd Nov 06 '18 at 22:20
  • If you are not expecting to add many more models, writing a custom migration would be the neater solution, since this is a problem on the database level that django-pyodbc-azure didn't pick up on when creating its own migrations. By the looks of it, there's also `django.db.backends.base.schema.BaseDatabaseSchemaEditor.sql_create_index`, which provides the SQL used to create the indices (and django-pyodbc-azure uses it). Changing the SQL there and rebuilding the database might help? – CoffeeBasedLifeform Nov 06 '18 at 22:56
  • For my use case I ended up taking your first suggestion, just manually writing the SQL to recreate the unique indices with the correct conditions. If you'd like to post that as an answer, I'll mark it as correct, and I'm intrigued if you have further suggestions about how to use the second approach as a more fundamental fix to the problem – scnerd Nov 07 '18 at 01:21
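The pre_delete suggestion from the comments could be wired up roughly as follows. This is a sketch only: it assumes the models from the question live in a hypothetical `myapp` app, and it hasn't been run against a live SQL Server.

```python
# Delete the dependent B rows *before* each A row is removed,
# reversing the cascade so the unique index never holds two
# (NULL, 'non-unique') rows at the same time.
from django.db.models.signals import pre_delete
from django.dispatch import receiver

from myapp.models import A  # hypothetical app path

@receiver(pre_delete, sender=A)
def delete_dependent_bs(sender, instance, **kwargs):
    # B.a is the FK back to A; its reverse accessor defaults to b_set.
    instance.b_set.all().delete()
```

Registering a receiver also disables Django's fast-delete path for `A`, so the signal fires for each instance even on queryset deletes.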

0 Answers