I'm still trying to understand the correct way to validate a Django model object using a custom validator at the model level. I know that validation is usually done within a form or model form. However, I want to ensure the integrity of my data at the model level if I'm interacting with it via the ORM in the Python shell. Here's my current approach:
from django.db import models
from django.core.exceptions import ValidationError


def validate_gender(value):
    """ Custom validator """
    if value not in ('m', 'f', 'M', 'F'):
        raise ValidationError(u'%s is not a valid value for gender.' % value)


class Person(models.Model):
    name = models.CharField(max_length=128)
    age = models.IntegerField()
    gender = models.CharField(max_length=1, validators=[validate_gender])

    def save(self, *args, **kwargs):
        """ Override Person's save so field validators run on every save """
        self.full_clean(exclude=None)
        super(Person, self).save(*args, **kwargs)
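For context, this is roughly the shell interaction I have in mind (the field values are made up, and the printed error structure is only what I'd expect full_clean() to produce):

from django.core.exceptions import ValidationError

p = Person(name='Alice', age=30, gender='x')
try:
    p.save()  # full_clean() runs first, so the bad value should never reach the database
except ValidationError as e:
    print(e.message_dict)  # expect something like {'gender': [u'x is not a valid value for gender.']}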
Here are my questions:
Should I create a custom validation function, designate it as a validator, and then override Person's save() method as I've done above? (I know I could restrict gender with the 'choices' field option, but I wrote 'validate_gender' purely for illustration; a choices-based sketch follows for comparison.)
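For comparison, here is a rough sketch of the choices-based version I'm referring to (GENDER_CHOICES is just an illustrative name, and I've only listed the lowercase values):

GENDER_CHOICES = (
    ('m', 'Male'),
    ('f', 'Female'),
)

class Person(models.Model):
    name = models.CharField(max_length=128)
    age = models.IntegerField()
    # full_clean() rejects any value not listed in choices,
    # so no separate validator function is needed.
    gender = models.CharField(max_length=1, choices=GENDER_CHOICES)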
If I really want to ensure the integrity of my data, should I write not only Django unit tests at the model layer but also equivalent database-level tests using Python/Psycopg? Django unit tests that expect a ValidationError only exercise the model's understanding of the database schema, against a test copy of the database. Even if I use South for migrations, the database-level constraints it creates are limited to what Django can understand and translate into Postgres constraints. If I need a custom constraint that Django can't express, I could still insert data that violates it by working with the database directly from the psql terminal.
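To make that second question concrete, here is a sketch of the kind of database-level test I'm imagining with psycopg2. The connection parameters, the myapp_person table name, and the gender_check constraint are all placeholders, and the CHECK constraint itself would have been added by hand with raw SQL:

import unittest
import psycopg2

# Assumes a constraint created outside Django, e.g.:
#   ALTER TABLE myapp_person
#       ADD CONSTRAINT gender_check CHECK (gender IN ('m', 'f', 'M', 'F'));

class PersonGenderConstraintTest(unittest.TestCase):

    def setUp(self):
        # Placeholder connection details for a test database.
        self.conn = psycopg2.connect(dbname='test_mydb', user='myuser')

    def tearDown(self):
        self.conn.rollback()
        self.conn.close()

    def test_invalid_gender_rejected_by_postgres(self):
        cur = self.conn.cursor()
        # Bypassing Django entirely: the database itself should refuse this row.
        self.assertRaises(
            psycopg2.IntegrityError,
            cur.execute,
            "INSERT INTO myapp_person (name, age, gender) VALUES (%s, %s, %s)",
            ('Bob', 30, 'x'),
        )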
Thanks!