
I have an entity called Contact with a single unique field, email. I also have a form type intended for an admin interface; let's call it ContactType. Everything described below happens using a form built with ContactType:

Let's assume I want to add a contact with the email mr.validated@example.com; of course it works. Then I try again and bam, validation kicks in, and the error message says what happened. Perfect!

Now I want to add another contact, this time with the email mr.race.condition@example.com, but oops, I accidentally submitted the form twice! Both requests are processed like this:

 |    Request 1    |     Request 2
-+-----------------+-----------------
1|  $form->bind()  |   $form->bind()
2|   Validation    |    Validation    
3|   $em->flush()  |    $em->flush()

In both cases validation passed, since no Contact entity with that email was in the database yet. This leads to two INSERT queries with the same email. MySQL will reject the second one, so Doctrine will throw an exception and the user will see a 500 error instead of "Email has already been taken".
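The failure mode is easy to reproduce outside Symfony. The sketch below uses an in-memory SQLite database as a stand-in for MySQL (an assumption made for portability); the second INSERT violates the unique index, and PDO reports the integrity-constraint SQLSTATE 23000:

```php
<?php
// Reproduction sketch: in-memory SQLite stands in for MySQL here.
// Both engines report a unique-key violation under SQLSTATE 23000.
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE contact (email TEXT UNIQUE)');

// Request 1 hits the database first and succeeds...
$pdo->exec("INSERT INTO contact (email) VALUES ('mr.race.condition@example.com')");

// ...Request 2 already passed validation before that INSERT was visible,
// so it tries the exact same statement and fails at the database level.
try {
    $pdo->exec("INSERT INTO contact (email) VALUES ('mr.race.condition@example.com')");
} catch (PDOException $e) {
    echo $e->getCode(); // the SQLSTATE for integrity-constraint violations
}
```

Validation queried the table before either INSERT ran, so no amount of application-level checking closes this window.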

My question is: How do I make Symfony handle that for me? I just want to tell the user that they have to type in a different email address.

I could of course do something like this:

try {
    $this->getDoctrine()->getManager()->flush();
} catch (DBALException $e) {
    $pdoException = $e->getPrevious();
    if ($pdoException instanceof PDOException &&
        $pdoException->getCode() === '23000'
    ) {
        // let the form know about the error
    } else {
        throw $e;
    }
}

But that's wrong: it requires copy-pasting the code each time I have to deal with unique constraints, and it causes trouble when there is more than one unique index.
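One way to avoid the copy-paste is to move the exception inspection into a small helper and reuse it wherever a flush can hit a unique index. The function name isUniqueViolation() and the idea of walking the getPrevious() chain are my own sketch, not a Symfony or Doctrine API; on DBAL 2.5+ you could instead catch Doctrine's more specific UniqueConstraintViolationException directly:

```php
<?php
// Hypothetical helper (not a framework API): walks the exception chain
// and reports whether any wrapped exception is an integrity-constraint
// violation (SQLSTATE 23000), which covers duplicate unique keys.
function isUniqueViolation(\Throwable $e): bool
{
    for ($current = $e; $current !== null; $current = $current->getPrevious()) {
        if ($current instanceof \PDOException
            && (string) $current->getCode() === '23000'
        ) {
            return true;
        }
    }

    return false;
}
```

With that in place, each controller's catch block shrinks to something like `if (isUniqueViolation($e)) { /* add a FormError */ } else { throw $e; }`, though mapping the violation back to the right field when there are several unique indexes still needs per-form knowledge.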

Adam Zielinski
  • You probably want a unique validator, check [this](http://symfony.com/doc/current/reference/constraints/UniqueEntity.html). Never used Symfony before though so could be wrong! – Aydin Hassan Dec 14 '13 at 23:10
  • The question is about one specific case where unique validator is not helpful :( – Adam Zielinski Dec 14 '13 at 23:31
  • Your scenario is confusing. Are you asking what happens if the user presses submit twice? I don't really see how that can happen without the first request being processed, in which case the second request would fail the unique validation. – Cerad Dec 15 '13 at 17:33
  • My point is there is a race condition since `INSERT INTO` happens after validation so some other insert could happen in the meantime. Pressing submit twice is just an example to show it - there could be multiple users submitting that form in the same moment. – Adam Zielinski Dec 15 '13 at 17:43
  • In other words - validation helps, but isn't 100% reliable for unique constraints, so I need a way to recover from DB errors – Adam Zielinski Dec 15 '13 at 17:47
  • There is [locking support](http://docs.doctrine-project.org/en/2.0.x/reference/transactions-and-concurrency.html#locking-support) in doctrine2 – praxmatig Dec 16 '13 at 08:10
  • I know about it, but it is not helpful with `INSERT` unless I set an exclusive lock for entire table which is a bad thing to do. It's okay if mysql throws an error in those rare cases, I just need some nice way to handle it – Adam Zielinski Dec 16 '13 at 14:11

1 Answer


This is probably not a real answer in the SO style, so much as my personal opinion, but I'm just trying to help you out. I'll be a bit critical and may get many down votes here, but that's okay as long as you feel more confident about the problem at hand.


EDIT: added a code example

If you really, really need that solved, look into a PHP mutex and its derivatives (for example flock()). Just protect (wrap) your critical race-condition code inside a locked section and you'll be sure no two requests can execute it at the same time (as long as you have only one front-end machine).

An example usage is here:

$file = fopen("code_section_001.lock", "w+");

if ($file !== false && flock($file, LOCK_EX))
{
    // critical section: only one request at a time gets past flock()
    // $form->bind()
    // Validation
    // $em->flush()

    flock($file, LOCK_UN);
}
else
{
    echo "Error locking file!";
}

if ($file !== false) {
    fclose($file);
}

If you can, use try {} finally {} so the lock is always released; this depends on the PHP version you are using (finally arrived in PHP 5.5).
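A minimal sketch of the same guard with try/finally (PHP 5.5+); the lock file path is just an example, and the commented lines stand in for the form handling:

```php
<?php
// Same flock() guard, but the finally block guarantees the lock is
// released even if flush() throws. The lock file path is an example.
$file = fopen(sys_get_temp_dir() . '/contact_insert.lock', 'w+');

if ($file === false || !flock($file, LOCK_EX)) {
    throw new \RuntimeException('Error locking file!');
}

try {
    // $form->bind()
    // Validation
    // $em->flush()
} finally {
    flock($file, LOCK_UN); // runs on success and on exception alike
    fclose($file);
}
```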

That said, I'd discourage this practice because it has some performance implications.


I have the impression you are overengineering this. How much time passes between validation and insert? Microseconds? And how many subscribers will enter their email in a day? How often would two of them submit the same email in the same moment? Like... never? And if all of these coincidences do line up into that race condition, then an HTTP 500 error code is not that wrong.

Because if it is a legitimate user:

  • he will register once;
  • he will use his email which is by definition unique;
  • he can retry in the remote event of an error page of some kind.

On the contrary, a non-legitimate user (a bot?) will:

  • register multiple times;
  • use random or well-known or stolen email addresses;
  • retry in a flurry of HTTP requests.

In this second case I'd advise you to respond with a 500! That's what a website is supposed to do.

You've already taken steps for a graceful error recovery (the custom error message) so a real human user will most probably not see an HTTP 500 error.

By the way, PHP and Symfony often present problems like this one, and solving it in a perfect way means messing up otherwise simple and clean code. Is it really worth cluttering your code?

And consider that email addresses are not necessarily unique. I can, for example, use the not-so-well-known suffix feature. It is supported by Gmail and others and allows a + suffix, like this:

someone@somewhere.com
someone+one@somewhere.com
someone+two@somewhere.com

All three addresses will go to the same inbox (someone@somewhere.com), but are they literally unique?
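That folding can be sketched as below; note that stripping the + suffix is a convention of Gmail and a few other providers, not part of the email standard, so applying it blindly would merge addresses that other providers treat as distinct. The function name canonicalInbox() is mine, not a library API:

```php
<?php
// Sketch: fold a plus-suffixed address back to its delivery inbox.
// Only valid for providers (like Gmail) that treat "+suffix" as an alias.
function canonicalInbox(string $email): string
{
    [$local, $domain] = explode('@', $email, 2);

    $plusPos = strpos($local, '+');
    if ($plusPos !== false) {
        $local = substr($local, 0, $plusPos); // drop "+one", "+two", ...
    }

    return $local . '@' . $domain;
}
```

So someone+one@somewhere.com and someone+two@somewhere.com both fold to someone@somewhere.com, yet a plain unique constraint on the raw string happily stores all three.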

Maybe you should invest more in those details than in the microsecond window of failure. The important thing is that the database doesn't insert duplicates, which you've already ensured through the unique constraint. The error message not being very clean is really secondary.

And have you ever actually reproduced it? I mean with a test setup, really obtaining the 500 instead of the error page?

pid
  • I think you are right, I posted this question out of curiosity - the issue never happened to me – Adam Zielinski Sep 06 '14 at 17:54
  • regarding the mutex part - won't work for a distributed system – Adam Zielinski Sep 06 '14 at 17:55
  • Yes I know, that's why I specified *as long as you have one front-end machine*. But you could also have a distributed/shared file system so the `flock()` would work across many nodes but with a major performance hit. Again, when you've already WON the cup do you also need to beat the Guinness World Record? :) I hope I make sense here, buddy. – pid Sep 06 '14 at 18:00