
Basically, we have salespeople who request leads to call. Right now the system tries a "fresh lead" query to get those.

If there aren't any fresh leads it moves on to a "relatively new" query. We call these queries "sources", and essentially a closer will go through sources until they find a viable lead.

These queries all hit the same table, just different slices of the data. However, there is a lot of complex sorting in each query, and between that and the inserts/updates to the table (the table is InnoDB) we're experiencing lots of lock waits. I'm pretty sure there are no deadlocks, since none show up in the InnoDB status output, so my guess is we have slow selects coupled with lots of inserts/updates.

Now, the ultimate question is:

Should we query the DB for each source, grab about 100 leads (the exact number would obviously vary by system), and cache them in memcached? Then, as closers request leads, we serve them from the cache and update the cached entry with an "is_accepted" flag. That way we only run each source query once per batch, when the cache runs out, instead of once per closer requesting a lead.
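The batching scheme described above might be sketched roughly as follows. This is an illustrative outline, not the real system: `fetch_source_batch` is a hypothetical stand-in for the actual source queries, and a plain dict of deques stands in for memcached so the sketch is self-contained.

```python
from collections import deque

BATCH_SIZE = 100

def fetch_source_batch(source, size=BATCH_SIZE):
    # Placeholder for the real "fresh lead" / "relatively new" queries.
    return [{"id": f"{source}-{i}", "is_accepted": False} for i in range(size)]

class LeadCache:
    def __init__(self):
        self._cache = {}  # source name -> deque of not-yet-served leads

    def next_lead(self, source):
        batch = self._cache.get(source)
        if not batch:                       # refill only when the batch is empty:
            batch = deque(fetch_source_batch(source))
            self._cache[source] = batch     # one DB hit per ~100 requests
        lead = batch.popleft()
        lead["is_accepted"] = True          # flag so it isn't handed out twice
        return lead
```

Each source query then runs once per exhausted batch instead of once per closer request.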

Then we can use emulated locking with memcached's add command: http://code.google.com/p/memcached/wiki/FAQ#Emulating_locking_with_the_add_command
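The locking pattern that FAQ entry describes relies on `add` being atomic: it stores a key only if the key does not already exist. A minimal sketch of the idea, using an in-process stand-in for the memcached client so it is self-contained (a real client such as pylibmc exposes the same `add`/`delete` calls):

```python
import time

class FakeMemcache:
    """Dict-backed stand-in for a memcached client, for illustration only."""
    def __init__(self):
        self._store = {}

    def add(self, key, value):
        # Atomic in real memcached: fails if the key already exists.
        if key in self._store:
            return False
        self._store[key] = value
        return True

    def delete(self, key):
        self._store.pop(key, None)

def with_lock(mc, name, fn, retries=50, wait=0.01):
    lock_key = "lock:" + name
    for _ in range(retries):
        if mc.add(lock_key, "1"):       # we won the lock
            try:
                return fn()
            finally:
                mc.delete(lock_key)     # always release, even on error
        time.sleep(wait)                # someone else holds it; retry
    raise RuntimeError("could not acquire lock " + name)
```

In production you would also set an expiry on the lock key so a crashed worker can't hold the lock forever.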

Does this seem like a viable solution? Any recommendations? We desperately need to minimize the chances of lock waits, and quickly.

mlebrun15

2 Answers


Sounds viable, but have you looked at your indexes and are you using proper isolation levels on your selects?

This previous SO question may help with the answer you're seeking: Any way to select without causing locking in MySQL?

If you perform your select/update in a stored procedure with full transactions, this could also speed things up quite a bit due to optimization. Of course, there are times when SPs in MySQL are much slower :(

I'd have put this as a comment, but haven't reached that level yet :)

And I did read the part about InnoDB, but experience has shown me improvements even with InnoDB when using isolation levels.

jdarling
  • READ COMMITTED is inherently slow as it has to wait for cache and buffers to flush before it can rely upon data integrity. – jdarling Jul 27 '11 at 21:22
  • Have you thought about temporary locks? You could perform an update on the table with HIGH ID AND is_accepted = false and return the record, if the agent didn't take the record you flip is_accepted back so it goes back into the chain. – jdarling Jul 27 '11 at 21:23
  • what would be a better solution/isolation level? – mlebrun15 Jul 27 '11 at 21:23
  • > what would be a better solution/isolation level? READ UNCOMMITTED doesn't require the same locking, at the expense of dirty reads – Cody Caughlan Jul 27 '11 at 21:24
  • UNCOMMITTED is faster as it can ignore cache/buffer flushes, i.e. it can read the record in-memory instead of from disk. Almost like a dirty read, only (if I remember correctly) you won't get corrupt records. A bit more info at http://www.itecsoftware.com/with-nolock-table-hint-equivalent-for-mysql – jdarling Jul 27 '11 at 21:26
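The claim-and-release idea from the comments above (flag a lead as accepted to take it, flip the flag back if the closer declines) could be sketched like this. SQLite stands in for MySQL purely so the snippet is self-contained, and the schema is illustrative; in MySQL the claim should be a single atomic `UPDATE ... ORDER BY id DESC LIMIT 1` (or use `SELECT ... FOR UPDATE`) so two closers can't grab the same lead.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE leads (id INTEGER PRIMARY KEY, is_accepted INTEGER NOT NULL DEFAULT 0)"
)
conn.executemany("INSERT INTO leads (id) VALUES (?)", [(i,) for i in range(1, 6)])

def claim_lead(conn):
    # Claim the highest-id unaccepted lead. Here the select+update pair
    # runs in one transaction on a single connection; in MySQL it would
    # need to be one atomic statement to avoid a race between closers.
    with conn:
        row = conn.execute(
            "SELECT id FROM leads WHERE is_accepted = 0 ORDER BY id DESC LIMIT 1"
        ).fetchone()
        if row is None:
            return None
        conn.execute("UPDATE leads SET is_accepted = 1 WHERE id = ?", (row[0],))
        return row[0]

def release_lead(conn, lead_id):
    # Closer declined the lead: put it back into the pool.
    with conn:
        conn.execute("UPDATE leads SET is_accepted = 0 WHERE id = ?", (lead_id,))
```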

You should definitely look at making sure your DB queries are fully optimized before you employ another datastore.

If you do decide to cache this data then consider using Redis, which makes lists first class citizens.
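For illustration, the Redis-list idea might look like the following. A `deque`-backed stand-in mimics the two list operations (`LPUSH`/`RPOP`) so the sketch is self-contained; with redis-py against a real server the calls would be `r.lpush(...)` / `r.rpop(...)`.

```python
from collections import deque

class FakeRedisList:
    """Stand-in implementing Redis LPUSH/RPOP semantics on deques."""
    def __init__(self):
        self._lists = {}

    def lpush(self, key, *values):
        # Each value is pushed to the head in turn, as LPUSH does.
        self._lists.setdefault(key, deque()).extendleft(values)

    def rpop(self, key):
        # Pop from the tail; returns None when the list is empty.
        lst = self._lists.get(key)
        return lst.pop() if lst else None

r = FakeRedisList()
# A refill job pushes a batch per source; closers pop leads off the tail,
# so the earliest-pushed lead ("lead-3" here) comes out first.
r.lpush("leads:fresh", "lead-3", "lead-2", "lead-1")
```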

Cody Caughlan