
I'm wondering about an update I am making to a large table, and whether I need to worry about locks.

I have a table looking like this:

CREATE TABLE "ItemsToProcess" (
  "id" text,
  "WorkerInstanceId" text,
  "ProcessingStartTime" timestamp with time zone,
  "UpdatedTime" timestamp with time zone,
  CONSTRAINT "ITP_PK" PRIMARY KEY ("id")
) WITH (
  OIDS=FALSE
);

Initially, this table has ~2.0 million rows, with only the id column filled in - WorkerInstanceId and the two timestamps are NULL by default and at the start of the run.

What happens is that a number of worker apps (at least two, but around 10-13 in production) will mark a batch of IDs (I plan to set batchSize to 200) from this table for themselves to process. What happens during processing doesn't really matter here.

The marking of a batch looks like this:

UPDATE "ItemsToProcess" 
   SET "WorkerInstanceId" = ?, "ProcessingStartTime" = current_timestamp()
 WHERE "WorkerInstanceId" is NULL
 LIMIT 200;

My question is, do I need to worry about locking the rows I'm going to update before making the update?

Postgres documentation says:

ROW EXCLUSIVE

Conflicts with the SHARE, SHARE ROW EXCLUSIVE, EXCLUSIVE, and ACCESS EXCLUSIVE lock modes.

The commands UPDATE, DELETE, and INSERT acquire this lock mode on the target table (in addition to ACCESS SHARE locks on any other referenced tables). In general, this lock mode will be acquired by any command that modifies the data in a table.

So I think that whenever one of the workers runs this update, the whole table is locked, 200 rows are updated, and the lock is released at the end. While the lock is in place, the other workers are waiting for it to be freed up. Is this right, or am I missing something?

– Zsolt János

2 Answers


UPDATE locks the row, so you do not need to lock it first. If you try to UPDATE overlapping sets of rows simultaneously, the second UPDATE will wait for the first's transaction to commit or roll back.

The big problem with your approach - other than the fact that UPDATE doesn't have a LIMIT clause - is that multiple workers will all try to grab the same rows. Here's what happens:

  • worker1: filters the table to find 200 rows and locks them
  • worker1: starts updating the rows
  • worker2: filters the table to find 200 rows
  • worker2: tries to start updating its rows, but it has selected the same rows as worker1, so it blocks on worker1's locks
  • worker1: finishes updating the rows
  • worker2: after the locks are released, re-checks the WHERE condition and finds that none of the rows match anymore because worker1 has updated them. Updates zero rows.

... and repeat!
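
You can reproduce this by hand in two psql sessions; the worker ids below are just placeholders, and LIMIT is left out because PostgreSQL's UPDATE doesn't support it:

-- Session 1 (worker1): start a transaction and grab the rows.
BEGIN;
UPDATE "ItemsToProcess"
   SET "WorkerInstanceId" = 'worker1', "ProcessingStartTime" = current_timestamp
 WHERE "WorkerInstanceId" IS NULL;

-- Session 2 (worker2): run the same UPDATE. It blocks, because the rows it
-- wants are locked by session 1's still-open transaction.

-- Session 1: release the locks.
COMMIT;

-- Session 2: wakes up, re-checks WHERE "WorkerInstanceId" IS NULL against the
-- now-committed rows, finds nothing left to match, and reports UPDATE 0.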

You need to either:

  • Have a central queue handing out rows in a proper concurrency-safe way; or
  • Assign workers non-overlapping ranges of IDs to work on (sketched just below this list)
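
A minimal sketch of the second option, assuming each worker is simply configured with its own slice of the id space (the two range-bound parameters are illustrative, not something from the question):

-- Each worker gets a disjoint, pre-assigned id range, so no two workers
-- ever touch (or lock) the same rows.
UPDATE "ItemsToProcess"
   SET "WorkerInstanceId" = ?, "ProcessingStartTime" = current_timestamp
 WHERE "WorkerInstanceId" IS NULL
   AND "id" >= ?    -- this worker's lower bound
   AND "id" <  ?;   -- this worker's upper bound (exclusive)

If you still want each worker to chew through its range in batches of 200, this can be combined with the id IN (...) subquery shown next.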

As for LIMIT - you could use WHERE id IN (SELECT t.id FROM thetable t ORDER BY id LIMIT 200) - but you'd have the same problem with both workers choosing the same set of rows to update.
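
Spelled out against the table in the question (and keeping the WorkerInstanceId IS NULL filter inside the subquery), that workaround would look something like this - it emulates the LIMIT, but does nothing to stop two workers from picking the same 200 rows:

UPDATE "ItemsToProcess"
   SET "WorkerInstanceId" = ?, "ProcessingStartTime" = current_timestamp
 WHERE "id" IN (SELECT t."id"
                  FROM "ItemsToProcess" t
                 WHERE t."WorkerInstanceId" IS NULL
                 ORDER BY t."id"
                 LIMIT 200);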

– Craig Ringer

  • Given the challenges here, it's also a good time to consider if a third-party job queue system would be a better choice. A small one made just for PostgreSQL is [Job::Machine](https://metacpan.org/module/Job::Machine). There are a number of other options. – Mark Stosberg Aug 02 '12 at 00:57
  • @MarkStosberg Yeah. I didn't want to get into detail on queueing. There's PgQ from skytools too: http://wiki.postgresql.org/wiki/PGQ_Tutorial – Craig Ringer Aug 02 '12 at 02:04

You are missing a couple of things.

First, PostgreSQL does not offer a LIMIT option for UPDATE. See the docs for UPDATE.

Second, note that ROW EXCLUSIVE does not conflict with itself; it conflicts with SHARE ROW EXCLUSIVE, which is different. So your UPDATE statements can safely run concurrently from multiple workers. You will still want your update times to be low, but you already have a built-in way to tune that: lower your batchSize if you run into problems.
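
If you want to see this for yourself, a quick illustration is to watch the table-level locks from a third session while two workers run their updates, using the standard pg_locks view:

-- Both workers' backends show up holding RowExclusiveLock on the table.
-- RowExclusiveLock does not conflict with itself, so the table lock never
-- makes one worker wait for the other; only row-level locks on overlapping
-- rows do.
SELECT pid, mode, granted
  FROM pg_locks
 WHERE locktype = 'relation'
   AND relation = '"ItemsToProcess"'::regclass;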

– Mark Stosberg

  • For smaller update counts, use a `WHERE` clause with an `id IN (SELECT t.id FROM thetable t ORDER BY id LIMIT 200)` subquery. Remember that without `ORDER BY` the use of `LIMIT` is non-deterministic. – Craig Ringer Aug 01 '12 at 23:29