
I have a table as follows:

id    status
--    ------
1     pass
1     fail
1     pass
1     na
1     na

Also, I have a stored procedure that returns a table with the top 100 records having status 'na'. The stored procedure can be called by multiple nodes in the environment, and I don't want them to fetch duplicate data. So I want to lock the stored procedure while it is executing, set the status of the records returned by the stored procedure to 'In Progress', return that table, and then release the lock, so that different nodes don't fetch the same data. How would I accomplish this?

There is already a solution provided for a similar question in MS SQL, but it throws errors when used in Sybase.

Kiran Cyrus Ken
  • Two good answers already, but please make sure you get your error handling and transaction management right. Otherwise, you can end up locking the application (if an error occurs between claiming and releasing the locks). A more robust solution would be to use a message queue. – Neville Kuyt Jun 28 '19 at 13:01
  • which Sybase RDBMS (ASE? IQ? SQLAnywhere? Advantage?) and version? – markp-fuso Jun 29 '19 at 01:17

3 Answers


I'm not 100% sure how to do this in Sybase, but the idea is the following.

First, add a new column to the table that represents the session or connection used to change the data. You will use this column to provide isolation.

Then, update the rows:

update top (100) t
    set status = 'in progress',
        session = @session
    where status = 'na'
    order by ?;  -- however you define the "top" records

Then, you can return or process the 100 ids that are "in progress" for the given connection.
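
Sybase ASE doesn't accept SQL Server's "update top (100) ... order by" syntax, but the same idea can be expressed with set rowcount. A minimal sketch, assuming the table is named mytable and the new column is named session_id (both names are illustrative):

declare @session varchar(30)

select  @session = convert(varchar(30), @@spid)  -- or anything else that uniquely
                                                 -- identifies the session/connection

set rowcount 100     -- limit the update to 100 rows

update mytable
set    status     = 'in progress',
       session_id = @session
where  status = 'na'

set rowcount 0       -- restore the default (no row limit)

-- return the rows claimed by this session
select id
from   mytable
where  session_id = @session
and    status     = 'in progress'

Note that set rowcount doesn't let you choose which 100 'na' rows are taken; if the "top" ordering matters, that has to be handled separately.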

Gordon Linoff
  1. Create another table, proc_lock, that has one row
  2. When control enters the stored procedure, start a transaction and do a select for update on the row in proc_lock (see this link). If that doesn't work for Sybase, then you could try the technique from this answer to lock the row.
  3. Before the procedure exits, make sure to commit the transaction.

This will ensure that only one user can execute the proc at a time. When a second user tries to execute the proc, it will block until the first user's lock on the proc_lock row is released (e.g., when the transaction is committed); a minimal sketch follows.
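
A rough outline of that skeleton in Sybase T-SQL, assuming the procedure is called get_next_batch and proc_lock has a single column lock_id (all names are illustrative); here the select for update is replaced by a dummy update, which also takes an exclusive lock on the row for the life of the transaction:

create procedure get_next_batch
as
begin
    begin tran

        -- take an exclusive lock on the single proc_lock row; any other
        -- caller blocks on this statement until this transaction commits
        update proc_lock set lock_id = lock_id

        -- ... find the top 100 'na' rows, set them to 'In Progress',
        --     and select them back to the caller here ...

    commit tran
end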

dcp

Assuming Sybase ASE ...

The bigger issue you'll likely want to consider is whether you want a single process to lock the entire table while you're grabbing your top 100 rows, or whether you want other processes to still be able to access the table.

Another question is whether you'd like multiple processes to concurrently pull 100 rows from the table without blocking each other.

I'm going to assume that you a) don't want to lock the entire table and b) you may want to allow multiple processes to concurrently pull rows from the table.

1 - if possible, make sure the table is using datarows locking (default is usually allpages); this will reduce the granularity of locks to the row level (as opposed to page level for allpages); the table will need to be datarows if you want to allow multiple processes to concurrently find/update rows in the table

2 - make sure the lock escalation setting on the table is high enough to ensure a single process's 100-row update doesn't lock the table (sp_setpglockpromote for allpages, sp_setrowlockpromote for datarows); the key here is to make sure your update doesn't escalate to a table-level lock! (see the configuration sketch after these numbered points)

3 - when it comes time to grab your set of 100 rows you'll want to ... inside a transaction ... update the 100 rows with a status value that's unique to your session, select the associated id's, then update the status again to 'In Progress'
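
A minimal configuration sketch for points 1 and 2; the watermark values are placeholders, and the exact sp_setrowlockpromote parameter order and suitable values should be confirmed against your ASE version's documentation:

-- point 1: switch the table to row-level locking
alter table mytable lock datarows

-- point 2: raise the row-lock promotion thresholds for this table so that a
-- 100-row update never escalates to a table-level lock (example values only)
exec sp_setrowlockpromote "table", "mytable", 500, 500, 100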

The gist of the operation looks like the following:

declare @mysession varchar(10)

select  @mysession = convert(varchar(10),@@spid)  -- replace @@spid with anything that
                                                  -- uniquely identifies your session
set rowcount 100  -- limit the update to 100 rows

begin tran get_my_rows

    -- start with an update so that we get exclusive access to the desired rows;
    -- update the first 100 rows you find with your @@spid

    update mytable
    set    status = @mysession   -- need to distinguish your locked rows from
                                 -- other processes; if we used 'In Progress'
                                 -- we wouldn't be able to distinguish between
                                 -- rows updated earlier in the day or updated
                                 -- by other/concurrent processes

    from   mytable readpast      -- 'readpast' allows your query to skip over
                                 -- locks held by other processes but it only
                                 -- works for datarows tables
    where  status = 'na'

    -- select your reserved id's and send back to the client/calling process

    select  id
    from    mytable
    where   status = @mysession

    -- update your rows with a status of 'In Progress'

    update mytable
    set    status = 'In Progress'
    where  status = @mysession

commit            -- close out txn and release our locks

set rowcount 0    -- set back to default of 'unlimited' rows

Potential issues:

  • if your table is large and you don't have an index on status then your queries could take longer than necessary to run; by making sure lock escalation is high enough and you're using datarows locking (so the readpast works) you should see minimal blocking of other processes regardless of how long it takes to find the desired rows

  • with an index on the status column, consider that all of these updates are going to force a lot of index updates which is probably going to lead to some expensive deferred updates

  • if using datarows and your lock escalation is too low then an update could lock the entire table, which would cause another (concurrent) process to readpast the table lock and find no rows to process

  • if using allpages you won't be able to use readpast so concurrent processes will block on your locks (ie, they won't be able to read around your lock)

  • if you've got an index on status, and several concurrent processes locking different rows in the table, there could be a chance for deadlocks to occur (likely in the index tree of the index on the status column) which in turn would require your client/application to be coded to expect and address deadlocks

To think about:

  • if the table is relatively small such that table scanning isn't a big cost, you could drop any index on the status column and this should reduce the performance overhead of deferred updates (related to updating the indexes)

  • if you can work with a session specific status value (eg, 'In Progress - @mysession') then you could eliminate the 2nd update statement (could come in handy if you're incurring deferred updates on an indexed status column)

  • if you have another column(s) in the table that you could use to uniquely identify your session's rows (eg, last_updated_by_spid = @@spid, last_updated_date = @mydate - where @mydate is initially set to getdate()) then your first update could set the status = 'In Progress', the select would use @@spid and @mydate for the where clause, and the second update would not be needed [NOTE: This is, effectively, the same thing Gordon is trying to address with his session column.]

  • assuming you can work with a session specific status value, consider using something that will allow you to track, and fix, orphaned rows (eg, row status remains 'In Progress - @mysession' because the calling process died and never came back to (re)set the status)

  • if you can pass the id list back to the calling program as a single string of concatenated id values you could use the method I outline in this answer to append the id's into a @variable during the first update, allowing you to set status = 'In Progress' in the first update and also allowing you to eliminate the select and the second update

  • how would you tell which rows have been orphaned? you may want the ability to update a (small)datetime column with the getdate() of when you issued your update; then, if you would normally expect the status to be updated within, say, 5 minutes, you could have a monitoring process that looks for orphaned rows where status = 'In Progress' and it's been more than, say, 10 minutes since the last update (a sketch of such a query follows this list)
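
A sketch of such a monitoring query, assuming the table gains a last_updated datetime column (not part of the original table):

-- orphan check: rows claimed more than 10 minutes ago that never moved past 'In Progress'
select id, status, last_updated
from   mytable
where  status like 'In Progress%'
and    datediff(minute, last_updated, getdate()) > 10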

If the datarows, readpast, lock escalation settings and/or deadlock potential is too much, and you can live with brief table-level locks on the table, you could have the process obtain an exclusive table level lock before performing the update and select statements; the exclusive lock would need to be obtained within a user-defined transaction in order to 'hold' the lock for the duration of your work; a quick example:

begin tran get_my_rows

    -- request an exclusive table lock; wait until it's granted

    lock table mytable in exclusive mode

    update ...

    select ...

    update ...

commit
markp-fuso