
My requirement is as follows. (a) I have already created a sequence and a table (let's assume employee, with id, name, etc. columns). (b) Somehow my sequence got corrupted, and the current value of the sequence is now out of sync with the max value of the id column of the employee table.


Now I want to reset my sequence to the max value of the id column of the employee table. I know this can be done easily with PL/SQL or a stored procedure, but I want to write a plain query that does the following: fetch the max value of id and the current value of my sequence, take the difference, and add that difference to the sequence using INCREMENT BY. (Here the current value of my sequence is less than the max value of the id column.)
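For reference, the increment-by approach described above usually takes the form below. This is a sketch only: EMP_SEQ and employee are placeholder names, the gap value 8 assumes a current value of 12 and a max id of 20, and note that it requires three separate statements (two of them DDL), not a single plain query. CURRVAL also only works in a session that has already called NEXTVAL.

```sql
-- 1. Find the gap (suppose this returns 8: max id 20, sequence at 12).
SELECT MAX(id) - EMP_SEQ.CURRVAL AS gap FROM employee;

-- 2. Temporarily widen the increment by that gap.
ALTER SEQUENCE EMP_SEQ INCREMENT BY 8;

-- 3. Consume one value, jumping the sequence forward to 20.
SELECT EMP_SEQ.NEXTVAL FROM dual;

-- 4. Restore the normal increment.
ALTER SEQUENCE EMP_SEQ INCREMENT BY 1;
```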

Patrick Hofman
Saurabh Gupta

2 Answers


You change the values of a sequence with the ALTER SEQUENCE command.

To restart the sequence with a new base value, you need to drop and recreate it.

I do not think you can do this with a straightforward SELECT query.

Here is the Oracle 10g documentation for ALTER SEQUENCE.
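A drop-and-recreate reset along those lines might look like this. It's a sketch only: my_seq is a placeholder name, and the START WITH value of 21 assumes the table's current MAX(id) is 20; in practice you would query that first.

```sql
-- Sketch: recreate the sequence with a new base value.
DROP SEQUENCE my_seq;

CREATE SEQUENCE my_seq
  START WITH 21        -- one past the table's current MAX(id)
  INCREMENT BY 1
  NOCACHE;             -- optional; avoids cached gaps right after the reset
```

Bear in mind that dropping the sequence loses any grants on it and can invalidate dependent objects, so this is usually done during a maintenance window.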

cms

You can't change the increment from plain SQL as alter sequence is DDL, so you need to increment it multiple times, one by one. This would increment the sequence as many times as the highest ID you currently have:

select your_sequence.nextval
from (
  select max(id) as max_id
  from your_table
) t
connect by level < t.max_id;

SQL Fiddle demo (fudged a bit as the sequence isn't reset if the schema is cached).

If your max value is high this might be inefficient, though as a one-off adjustment that probably doesn't matter. You can't refer to the current sequence value in a subquery or CTE, but you could look at the USER_SEQUENCES view to get a rough guide of how far out you are to begin with, and reduce the number of calls to within double the cache size (depending on how many of the cached values are still waiting to be issued):

select your_sequence.nextval
from (
  select max(id) as max_id
  from your_table
) t
connect by level <= (
  select t.max_id + us.cache_size + 1 - us.last_number
  from user_sequences us
  where sequence_name = 'YOUR_SEQUENCE'
);

SQL Fiddle.

With low existing ID values the second one might do more work, but with higher values you can see the second comes into its own a bit.

Alex Poole
  • Thank you for your reply, but I have the following issues with this approach. 1- In the first approach the problem is as follows (for example: the sequence's current value = 12, max id = 20; ideally it should set the current value to 20, but since level starts at 1, the condition (1 < 20) advances the sequence 20 times, so this approach fails). 2- In the second, I can update only a single value, which I could simply do with seq.nextval. (Note: if we could start level at 12 [the current value of the sequence], this would succeed.) Thanks in advance!!! – Saurabh Gupta Dec 22 '14 at 08:33
  • @SaurabhGupta - wanting to set the sequence to 20 implies you don't want a gap in your IDs, but sequences are *not* gap-free anyway, so I'm not sure why that 'fails'. In both cases the sequence may end up higher than the old max ID (possibly a lot higher in the first version; 2*cache_size in the second). With those values the first ends up with 30, the second with 31. Not sure what you mean by 'I can update only a single value', as nextval is still called multiple times by the connect-by. – Alex Poole Dec 22 '14 at 10:13
  • 1- Lets take an example Seq.currval=12 Max(id) =20 Now if i go ahead with first approach then we have specified condition as connect by level – Saurabh Gupta Dec 22 '14 at 10:51
  • @SaurabhGupta - if you have a requirement to have no gap then I don't think you can do it in a single SQL statement, because of the restrictions on where you can use currval/nextval. Unless you can use a function to find the next value, but it's not clear if that's allowed. If this is a one-off adjustment it isn't clear why you're not using PL/SQL anyway; if it is something you'll do more than once - i.e. the 'corruption' was not a one-off - that suggests a problem with your data model. – Alex Poole Dec 22 '14 at 10:58