
I tried to insert 1 million rows into a test table, once using a sequence and once using the SELECT MAX query posted below. Surprisingly, the query with SELECT MAX created 1 million rows in 11:11 (11 mins 11 secs).

And the query with the sequence created 1 million rows in 19:34 (19 mins 34 secs). Now I am not sure which is better.

Table : SitePage

CREATE TABLE [dbo].[SitePage](
    [PageID] [bigint] NOT NULL,
    [PageName] [nchar](50) NOT NULL,
 CONSTRAINT [PK_SitePage] PRIMARY KEY CLUSTERED 
(
    [PageID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

Here are the queries:

  1. Query to create Sequence

    CREATE SEQUENCE PageTableSequence
        START WITH 1
        INCREMENT BY 1
        NO CYCLE
        NO CACHE;
    
  2. Query to create 1 million records using Select max(id)

    DECLARE @intFlag INT = 0;
    DECLARE @maxrecords BIGINT = 0;

    WHILE (@maxrecords < 1000000)
    BEGIN
        -- Retry until the insert succeeds: a concurrent session could take
        -- the same MAX(PageID) + 1 and cause a primary key violation.
        WHILE (@intFlag = 0)
        BEGIN
            BEGIN TRY
                -- ISNULL covers the empty-table case, where MAX(PageID) is NULL.
                INSERT INTO SitePage (PageID, PageName)
                VALUES ((SELECT ISNULL(MAX(PageID), 0) + 1 FROM SitePage), 'Some Page Name');
                SET @intFlag = @@ROWCOUNT;
            END TRY
            BEGIN CATCH
                SET @intFlag = 0;
            END CATCH
        END

        SET @maxrecords = @maxrecords + 1;
        SET @intFlag = 0;
    END
    GO
    
  3. Query using Sequence to insert 1 million records

    DECLARE @maxrecords BIGINT = 0;

    WHILE (@maxrecords < 1000000)
    BEGIN
        INSERT INTO SitePage (PageID, PageName)
        VALUES (NEXT VALUE FOR PageTableSequence, 'some page name');

        SET @maxrecords = @maxrecords + 1;
    END
    GO
    

Why is there such a difference between the sequence and the SELECT MAX approach?

RoughPlace
carrieat
    And... what's the problem? – JotaBe Jun 27 '14 at 12:48
  • That is strange, I would expect the sequence to be a lot faster than using max(PageId). – David Roussel Jun 27 '14 at 13:13
  • Hi. It looks like you are stuck because you are afraid of an overhead issue. Is there a way you can duplicate your table so you have a test table to work with? That way you can see for yourself. Next, are you familiar with the Stack Exchange Code Review site? This would be an excellent question for that site. Consider your question ("which is better?"): we can't function well with those types of open-ended questions here. Stack Overflow is for break/fix questions primarily and discovery questions secondarily. – Frank Tudor Jun 27 '14 at 13:15
  • @FrankTudor I don't think this is a "which is better" question. It's a "why do sequences perform so badly". I guess it could have been worded better. – David Roussel Jun 27 '14 at 13:19
  • So you had a difference in one test. Lots of stuff goes on in a server. Run it 1000 times. You have created your test table. What is the problem? – paparazzo Jun 27 '14 at 13:21

1 Answer


Try not using NO CACHE; use CACHE 100, or even CACHE 1000, instead. This should reduce IO and improve performance. See http://msdn.microsoft.com/en-us/library/ff878091.aspx
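As a sketch, the existing sequence could be switched to a cached one like this (the cache size of 100 is just an example, not a recommendation from the question):

```sql
-- With a cache, SQL Server allocates a block of 100 values per metadata
-- write instead of persisting the current value on every NEXT VALUE FOR.
ALTER SEQUENCE PageTableSequence
    CACHE 100;
```

Note the trade-off: any unused cached values are lost if the server shuts down unexpectedly, leaving gaps in PageID.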

David Roussel
  • +1 to you sir for answering his question...I moved along like everyone else because I felt it more of an opinion question. – Frank Tudor Jun 27 '14 at 13:27
  • The test was to generate continuous primary key values without gaps, even after a server restart. Currently, SQL Server 2012 will create gaps of 10000 for the bigint datatype when there is a server restart, even with cache. The objective is to test against NO CACHE. – carrieat Jun 27 '14 at 13:38
  • If it helps, there are some tips for dealing with the [identity gaps in Sql Server 2012](http://stackoverflow.com/q/14162648/458354) – mdisibio Jun 27 '14 at 23:10