
In SQL, will insert performance differ based on how many records are in the table? I know that update statements might differ, but what about inserts?

For example, would an insert into a table with 10 million records perform the same as an insert into another table with the same schema but 1k records?

/*my schema*/
[id] [int] NOT NULL,
[type] [nvarchar](20) NOT NULL,
[relatedP] [int] NULL,
[relatedC] [int] NULL,
[eventdate] [datetime] NOT NULL

/*no triggers and inserts as simple as*/
INSERT INTO myTable (id,type,relatedP,relatedC,eventdate) VALUES (@id,@type,@relatedP,@RelatedC,@date)
user1492051
    It really comes down to the type of indexing you have on the table. If there are a lot of indexes the inserts will take a lot longer as those indexes need to be updated, more leafs created, etc. Or if large fields are used for clustered indexes, etc. If there are no indexes on the table, then the amount of data in the table will not really matter. – Brettski Jan 08 '14 at 05:45
  • This will depend on many factors, such as constraints, indexes and defaults on the table (e.g. inserts into an unindexed heap table should take a constant time), and whether these can operate in constant time. Also, indexed inserts will take slightly longer as the page hierarchy becomes deeper. – StuartLC Jan 08 '14 at 05:48
  • Thanks a lot, that's what I needed to know. I have a simple schema of 5 columns with basic types, and there is no indexing in my case, but the size keeps growing (about 4.4 million records now), so I was wondering if I should be concerned. – user1492051 Jan 08 '14 at 05:52
  • If you can edit the question and provide us with the schema, indexes, constraints, triggers etc and show how you are doing the inserts, I'm sure you'll get more concrete answers. – StuartLC Jan 08 '14 at 05:55
  • This seems like something you could pretty easily test for yourself with some sample data in your database. – Chris Farmer Jan 08 '14 at 06:05
  • @ChrisFarmer I am familiar with SQL Server Management Studio but couldn't find a way to measure this. Could I just get the time before the insert and the time after the insert and subtract them, or is there a better way? – user1492051 Jan 08 '14 at 06:08
  • It also depends on the size and index strategy of the tables referenced by FK constraints. But as has already been said, the easiest way to get help is to post the actual execution plan (as XML). – bjnr Jan 08 '14 at 06:15
  • @MihaiBejenariu the id is not for the record but for the user who inserted it, so I don't have a primary key on this table; I join this table with a users table by id, so the table can have duplicates. – user1492051 Jan 08 '14 at 06:19
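To answer the measurement question raised in the comments, one approach (a sketch, reusing the table and parameter names from the question) is to capture timestamps around the statement, or to let SQL Server report per-statement timings:

```sql
-- Option 1: wall-clock timing around the statement
DECLARE @t0 DATETIME2 = SYSDATETIME();

INSERT INTO myTable (id, type, relatedP, relatedC, eventdate)
VALUES (@id, @type, @relatedP, @RelatedC, @date);

SELECT DATEDIFF(MICROSECOND, @t0, SYSDATETIME()) AS elapsed_microseconds;

-- Option 2: have SQL Server report CPU and elapsed time per statement
SET STATISTICS TIME ON;

INSERT INTO myTable (id, type, relatedP, relatedC, eventdate)
VALUES (@id, @type, @relatedP, @RelatedC, @date);

SET STATISTICS TIME OFF;  -- timings appear on the Messages tab in SSMS
```

A single insert is usually too fast to time reliably, so repeat it in a loop (or insert a few thousand rows in a batch) and compare totals between the small and large tables.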

1 Answer


Because your table appears to have no clustered index (nor a primary key, which would create a clustered index by default), it will be stored as a heap.

Your table doesn't have any non-clustered indexes either. Usually, as the number of rows in an indexed table increases, the index B-tree gradually gets deeper, so insert performance degrades slightly.

But since it seems you have no indexes, constraints, defaults, indexed views or persisted computed columns, inserts should be constant time as the table grows.

(IMO a table with no indexing at all is fairly unusual; filtered query performance will likely be abysmal. Typically this will only be useful for e.g. log or staging data which is then completely re-read with a reader and processed into final tables.)
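If the table is ever queried by date range, one option (a sketch, assuming the schema from the question) is a clustered index on the ever-increasing eventdate column: new rows are appended at the logical end of the index, so inserts stay cheap as the table grows, while date-range queries become seeks instead of full scans:

```sql
-- Clustered index on an ever-increasing column: new rows land on the
-- last page, so insert cost stays roughly constant as the table grows.
CREATE CLUSTERED INDEX IX_myTable_eventdate
    ON myTable (eventdate);

-- Range queries can now seek on the index instead of scanning the heap:
SELECT id, type, relatedP, relatedC, eventdate
FROM myTable
WHERE eventdate >= '2014-01-01'
  AND eventdate <  '2014-02-01';
```

Note the index is not unique (duplicates are allowed, per the comments), so SQL Server will add a hidden uniquifier to duplicate key values.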

StuartLC
  • Exactly, it's just an event log table. – user1492051 Jan 08 '14 at 06:20
  • As an aside, `BulkCopy` is generally a really performant way of getting data into tables like this. e.g. If you are using .net, you can batch up rows in memory and then intermittently use [SqlBulkCopy](http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlbulkcopy.aspx) – StuartLC Jan 08 '14 at 06:28
  • I used bulk copy in .NET before, but doesn't it remove the data that's already in the table? – user1492051 Jan 08 '14 at 06:32
  • Neither [bcp](http://social.msdn.microsoft.com/Forums/sqlserver/en-US/4a5dbc21-53b4-4d4d-abe5-08e408da8ad9/can-bcp-delete-all-records-from-a-table-prior-to-bulk-insert) nor `SqlBulkCopy` will by default truncate the table AFAIK. – StuartLC Jan 08 '14 at 07:31