
I've got a SQL query (an Access pass-through query) that locks up the table when it runs, and I can't do anything with it. It's been locked for over an hour because the table is huge (the database is 250 GB in size). The SQL looks as follows, but I'm hoping there is a way to do the SELECT part without locking up the table.

INSERT INTO BoydAmazonToday1 (LocalSKU, [Price Currency], Quantity)
SELECT TOP (3000000) Inventory.LocalSKU, Inventory.Price, InventorySuppliers.BoydQuantityAvailable
FROM Inventory
    INNER JOIN InventorySuppliers ON Inventory.LocalSKU = InventorySuppliers.LocalSKU
    INNER JOIN Suppliers ON InventorySuppliers.SupplierID = Suppliers.SupplierID
WHERE Inventory.Price <> 0
    AND Inventory.Price IS NOT NULL
    AND InventorySuppliers.BoydQuantityAvailable > 49
    AND (Inventory.Category LIKE '%Books%'
        OR Inventory.Category LIKE '%DVDs%'
        OR Inventory.Category LIKE '%Music%'
        OR Inventory.Category LIKE '%VHS%')
    AND Inventory.Discontinued = 0
    AND Suppliers.[Boyd-AmazonBackOrder] = 1
    AND InventorySuppliers.PrimarySupplier = 1
    AND Inventory.LocalSKU NOT IN (
        '9780205309023', '9780205309023u1',
        '9781400052189', '9781400052189U1',
        '9781435732865', '9781435732865U1')
ORDER BY Inventory.Price ASC
  • To what RDBMS are you passing through this query? SQL Server maybe? – TT. Oct 16 '14 at 04:25
  • yes, it goes to MSSQL 2012 server – Robert Boyd Oct 16 '14 at 04:28
  • I'm curious about the `TOP 3000000`. Is this an arbitrary number? It's possible to do batched updates (say 10000 at a time, then commit). It looks like maybe you're trying to do this here? – Nick.Mc Oct 16 '14 at 06:42
  • the 3000000 does change, and I've been looking at doing some type of batch updates.. I do batches for other parts of the code.. – Robert Boyd Oct 17 '14 at 05:17
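The batched approach mentioned in the comments might be sketched like this (the batch size and loop shape are illustrative, and the real query would need a predicate to skip rows already copied):

```sql
-- Hypothetical batching loop: committing every 10,000 rows keeps each
-- transaction short, so locks are held only briefly.
DECLARE @batch INT = 10000;

WHILE 1 = 1
BEGIN
    INSERT INTO BoydAmazonToday1 (LocalSKU, [Price Currency], Quantity)
    SELECT TOP (@batch) i.LocalSKU, i.Price, s.BoydQuantityAvailable
    FROM Inventory AS i
        INNER JOIN InventorySuppliers AS s ON i.LocalSKU = s.LocalSKU
    WHERE ...  -- original filters, plus a condition excluding already-inserted rows

    IF @@ROWCOUNT < @batch BREAK;  -- last partial batch: done
END;
```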

3 Answers


Have you tried setting the isolation level on the transaction?

http://msdn.microsoft.com/en-us/library/ms173763(v=sql.110).aspx

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED 
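Since a pass-through query is sent as a single batch, the statement would need to be prepended to the query itself. A minimal sketch, reusing the table names from the question:

```sql
-- Prepend to the pass-through query so the SELECT side takes no shared locks.
-- Caveat: READ UNCOMMITTED permits dirty reads (see the other answers).
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

INSERT INTO BoydAmazonToday1 (LocalSKU, [Price Currency], Quantity)
SELECT TOP (3000000) Inventory.LocalSKU, Inventory.Price, InventorySuppliers.BoydQuantityAvailable
FROM ...  -- rest of the original query unchanged
```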
Vinnie
  • I haven't tried that, let me read up on it and get back to you. – Robert Boyd Oct 16 '14 at 04:44
  • You might also consider putting an integer-based category field on the table so that you aren't using a string search ('%DVDs%') if you can avoid it. You could have it added as the row is inserted and then use a lookup table to translate (1=DVD, 2=Music, etc.) – Vinnie Oct 16 '14 at 05:40

You may also try the WITH (NOLOCK) table hint (the bare `(nolock)` form is deprecated):

SELECT ...
FROM Inventory WITH (NOLOCK)
   INNER JOIN InventorySuppliers WITH (NOLOCK) ON Inventory.LocalSKU = InventorySuppliers.LocalSKU
   INNER JOIN Suppliers WITH (NOLOCK) ON InventorySuppliers.SupplierID = Suppliers.SupplierID
Baz Guvenkaya
  • I did try nolock, but I didn't have it there. I do use nolock on other queries, which helps efficiency a lot. – Robert Boyd Oct 16 '14 at 04:54
  • Then I would suggest you to read this post as well. http://stackoverflow.com/questions/686724/sql-server-when-should-you-use-with-nolock – Baz Guvenkaya Oct 16 '14 at 05:00
  • I'll check out that link as well; the nolock seems to be helping. Still redoing parts of the code. – Robert Boyd Oct 17 '14 at 05:18
  • Worries about using nolock shouldn't be ignored though as you already know. Can you please give it an up if my answer was of assistance? Cheers. – Baz Guvenkaya Oct 17 '14 at 06:58

I recommend against using NOLOCK or changing the isolation level to READ UNCOMMITTED unless you can somehow otherwise guarantee that no other processes will be trying to access the table while you are doing your operation. Such operations allow for dirty reads, which can really screw up data integrity (this is the reason the table is locked in the first place).

Consider finding a way to improve the efficiency and timeliness of your script rather than overriding the table's locking behavior.

Some considerations:

  • Consider rewriting your WHERE clause to not include wildcards in the form of %...%. A leading wildcard prevents the query engine from using an index seek, which can cause serious inefficiencies (for reference, google the term "sargability").
  • Consider taking advantage of any indexes that are on your table, particularly ensure that you are joining on indexed fields. If any of the joined fields are lacking indexes, consider adding indexes.
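For example, covering indexes on the join and filter columns might look like the following (the index names and included columns are illustrative, not from the question; verify against the actual execution plan first):

```sql
-- Hypothetical indexes supporting the join on LocalSKU and the filters.
CREATE INDEX IX_InventorySuppliers_LocalSKU
    ON InventorySuppliers (LocalSKU, SupplierID)
    INCLUDE (BoydQuantityAvailable, PrimarySupplier);

CREATE INDEX IX_Inventory_LocalSKU
    ON Inventory (LocalSKU)
    INCLUDE (Price, Category, Discontinued);
```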
Joe
  • I know for a fact other processes will be accessing the from table, but not the one being appended. For this issue, dirty reads would be a priority over lockups because it gets updated again shortly after. – Robert Boyd Oct 17 '14 at 05:20
  • I'm certain that nolock will help. Just keep in mind that when you use it you are side-stepping the mechanisms that SQL Server has in place for the sake of keeping data consistent. I generally consider using nolock (or altering isolation level) to overcome performance issues a pretty major antipattern that should be avoided (the risks generally outweigh the benefits). – Joe Oct 17 '14 at 05:21
  • If dirty reads aren't a concern, then have at it! :-) I'm still twitching a bit though ;-) Food for thought: one of the things that might be causing your query to perform slowly is the number of joins you are doing. If a table is this big, it sounds like it might be used more for warehousing data than for purely transactional work. You might benefit from creating a denormalized table that contains all of the records from both the main table as well as the joined tables. This could improve performance for this sort of work. – Joe Oct 17 '14 at 05:25