I have a table with 281,433 records in it, ranging from March 2010 to the current date (Sept 2014). It's a transaction table whose records determine which stock is currently in and out of the warehouse.

When making picks from the warehouse, the system needs to look over every transaction from a particular customer that was ever made (based on the AccountListID field, which identifies the customer; a customer has on average about 300 records in the table). This happens 2-3 times per request from the .NET application when a picking run is done.

There are times when the database seemingly locks up. Some requests complete without any bother, within about 3 seconds; others hang for 'up to 4 minutes' according to the end users.

My guess is that with 4-5 requests at the same time, all looking at this one transaction table, things are getting locked up.

I'm thinking about partitioning this table so that the primary transaction table only contains records from the last 2 years. The end user has agreed that any records older than this are unnecessary.

But I can't just delete them; they're used elsewhere in the system. I already have indexes in place and they make a massive difference (queries on the AccountListID field went from >30 seconds to <2). Partitioning seems like the next step.

1) Am I going down the right route as a solution to my 'locking' problem? 2) When moving a set of records (e.g. records where the DateTimeCheckedIn field is more than 2 years old), is this a manual process, or does partitioning handle it automatically? A rough sketch of the kind of partitioning I mean is below.
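
For reference, this is the sort of thing I've been reading about. It's an untested sketch, and the table name (StockTransactions) and the boundary date are made up rather than our actual schema:

```sql
-- Untested sketch; StockTransactions and the boundary date are illustrative only.
-- As I understand it, SQL Server routes each row into the right partition
-- automatically, based on the boundary values of the partition function.
CREATE PARTITION FUNCTION pfTransactionsByDate (datetime)
AS RANGE RIGHT FOR VALUES ('2012-09-01');   -- rows older than ~2 years fall in the left partition

CREATE PARTITION SCHEME psTransactionsByDate
AS PARTITION pfTransactionsByDate
ALL TO ([PRIMARY]);                         -- old/new partitions could go to separate filegroups instead

-- The table (or its clustered index) would then be rebuilt on the scheme, e.g.:
-- CREATE CLUSTERED INDEX IX_Transactions_Date
--     ON dbo.StockTransactions (DateTimeCheckedIn)
--     WITH (DROP_EXISTING = ON)
--     ON psTransactionsByDate (DateTimeCheckedIn);
```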

A-Bop
  • I don't think that partitioning affects locking, but I could be wrong. I'd look first at transaction isolation levels. – Tab Alleman Sep 19 '14 at 13:57
  • Thank you Tab, I was just reading up on that from http://technet.microsoft.com/en-us/library/ms175519(v=sql.105).aspx – A-Bop Sep 19 '14 at 14:02
  • 'reading list': http://stackoverflow.com/questions/2471055/why-use-a-read-uncommitted-isolation-level http://stackoverflow.com/questions/6945487/slow-sql-transaction-blocks-table – A-Bop Sep 19 '14 at 15:27

2 Answers


Partitioning shouldn't be necessary on a table with fewer than 300,000 rows, unless each record is really big. If a record occupies more than 4k bytes, then only one row fits on each 8k page, so you end up with about 300,000 pages (roughly 2,400,000,000 bytes), and that is starting to get large.

Indexes are usually the solution for something like this. Taking more than a second to return 300 records from an indexed table seems like a long time (unless the records are really big or network overhead adds to the time). Your table and its indexes should both fit into memory, so check your memory configuration.
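
For example, you can compare the table's actual footprint to the memory SQL Server is allowed to use with something like this (the table name is just a placeholder):

```sql
-- How big is the table plus its indexes? (table name is only an example)
EXEC sp_spaceused 'dbo.StockTransactions';

-- How much memory is SQL Server configured to use?
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)';
```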

The next question is about the application code. If it uses cursors, those might be the culprit, because they can lock rows under certain circumstances. For read-only cursors, FAST_FORWARD or FORWARD_ONLY READ_ONLY should be fast. If the application code is locking all the historical records as it reads them, you might get contention; after all, two records for different customers can sit on the same data page. The solution is to not lock the historical records as you read them, or to avoid using cursors altogether.
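
To illustrate the read-only case, a FAST_FORWARD declaration looks roughly like this (the table and column names are guesses at the schema, not taken from the question); a single set-based SELECT would usually still be preferable:

```sql
-- Illustrative only: a read-only, forward-only cursor over one customer's history.
-- Table/column names are guesses at the schema.
DECLARE @AccountListID int = 12345;          -- example customer
DECLARE @TransactionID int, @DateTimeCheckedIn datetime;

DECLARE history_cur CURSOR FAST_FORWARD FOR
    SELECT TransactionID, DateTimeCheckedIn
    FROM dbo.StockTransactions
    WHERE AccountListID = @AccountListID;

OPEN history_cur;
FETCH NEXT FROM history_cur INTO @TransactionID, @DateTimeCheckedIn;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- ... per-row processing here ...
    FETCH NEXT FROM history_cur INTO @TransactionID, @DateTimeCheckedIn;
END
CLOSE history_cur;
DEALLOCATE history_cur;
```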

Gordon Linoff
  • What are cursors? It's coming from a .NET C# web application, if that helps. I haven't intentionally used cursors (I don't know what they are!) – A-Bop Sep 19 '14 at 15:23

I don't think partitioning will be necessary here. You can probably fix this with a well-placed index: I'm thinking a single index covering (in order) company, part number, and quantity. Or, if it's an old server, possibly just add RAM. Finally, since this reads a lot of older data for transactions that are likely never (or at most very rarely) updated once written, you might do better with the READ UNCOMMITTED isolation level for this query.
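
Something along these lines, for example (the column and table names here are guesses at your schema, so treat it as a sketch):

```sql
-- Column/table names are guesses; adjust to what the picking query actually selects.
CREATE INDEX IX_Transactions_Picking
    ON dbo.StockTransactions (AccountListID, PartNumber, Quantity);

-- Either relax the isolation level for the picking query's connection...
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

-- ...or hint just this query:
DECLARE @AccountListID int = 12345;          -- example customer
SELECT PartNumber, Quantity, DateTimeCheckedIn
FROM dbo.StockTransactions WITH (NOLOCK)
WHERE AccountListID = @AccountListID;
```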

Joel Coehoorn
  • Moving server was my first option, which we've done. It only has 1GB, but about 20% is available at all times, even during picks (I've looked in perfmon and there's nothing drastic happening; it never peaks memory). I will look into READ UNCOMMITTED; that seems like the next logical step. – A-Bop Sep 19 '14 at 15:25
  • 1GB is a ridiculously small amount of RAM for SQL Server in most cases. Just because perfmon doesn't show memory peaking, it doesn't mean SQL Server wouldn't like to use more. You'll likely see dramatically different performance by moving to a reasonably sized box. – Joel Coehoorn Sep 19 '14 at 15:58
  • There is literally just this one site on this box, with Windows Web Server and SQL Server Web edition. Adding another gig of RAM will be my next option. – A-Bop Sep 19 '14 at 20:52
  • IIS is on there, too? And you only have 1GB for the whole box? Get that thing up to like 8GB! As a starting point! Memory is cheap now. You want enough RAM for base Windows to be happy (2GB-ish), IIS to do its thing (another 1-4GB, based more on traffic than site size), enough for SQL Server to keep the entire DB data + indexes (see sp_spaceused) in memory, plus some room to grow as new transactions are added. – Joel Coehoorn Sep 19 '14 at 20:56
  • Thanks for your recommendations. We rarely host workflow systems that constantly need to query our databases (we mostly do brochure sites), so a dozen app pools + IIS + MSSQL usually suits us fine at 4GB, as there are minimal DB requests and 200-row tables at most. So obviously 8GB for one site is a culture shock! But I believe you're correct. Just need to convince the client to throw more money at it!! – A-Bop Sep 19 '14 at 21:06