What is the most common performance bottleneck that is not caused by the database structure?
See http://stackoverflow.com/questions/18783/sql-what-are-your-favorite-performance-tricks – Lance Roberts Aug 12 '09 at 17:53
17 Answers
Let's see (in no particular order):
- Cursors
- Non-sargable WHERE clauses
- Failure to index foreign key fields
- Failure to index fields commonly used in the WHERE clause
- Correlated subqueries
- Accidental cross joins forcing a DISTINCT on the result set
- Badly performing code coming from ORMs
- Code that causes too many reads, or that is called more often than it needs to be (I've seen applications that send the same query repeatedly, or on every page load)
- Overloaded network pipe
- User-defined functions causing row-by-row processing
- Parameter sniffing
- Out-of-date statistics
- UNION instead of UNION ALL
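To illustrate the sargability point above: wrapping a function around a column defeats index seeks, while an equivalent range predicate does not. A minimal sketch, assuming a hypothetical Orders table with an index on OrderDate:

```sql
-- Non-sargable: YEAR() is applied to every row, forcing a scan
-- even though OrderDate is indexed.
SELECT OrderID FROM Orders
WHERE YEAR(OrderDate) = 2009;

-- Sargable rewrite: a plain range predicate lets the optimizer
-- seek on the OrderDate index instead.
SELECT OrderID FROM Orders
WHERE OrderDate >= '20090101' AND OrderDate < '20100101';
```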

Table scans, because:
- the index does not exist
- stats are out of date
- functions in the where clause prevent index usage
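A sketch of the usual fixes for the first two causes, in SQL Server syntax (the table and column names here are made up for illustration):

```sql
-- Create the missing index so the table scan can become a seek.
CREATE INDEX IX_Customers_LastName ON Customers (LastName);

-- Refresh out-of-date statistics so the optimizer actually
-- chooses the index with accurate row estimates.
UPDATE STATISTICS Customers WITH FULLSCAN;
```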

- Server - memory and storage types.
- Network - latency and bandwidth issues.
- Indexing - not sure if you consider this database structure.
- Queries - bad queries with join issues that may cause full table scans.

IMHO indexes (as well as statistics) are database structure rather than database logic. – klashar Aug 13 '09 at 19:53
Applying a scalar function to each row in the result set:
SELECT fn_EvaluateSomeValue([ColumnName]) FROM LargeTable
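One way around this, assuming the logic can be expressed inline: replace the scalar UDF with an inline table-valued function, which the optimizer can expand into the main plan instead of invoking once per row. The function body below is a placeholder, not the original logic:

```sql
-- Inline TVF: a single RETURN (SELECT ...) the optimizer can inline.
CREATE FUNCTION fn_EvaluateSomeValueInline (@v int)
RETURNS TABLE
AS RETURN (SELECT CASE WHEN @v > 0 THEN 'high' ELSE 'low' END AS Result);
GO

-- CROSS APPLY replaces the per-row scalar call.
SELECT r.Result
FROM LargeTable
CROSS APPLY fn_EvaluateSomeValueInline([ColumnName]) AS r;
```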

Not necessarily bad. If you need to apply the function, you have to do it in the database or in the client. If you do it in the database you get parallelism for free. The real issue is using a scalar function in the WHERE clause. – erikkallen Dec 22 '09 at 13:01
Note that scalar UDFs prevent parallelism in the entire plan (as of SQL Server 2012), whereas inline table-valued functions do not ([link](http://dba.stackexchange.com/questions/72330)). – crokusek Jan 07 '15 at 22:55
- Using table data one row at a time instead of one table at a time (i.e. cursors).
- Unnecessary (or badly designed) locks.
- Logging things that don't need to be logged (DELETE FROM table instead of TRUNCATE TABLE, etc.).
- Leaving constraints on when bulk loading data, which seriously slows down the inserts. Turn them off, load the data, then turn them back on.
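The constraint trick above might look like this in SQL Server (the table name and file path are illustrative):

```sql
-- Turn constraint checking off for the load...
ALTER TABLE Sales NOCHECK CONSTRAINT ALL;

BULK INSERT Sales
FROM 'C:\load\sales.dat'
WITH (TABLOCK);

-- ...then turn it back on. WITH CHECK re-validates the loaded rows
-- so the constraints remain trusted by the optimizer.
ALTER TABLE Sales WITH CHECK CHECK CONSTRAINT ALL;
```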

Ditto ref cursors; also badly written client apps that fetch a huge recordset and then process it themselves, because the developer didn't understand that SQL is anything more than a big heap to hold data in.
Trying to do everything in one huge query instead of breaking it down into sensible steps. Sometimes this gives SQL no chance to properly reduce the number of index (hopefully!) pages it has to look at. Related to this: not understanding how to use temp tables to hand off manipulations before updating large indexed tables.
Not having indexes on joins or commonly queried fields.
Not including fields in indexes when you always seek and return the same value (for instance, an employee's formatted name when you look up by EmployeeID), causing two operations instead of one.
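The employee example above can be handled with a covering index: INCLUDE stores the returned column in the index leaf, so the query is a single seek instead of a seek plus a key lookup (names are illustrative):

```sql
-- Covers "look up FormattedName by EmployeeID" in one index seek,
-- with no second trip into the base table.
CREATE INDEX IX_Employees_EmployeeID
ON Employees (EmployeeID)
INCLUDE (FormattedName);
```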
Some low-hanging fruit:
- Scalar UDFs
- omitting text() in XPath queries
- missing columns in indices (causing key lookups)
- using cursors
- not using fast_forward when possible, when cursors are necessary
- using JOIN when APPLY will do, or vice versa (test!)
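For the fast_forward point: when a cursor really is unavoidable, declaring it FAST_FORWARD (forward-only plus read-only, the lightest cursor type) is usually the cheapest option. A minimal sketch over a hypothetical table:

```sql
-- FAST_FORWARD = FORWARD_ONLY + READ_ONLY.
DECLARE emp_cursor CURSOR FAST_FORWARD FOR
    SELECT EmployeeID FROM Employees;

OPEN emp_cursor;
FETCH NEXT FROM emp_cursor;
WHILE @@FETCH_STATUS = 0
    FETCH NEXT FROM emp_cursor;
CLOSE emp_cursor;
DEALLOCATE emp_cursor;
```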

Triggers can be a HUGE bottleneck (and headache) when you are processing large batches of commands.

Define badly... In my experience across a variety of work environments, triggers usually do more than trivial stuff, like cascading updates and/or inserts/updates to other tables. True, most of the time this is not taken to extremes, but I have seen a few occasions where "simple" triggers brought the whole database to a grinding halt. – RiddlerDev Aug 12 '09 at 23:49
They need not be a problem. I've seen an application where each table had a trigger with a few K LOC (!!!). Bad design, but it didn't affect performance too much. – erikkallen Sep 16 '09 at 10:11
If the I/O system in the server isn't up to the job, you can get latch contention on tempdb and this in turn can cause serious performance problems.

I tend to run into the following bottlenecks (in order of frequency):
- Missing or incorrect indices (resulting in table scans)
- Poorly written queries
- I/O contention
Having a poor my.cnf configuration can kill a server even with proper database design and indexing. Read:
http://www.mysqlperformanceblog.com/2006/09/29/what-to-tune-in-mysql-server-after-installation/
http://www.mysqlperformanceblog.com/2007/11/01/innodb-performance-optimization-basics/
http://www.mysqlperformanceblog.com/2007/11/03/choosing-innodb_buffer_pool_size/
http://www.mysqlperformanceblog.com/2007/08/18/how-fast-can-you-sort-data-with-mysql/
http://www.mysqlperformanceblog.com/2007/09/17/mysql-what-read_buffer_size-value-is-optimal/
http://dev.mysql.com/doc/refman/5.1/en/server-system-variables.html
Helpful my.cnf analytical tool: http://www.day32.com/MySQL/tuning-primer.sh
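As a rough sketch of the kind of my.cnf settings those articles walk through (the values here are illustrative only and must be sized to the actual machine and workload, not copied):

```ini
[mysqld]
# InnoDB buffer pool: often the single biggest win; commonly sized
# to a large fraction of RAM on a dedicated database server.
innodb_buffer_pool_size = 2G
innodb_log_file_size    = 256M

# Per-connection buffers: keep modest, they are allocated per thread.
sort_buffer_size        = 2M
read_buffer_size        = 1M
```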

I've never managed to find much useful information on breaking up large queries; in Oracle it seems you're advised to keep everything together in one query rather than use temp tables. You can also get redo log issues if your temp table holds a lot of data. I'd like to know more / get some links?
For us, adjusting the BIOS Settings and Power Plan settings resulted in solid ~2X faster times for queries that were CPU intensive (searching, duration about 5s).
1) BIOS settings changed from "Performance Per Watt Optimized (DAPC)" to either "Performance" or "Performance Per Watt Optimized (OS)". Both options seemed to perform similarly; however, the "OS"-controlled one did not run the fan at max continuously.
2) The Windows power plan was changed from "Balanced" to "High performance".
This was discovered after upgrading the CPUs and seeing almost no improvement.
Command to check CPU speed from SQLServerCentral:
PS> gwmi -class win32_Processor | SELECT CurrentClockSpeed, MaxClockSpeed
Update 6/13/2017: Warning: that number may report Current = Max even though Task Manager shows a different (much lower) number. Also, the BIOS mode "Performance Per Watt Optimized (OS)" suddenly stopped working for us after some hardware changes; the CPU was locked at 50% max frequency regardless of load. Only "Performance" mode (loud-fan mode) still achieved max CPU. We must be missing a firmware/software patch.

Btw: When these same adjustments were applied to the IIS Servers, it resulted in a measured ~30% faster calls overall. We did not delve into whether this was improving the tcp link with the DB vs the post processing required in C# by these particular calls. – crokusek Jan 21 '15 at 01:39
Faulty hardware. I had a server once where the hard drive had some sectors it was having difficulty reading, and replacing it produced a noticeable performance increase.

Clauses to avoid (if possible):
- DISTINCT / UNIQUE
- UNION
- GROUP BY
- ORDER BY
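For UNION in particular, the cost is the implicit de-duplication; when the inputs can't overlap (or duplicates don't matter), UNION ALL skips the sort/hash entirely. The tables here are hypothetical:

```sql
-- UNION would sort/hash the combined set to remove duplicates;
-- UNION ALL just concatenates the two results.
SELECT OrderID FROM ArchivedOrders
UNION ALL
SELECT OrderID FROM CurrentOrders;
```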
