
I built a stored procedure that finds the shortest route for a company with many warehouses. The procedure is heavily used by the system. After many optimizations, including memory-optimized tables, when I run sp_BlitzCache I get 2 warnings:

  1. Parameter Sniffing;
  2. You have 83866 total plans in your cache, with 100.00% plans created in the past 24 hours, 100.00% created in the past 4 hours, and 100.00% created in the past 1 hour.
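
For reference, a query along these lines can show which statement text is producing the near-duplicate plans (a rough sketch of my own, not the exact check sp_BlitzCache performs):

```sql
-- Group cached entries by query_hash to spot statements that generate
-- many distinct plans (sketch; the column choice is my own).
SELECT TOP (20)
       qs.query_hash,
       COUNT(DISTINCT qs.query_plan_hash)  AS distinct_plans,
       COUNT(*)                            AS cached_entries,
       MIN(CAST(st.text AS nvarchar(200))) AS sample_text
FROM   sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
GROUP BY qs.query_hash
ORDER BY cached_entries DESC;
```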

The procedure is called only with parameters (no hardcoded values). You can find a query plan here: https://www.brentozar.com/pastetheplan/?id=r1rW59QvN.

I really don't understand why the engine generates a plan for each execution and what I should check. Also, sp_BlitzCache says "We couldn't find a plan for this query. Possible reasons for this include dynamic SQL, RECOMPILE hints, and encrypted code." I don't use dynamic SQL, RECOMPILE hints, or encrypted code.
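
One way to check whether a plan for the procedure is cached at all is to look it up in sys.dm_exec_procedure_stats (a sketch; dbo.GetShortestRoute is a placeholder, not the real procedure name):

```sql
-- Look for a cached entry for the procedure itself.
-- The procedure name below is a placeholder.
SELECT ps.plan_handle,
       ps.cached_time,
       ps.execution_count,
       ps.total_worker_time,
       ps.total_logical_reads
FROM   sys.dm_exec_procedure_stats AS ps
WHERE  ps.database_id = DB_ID()
  AND  ps.object_id   = OBJECT_ID(N'dbo.GetShortestRoute');
```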

What should I check?

UPDATE

I checked the impact of this procedure on the server, as Jeroen Mostert suggested in the comments (thank you for your response!). This procedure accounts for only 0.66% of the average IO of the top 10 procedures on the server, but it is the only procedure without a cached plan, so the question still remains: why so many plans?
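
For context, the comparison is the kind that sys.dm_exec_procedure_stats can give per procedure (a sketch only; the exact query behind the 0.66% figure may differ):

```sql
-- Average IO per procedure for the top 10 procedures (sketch).
SELECT TOP (10)
       OBJECT_NAME(ps.object_id, ps.database_id)    AS procedure_name,
       ps.execution_count,
       ps.total_logical_reads / ps.execution_count  AS avg_logical_reads,
       ps.total_elapsed_time  / ps.execution_count  AS avg_elapsed_time_us
FROM   sys.dm_exec_procedure_stats AS ps
WHERE  ps.database_id = DB_ID()
ORDER BY avg_logical_reads DESC;
```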

I haven't tried running the stored procedure without the memory-optimized tables. I had to use them because the proc needs to run as fast as lightning (and it runs in less than 30 milliseconds in some cases). Also, I don't have the guts to run the procedure without the memory-optimized tables, because I cannot reproduce the workload on test servers.

Thank you!

Nițu Alexandru
  • Have you considered performing the shortest path calculation in your application instead of the database? Then you would be able to use existing graph libraries. – Magnus Mar 11 '19 at 08:39
  • It is not actually a geographical route calculation, but a schedule with car departure and arrival times, sources and destinations, so it is only a select with a few filters and an order by. – Nițu Alexandru Mar 11 '19 at 08:43
  • Have you updated statistics on your tables? How often is it executed? Try to clear the cache for your procedure (https://www.sqlskills.com/blogs/glenn/eight-different-ways-to-clear-the-sql-server-plan-cache/), update statistics and run it one more time (a sketch follows after the comments). – Meow Meow Mar 11 '19 at 09:12
  • The 2 most important tables in the select are memory-optimized. These tables contain cached data generated at a given time. When the time comes, the data is replaced (truncate and insert); there is no other delete or update. I don't think statistics are the problem. Also, starting with SQL Server 2016, statistics on memory-optimized tables are updated automatically when the compatibility level is at least 130. When I clear the cache, hundreds of plans are generated every minute. The procedure is called 50,000-100,000 times a day. – Nițu Alexandru Mar 11 '19 at 11:08
  • Does the same thing happen if those two tables are *not* memory-optimized? Mixing memory-optimized tables with disk-based tables is a scenario that still doesn't work quite as smoothly as using only one type of object (while using only natively compiled stored procedures for the memory-based tables). Also, try things with the server option "optimize for ad-hoc workloads" enabled if it's not already -- this reduces one-off plan caching to stubs, which will not cut down on compilations, but may reduce their impact (and is almost never bad in and of itself); see the sketch after the comments. – Jeroen Mostert Mar 12 '19 at 15:23
  • Another thing worth exploring is `OPTION (KEEP[FIXED] PLAN)` for the parts of the procedure that are suspected sources of recompilation (again, the memory-optimized tables spring to mind). Obviously, some care needs to be taken to ensure this does not cache an inappropriate plan, but it can cut down considerably on unnecessary compilations. (I don't know if it has any effect with memory-optimized tables, I've only ever used it for disk-based tables; a sketch follows after the comments.) – Jeroen Mostert Mar 12 '19 at 15:27
  • Last but not least: check if this is actually measurably impacting your server. Although recompilations are not *good*, if the work done by your stored procedure is considerably more than the work needed for recompilation (which tends to be the case) the excess plans may be survivable, if not optimal. Plan stubs can keep down the memory use, while the CPU cost can just be swallowed. – Jeroen Mostert Mar 12 '19 at 15:35
  • If you can't test any changes on test servers, you can still test them on the live servers (up to a point) -- check out `DBCC CLONEDATABASE`. I don't know if `CLONEDATABASE` works with memory-optimized tables -- if it does not, you can still work with things by switching the copy to read-write (not officially supported, but still works if you don't tell anyone) and creating some tables yourself. You can't duplicate the workload that way, but you can get a fairly accurate indication of what changes in the stored procedure affect plans (a sketch follows after the comments). (Speaking of which, also look into Query Store.) – Jeroen Mostert Mar 13 '19 at 06:14
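
For reference, rough sketches of the suggestions from the comments follow; every procedure, table and database name below is a placeholder, not my real schema.

Clearing the cache for just this procedure and refreshing statistics, as Meow Meow suggested:

```sql
-- Evict only this procedure's plan and refresh statistics.
-- dbo.GetShortestRoute and dbo.RouteSchedule are placeholder names.
DECLARE @plan_handle varbinary(64);

SELECT @plan_handle = ps.plan_handle
FROM   sys.dm_exec_procedure_stats AS ps
WHERE  ps.database_id = DB_ID()
  AND  ps.object_id   = OBJECT_ID(N'dbo.GetShortestRoute');

IF @plan_handle IS NOT NULL
    DBCC FREEPROCCACHE (@plan_handle);   -- clears only this plan, not the whole cache

UPDATE STATISTICS dbo.RouteSchedule WITH FULLSCAN;   -- one of the memory-optimized tables
```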
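
Enabling "optimize for ad hoc workloads", as Jeroen Mostert suggested (a server-wide setting, so it should be agreed with the DBA first):

```sql
-- Reduce one-off plan caching to stubs; instance-wide setting.
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure N'optimize for ad hoc workloads', 1;
RECONFIGURE;
```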
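
A sketch of the OPTION (KEEPFIXED PLAN) idea on the kind of statement the procedure runs; the table, columns and variables here are invented for illustration:

```sql
-- Stand-ins for the procedure's parameters, for illustration only.
DECLARE @SourceId int = 1, @DestinationId int = 2;

SELECT TOP (1)
       s.DepartureTime,
       s.ArrivalTime
FROM   dbo.RouteSchedule AS s                      -- placeholder memory-optimized table
WHERE  s.SourceWarehouseId      = @SourceId
  AND  s.DestinationWarehouseId = @DestinationId
ORDER BY s.DepartureTime
OPTION (KEEPFIXED PLAN);   -- do not recompile because of statistics changes
```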
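
And a sketch of the DBCC CLONEDATABASE approach for experimenting with plan shapes without the production data (database names are placeholders):

```sql
-- Creates a schema-and-statistics-only copy of the database (no data).
DBCC CLONEDATABASE (Warehouses, Warehouses_Clone);

-- The clone comes out read-only; making it read-write for experiments
-- is the "not officially supported" step mentioned in the comment.
ALTER DATABASE Warehouses_Clone SET READ_WRITE;
```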
