
We have a specific query that runs a lot slower inside a proc. I have to add here that it is enclosed within a two-level cursor nest; however, each cursor's result set contains a single row per iteration.

Let me first state things that we tried and failed:

  • Avoiding parameter sniffing by using option (recompile) and option (optimize for (@var UNKNOWN))
  • The suggestions in this thread. The variables that seem to be the problem are actually local ones, not proc parameters.
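
For reference, this is roughly how we placed those hints (a sketch, not the exact proc text):

```sql
-- Sketch: statement-level hints we tried on the main query.
-- Neither changed the slow plan.
insert into @tbl_loyal_loop2 (cnt, store, /* ... */ loyalty_policy)
select @cnt, t.store, /* ... */ @loop2_loyalty_policy
from loyalty_policy_data ld
    inner join #tbl_data t on t.insdate >= ld.fdateactive and t.insdate <= ld.tdateactive
where ld.loyalty_policy = @loop2_loyalty_policy
option (recompile);
-- and, in a separate attempt:
-- option (optimize for (@tpdim1 unknown, @tpdim2 unknown));
```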

Here is the query as taken from inside the proc/cursors.

 select @tpdim1 = dim1, @tpdim2 = dim2, @typecalc = typecalc
    from loyalty_policy where code=@loop2_loyalty_policy

Note: @loop2_loyalty_policy is the variable taken from the result of the inner cursor and holds one value. code is the PK of the loyalty_policy table, so @tpdim1 and @tpdim2 each hold a single value.

SET STATISTICS PROFILE ON
SET STATISTICS XML ON
                  insert into @tbl_loyal_loop2 (cnt, store, map, pda, insdate, line, item, loyalty_policy_data, loyal_calc, loyalty_policy)
                  select @cnt, t.store, t.map, t.pda, t.insdate, t.line, t.item, ld.tab_id,  
                  case @typecalc
                        when 1 then convert(bigint,round(isnull(t.valueFromTran,0.00) * ld.value , 0 ) )
                        when 2 then convert(bigint,round(isnull(t.netvalue,0.00) * ld.value , 0 ) )
                        when 3 then convert(bigint,isnull(t.qty,0) * ld.value )
                        when 4 then convert(bigint,round(isnull(t.valueFromPrice2,0.00) * ld.value , 0 ) )
                        when 5 then convert(bigint,round(isnull(t.valueFromPrice3,0.00) * ld.value , 0 ) )
                        when 6 then convert(bigint,round(isnull(t.valueFromPrice4,0.00) * ld.value , 0 ) )
                  else 0 end
                  ,@loop2_loyalty_policy
                  from loyalty_policy_data ld -- with (index=ind_loyalty_policy_02)
                              inner join #tbl_data t on t.insdate >= ld.fdateactive and t.insdate <= ld.tdateactive
                  where ld.loyalty_policy = @loop2_loyalty_policy 
                  and ld.tdateactive >= @from_rundate and ld.fdateactive <= @to_rundate
                  and t.dbupddate > @loop1_dbupddate  
                  and
                        case when @tpdim1 is null then '' 
                        else  
                              case  @tpdim1 
                                    when 'STORE'            then t.store when 'BRAND' then t.brand  when 'CAT1' then t.cat1   when 'CAT2' then t.cat2   when 'CAT3' then t.cat3   when 'ITEM' then t.item    
                                    when 'CUSTGROUP'  then t.custgroup when 'CUSTGROUP2' then t.custgroup2 when 'CUSTGROUP3' then t.custgroup3
                                    when 'CUSTOMER'         then @customer
                              else '' end
                        end
                        = case when @tpdim1 is null then '' else ld.dim1 end
                  and 
                        case when @tpdim2 is null then '' 
                        else  
                              case  @tpdim2 
                                    when 'STORE'            then t.store when 'BRAND' then t.brand  when 'CAT1' then t.cat1   when 'CAT2' then t.cat2   when 'CAT3' then t.cat3   when 'ITEM' then t.item    
                                    when 'CUSTGROUP'  then t.custgroup when 'CUSTGROUP2' then t.custgroup2 when 'CUSTGROUP3' then t.custgroup3
                                    when 'CUSTOMER'         then @customer                     
                              else '' end
                        end
                        = case when @tpdim2 is null then '' else ld.dim2 end
SET STATISTICS XML OFF

The SET STATISTICS XML for the above returns this plan.

In trying to debug it, we isolated the query in the following form (here you can also see how table #a is built, which contains exactly the same data as the previous #tbl_data):

drop table #a;
select dt.dbupddate, dt.insdate, dt.map, dt.pda, pt.line, pt.item, 
( pt.exp_qty - pt.imp_qty)  as qty,  
( pt.exp_value + pt.imp_value )  as netvalue, 
( (document.exp_val - document.imp_val) * (pt.netvalue - pt.vat_value) )  as valueFromTran,  
( (document.exp_val - document.imp_val) * ( ( (pt.qty - pt.qty_gift) * isnull(pt.price2,0.00) ) * (1.00-( pt.disc_perc / 100)) ) ) as valueFromPrice2, 
( (document.exp_val - document.imp_val) * ( ( (pt.qty - pt.qty_gift) * isnull(pt.price3,0.00) ) * (1.00-( pt.disc_perc / 100)) ) ) as valueFromPrice3, 
( (document.exp_val - document.imp_val) * ( ( (pt.qty - pt.qty_gift) * isnull(pt.price4,0.00) ) * (1.00-( pt.disc_perc / 100)) ) ) as valueFromPrice4, 
dt.store, item.brand, item.cat1, item.cat2, item.cat3, customer.custgroup, customer.custgroup2, customer.custgroup3 
into #a
from document with (nolock) 
      inner join dt with (nolock) on dt.doccode = document.code 
      inner join store with (nolock) on store.code = dt.store and store.calc_loyal = 1 
      inner join customer with (nolock) on customer.code = dt.customer  
      inner join pt with (nolock) on dt.map = pt.map and dt.pda=pt.pda 
      inner join item with (nolock) on item.code = pt.item and item.itemtype in (select code from itemtype with (nolock) where vsales = 1)
where dt.canceled = 0 and document.is_opposite = 0 and document.type = 3 and dt.customer=N'EL4444444'
and dt.insdate >= '20180109' and dt.insdate <= '20190108' ;



SET STATISTICS PROFILE ON 
                  select t.store, t.map, t.pda, t.insdate, t.line, t.item, ld.tab_id,  
                  case 4
                        when 1 then convert(bigint,round(isnull(t.valueFromTran,0.00) * ld.value , 0 ) )
                        when 2 then convert(bigint,round(isnull(t.netvalue,0.00) * ld.value , 0 ) )
                        when 3 then convert(bigint,isnull(t.qty,0) * ld.value )
                        when 4 then convert(bigint,round(isnull(t.valueFromPrice2,0.00) * ld.value , 0 ) )
                        when 5 then convert(bigint,round(isnull(t.valueFromPrice3,0.00) * ld.value , 0 ) )
                        when 6 then convert(bigint,round(isnull(t.valueFromPrice4,0.00) * ld.value , 0 ) )
                  else 0 end
                  ,'003'
                  --select count(*)
                  from loyalty_policy_data ld with (index=ind_loyalty_policy_02)
                              inner join #a t on t.insdate >= ld.fdateactive and t.insdate <= ld.tdateactive
                  where ld.loyalty_policy = '003' 
                  --and ld.tdateactive >= '20180109' and ld.fdateactive <= '20190108'
                  and t.dbupddate > '20000101'
      and 
                        case when 'CUSTOMER' is null then '' 
                        else  
                              case  'CUSTOMER' 
                                    when 'STORE'            then t.store when 'BRAND' then t.brand  when 'CAT1' then t.cat1   when 'CAT2' then t.cat2   when 'CAT3' then t.cat3   when 'ITEM' then t.item    
                                    when 'CUSTGROUP'  then t.custgroup when 'CUSTGROUP2' then t.custgroup2 when 'CUSTGROUP3' then t.custgroup3
                                    when 'CUSTOMER'         then 'EL0134366'
                              else '' end
                        end
                        = case when 'CUSTOMER' is null then '' else ld.dim1 end
                  and 
                        case when 'BRAND' is null then '' 
                        else  
                              case  'BRAND' 
                                    when 'STORE'            then t.store when 'BRAND' then t.brand  when 'CAT1' then t.cat1   when 'CAT2' then t.cat2   when 'CAT3' then t.cat3   when 'ITEM' then t.item    
                                    when 'CUSTGROUP'  then t.custgroup when 'CUSTGROUP2' then t.custgroup2 when 'CUSTGROUP3' then t.custgroup3
                                    when 'CUSTOMER'         then 'EL0134366'

                              else '' end
                        end
                        = case when 'BRAND' is null then '' else ld.dim2 end
SET STATISTICS PROFILE off    

And here is the execution plan. This runs a LOT faster.

Why this humongous difference? From my limited knowledge of execution-plan analysis, I have noticed:

  1. The first (slow) query has an estimated row count of ~9,700 on the index spool operation, but an actual count of 3 million.
  2. The second query uses parallelism in many of its operators.
  3. The only "real" difference I can see in the second query is the hand-substituted values of @tpdim1 and @tpdim2. Sure enough, when we went into the first query's proc code and replaced @tpdim1 and @tpdim2 with the single values they should get, it ran as fast as the second query.

Could you please explain this difference and propose some advice to fix the procedure?


Edit: As Laughing Vergil recommended, I replaced the literals in the second query with previously declared variables, and again it ran slowly!


Edit 2: I have some additional info from doing some further research.

First, I have isolated the problem to this line:

case when @tpdim1 is null then '' <-- This uses the slow plan

case when 'CUSTOMER' is null then '' <-- This uses the fast plan

This is true in the ad-hoc query, so there is no need to trouble ourselves with procs and/or cursors.

This keeps happening even if I change the code to the recommended dynamic WHERE structure.

I haven't created any sample data yet, but the important information (as can be seen in the plans) is that loyalty_policy_data has about 720k rows if we filter only by loyalty_policy = @loop2_loyalty_policy. However, if we also evaluate the @tpdim1 condition, which essentially is dim1 = N'EL0134366', only 4 rows are returned.

The difference between the plans, then, is when this condition is evaluated relative to the date-checking conditions.

In the fast plan, it is evaluated first: the seek on the loyalty policy value carries the dim1 comparison as a residual (non-seek) predicate. Even though this predicate is not covered by the index, only 4 rows come out, and all the other operators have sensible sizes.

In contrast, the slow plan painfully disregards this predicate until far too late. If I've read it correctly, it performs a nested loops join with loyalty_policy_data as the outer table (which is crazy), passing the needed columns in as outer references. For each such row, the index spool scans the #table (~1k rows) and finds about 250 matches, which are passed to the filter that finally does the @tpdim1 filtering. Thus roughly 250 × 700k rows reach the filter operator.

So now I think I know what happens, but I can't figure out why.
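
A minimal way to see the two shapes side by side (a simplified sketch against the same table, using the values from my example; the real query compares the CASE over the t columns instead):

```sql
-- Slow shape: with a variable, the optimizer must keep both CASE
-- branches valid for any future value of @tpdim1.
declare @tpdim1 nvarchar(20) = N'CUSTOMER';

select count(*)
from loyalty_policy_data ld
where ld.loyalty_policy = '003'
  and case when @tpdim1 is null then '' else ld.dim1 end
      = case when @tpdim1 is null then '' else N'EL0134366' end;

-- Fast shape: 'CUSTOMER' is a literal, so the CASE folds away at
-- compile time and the predicate becomes a plain ld.dim1 = N'EL0134366'.
select count(*)
from loyalty_policy_data ld
where ld.loyalty_policy = '003'
  and case when 'CUSTOMER' is null then '' else ld.dim1 end
      = case when 'CUSTOMER' is null then '' else N'EL0134366' end;
```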

George Menoutis
  • why the index hint? – S3S Dec 03 '18 at 17:33
  • Just one of the numerous desperate attempts at solving this. You may ignore it, same things happen without it. (that changes the plan :s but i'm not at work atm to edit this) – George Menoutis Dec 03 '18 at 17:37
  • Test your external query using the same variables as your procedure uses, and assign the values separately from the variable definitions, then post the new plan. A plan with constant values is frequently significantly more efficient than a plan with variables. Also, especially if you have a case where a `varchar` is being compared with an `nvarchar` in an index, performance can suffer with a variable. Make sure the variables you declare match the type in the stored procedure exactly. – Laughing Vergil Dec 03 '18 at 17:55
  • Did you try solutions from here https://stackoverflow.com/questions/440944/sql-server-query-fast-but-slow-from-procedure – Mike Twc Dec 08 '18 at 04:06
  • Am I missing DDL+DML or is this a "guessing game"? Please provide us the full demo which we can execute and examine locally starting with DDL+DML (If you provided it then I missed it) – Ronen Ariely Dec 09 '18 at 21:00
  • This is not a demo, so DML would be impossible. I find it a bit harsh to label it "guessing game" when the execution plans are there. – George Menoutis Dec 10 '18 at 11:23
  • You're more likely to get to the people who can answer this effectively by asking over at the more specialised [dba.se](https://dba.stackexchange.com/), if you haven't already. – Bridge Dec 10 '18 at 11:46
  • @GeorgeMenoutis I did not ask for the real data but for sample data to reproduce the issue. In any case I wish you good luck. Personally I will not dive into something in a community forum in this condition. In order to discuss performance and provide a better solution, I need to test the new solutions, and your EP will not give me the EP of my other solutions. Instead of bounty points you could do the work to provide the information. With that being said, I am sure other people will provide you with what you are looking for. Again, good luck and have a great day `:-)` – Ronen Ariely Dec 10 '18 at 23:20

3 Answers


After cleaning up the query for readability, I have the following.

insert into @tbl_loyal_loop2 
( cnt, 
  store, 
  map, 
  pda, 
  insdate, 
  line, 
  item, 
  loyalty_policy_data, 
  loyal_calc, 
  loyalty_policy
)
select 
      @cnt, 
      t.store, 
      t.map, 
      t.pda, 
      t.insdate, 
      t.line, 
      t.item, 
      ld.tab_id,
      convert(bigint, round( coalesce(
         case @typecalc
               when 1 then t.valueFromTran
               when 2 then t.netvalue
               when 3 then t.qty
               when 4 then t.valueFromPrice2
               when 5 then t.valueFromPrice3
               when 6 then t.valueFromPrice4
               else 0 
            END,   0.00) * ld.value , 0 ) ),
      @loop2_loyalty_policy
   from 
      loyalty_policy_data ld  -- with (index=ind_loyalty_policy_02)
         inner join #tbl_data t 
            on t.insdate >= ld.fdateactive 
            and t.insdate <= ld.tdateactive
   where 
          ld.loyalty_policy = @loop2_loyalty_policy 
      and ld.tdateactive >= @from_rundate 
      and ld.fdateactive <= @to_rundate
      and t.dbupddate > @loop1_dbupddate  
      and (   @tpdim1 is null
           OR ld.dim1 = case @tpdim1
                           when 'STORE' then t.store 
                           when 'BRAND' then t.brand  
                           when 'CAT1' then t.cat1   
                           when 'CAT2' then t.cat2   
                           when 'CAT3' then t.cat3   
                           when 'ITEM' then t.item    
                           when 'CUSTGROUP' then t.custgroup 
                           when 'CUSTGROUP2' then t.custgroup2 
                           when 'CUSTGROUP3' then t.custgroup3
                           when 'CUSTOMER' then @customer
                           else ''
                          END )
      and (   @tpdim2 is null
            OR ld.dim2 = case @tpdim2
                         when 'STORE' then t.store 
                         when 'BRAND' then t.brand  
                         when 'CAT1' then t.cat1
                         when 'CAT2' then t.cat2
                         when 'CAT3' then t.cat3
                         when 'ITEM' then t.item    
                         when 'CUSTGROUP' then t.custgroup 
                         when 'CUSTGROUP2' then t.custgroup2 
                         when 'CUSTGROUP3' then t.custgroup3
                         when 'CUSTOMER' then @customer
                         else '' 
                      END )

In addition, I would make sure you have a composite index on your loyalty_policy_data table: index on ( loyalty_policy, tdateactive, fdateactive, dim1, dim2 ). (Note that dbupddate belongs to #tbl_data, not to loyalty_policy_data, so it has no place in this index.)

This way you qualify all the fields used in your WHERE criteria. Don't rely on an index on just the key: the key PLUS the dates lets the engine resolve the specific date range and the JOIN conditions from values in the INDEX, without going back to the raw data pages.

As for your temporary table #tbl_data, ensure you have an index on ( insdate ), as that is the only JOIN criterion (in case you did not already have an index on that table).
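
A sketch of those suggestions as DDL (index names are made up; adjust column order to your most selective predicates):

```sql
-- Composite index covering the WHERE criteria on loyalty_policy_data
create index ix_loyalty_policy_data_filter
    on loyalty_policy_data (loyalty_policy, tdateactive, fdateactive, dim1, dim2);

-- Index on the temp table's only join column
create index ix_tbl_data_insdate on #tbl_data (insdate);
```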

Update, re: your comment --

From your comment about the slow vs the fast query depending on the NULL test:

@tpdim1 is null vs 'CUSTOMER' is null

A fixed string 'CUSTOMER' is never NULL, so the optimizer never has to consider the NULL branch. With the variable @tpdim1, both branches must stay in the plan (and the same goes for @customer being compared against ld.dim1 and ld.dim2). Maybe the test should be changed from

  and (   @tpdim1 is null
               OR ld.dim1 = case @tpdim1
                               when 'STORE' then t.store 
                               when 'BRAND' then t.brand  ... end
     )

to

  and ld.dim1 = case when @tpdim1 is null then ''
                     when @tpdim1 = 'STORE' then t.store
                     when @tpdim1 = 'BRAND' then t.brand  ... end

Same with the ld.dim2 case/when: test for NULL as the first branch of the @tpdim1 (and @tpdim2) tests. Note that a searched CASE (case when @tpdim1 is null ...) is needed here; a simple CASE of the form case @tpdim1 when NULL then ... would never match that branch, because simple CASE compares with =, and @tpdim1 = NULL is never true.

DRapp
  • Although your recommendations for optimizing are nice (I've already implemented the code proposed), your answer still does not shed light on why the literal-valued query works fast while the one using variables doesn't. Check Edit 2, I've added some info. – George Menoutis Dec 11 '18 at 14:32
  • @GeorgeMenoutis, see revised answer, comment area at bottom re: NULL and updating case condition. – DRapp Dec 11 '18 at 15:00
  • Hmm, I'll have to try this tomorrow. Bet it still will show the same behaviour because (as I failed to note): Replacing the @ with the literal on the `@tpdim1 is null` doesn't change anything - replacing on `case @tpdim1` is where the change is observed. – George Menoutis Dec 11 '18 at 15:05

To answer your question:

A clear and reproducible explanation of how and why the query analyzer behaves differently in those cases

The query optimizer behaves differently in those cases because the plan with variables must be valid for any possible future value of the parameters, so the optimiser generates a complicated generic plan that produces correct results even when the parameters are NULL.

The plan with literals (not variables) is usually more efficient, because the optimiser can greatly simplify your CASE logic during plan compilation. It has a better chance of choosing the optimal plan shape, because it is easier to take the available index information and cardinality estimates into account when the query is simpler and the filters have known values.


Martin Smith pointed out in the comments that you are using server version 10.0.2531.0, which is 2008 SP1 and does not have the parameter embedding optimization enabled. You would need at least SP1 CU5 on that branch for OPTION (RECOMPILE) to work properly (i.e. as I expected it to work in the explanation below).

Erland Sommarskog also discusses this in the article mentioned below; he says that you need to be on at least SP2.

If you can't update the server, check out the older version of Erland's article, Dynamic Search Conditions in T‑SQL – Version for SQL 2005 and Earlier, to see how to deal with this situation when a proper OPTION (RECOMPILE) is not available.
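
The core of that older-version approach is building the statement dynamically, so the optimizer only ever sees the branches that apply. A minimal sketch (the parameter types and the selected columns are assumptions, not taken from your proc):

```sql
-- Sketch: dynamic search conditions for builds without a working
-- statement-level OPTION (RECOMPILE). Only the needed predicate is
-- appended, so the compiled plan never carries the unused branches.
declare @sql nvarchar(max);

set @sql = N'select ld.tab_id, ld.value
from loyalty_policy_data ld
where ld.loyalty_policy = @policy';

if @tpdim1 is not null
    set @sql = @sql + N' and ld.dim1 = @dim1val';

exec sp_executesql
    @sql,
    N'@policy varchar(10), @dim1val nvarchar(50)',  -- assumed types
    @policy  = @loop2_loyalty_policy,
    @dim1val = @customer;  -- the value @tpdim1 maps to, e.g. for 'CUSTOMER'
```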


Here is my original answer.

I know that you said you tried it, but I'd still ask you to double-check: looking at your symptoms, OPTION (RECOMPILE) should help.

You need to add this option to the main query, not to the whole stored procedure. Like this:

insert into @tbl_loyal_loop2 (cnt, store, map, pda, insdate, line, item, loyalty_policy_data, loyal_calc, loyalty_policy)
select @cnt, t.store, t.map, t.pda, t.insdate, t.line, t.item, ld.tab_id,  
case @typecalc
    when 1 then convert(bigint,round(isnull(t.valueFromTran,0.00) * ld.value , 0 ) )
    when 2 then convert(bigint,round(isnull(t.netvalue,0.00) * ld.value , 0 ) )
    when 3 then convert(bigint,isnull(t.qty,0) * ld.value )
    when 4 then convert(bigint,round(isnull(t.valueFromPrice2,0.00) * ld.value , 0 ) )
    when 5 then convert(bigint,round(isnull(t.valueFromPrice3,0.00) * ld.value , 0 ) )
    when 6 then convert(bigint,round(isnull(t.valueFromPrice4,0.00) * ld.value , 0 ) )
else 0 end
,@loop2_loyalty_policy
from loyalty_policy_data ld -- with (index=ind_loyalty_policy_02)
            inner join #tbl_data t on t.insdate >= ld.fdateactive and t.insdate <= ld.tdateactive
where ld.loyalty_policy = @loop2_loyalty_policy 
and ld.tdateactive >= @from_rundate and ld.fdateactive <= @to_rundate
and t.dbupddate > @loop1_dbupddate  
and
    case when @tpdim1 is null then '' 
    else  
            case  @tpdim1 
                when 'STORE'            then t.store when 'BRAND' then t.brand  when 'CAT1' then t.cat1   when 'CAT2' then t.cat2   when 'CAT3' then t.cat3   when 'ITEM' then t.item    
                when 'CUSTGROUP'  then t.custgroup when 'CUSTGROUP2' then t.custgroup2 when 'CUSTGROUP3' then t.custgroup3
                when 'CUSTOMER'         then @customer
            else '' end
    end
    = case when @tpdim1 is null then '' else ld.dim1 end
and 
    case when @tpdim2 is null then '' 
    else  
            case  @tpdim2 
                when 'STORE'            then t.store when 'BRAND' then t.brand  when 'CAT1' then t.cat1   when 'CAT2' then t.cat2   when 'CAT3' then t.cat3   when 'ITEM' then t.item    
                when 'CUSTGROUP'  then t.custgroup when 'CUSTGROUP2' then t.custgroup2 when 'CUSTGROUP3' then t.custgroup3
                when 'CUSTOMER'         then @customer                     
            else '' end
    end
    = case when @tpdim2 is null then '' else ld.dim2 end
OPTION(RECOMPILE);

OPTION (RECOMPILE) here is not meant to alleviate parameter sniffing, but to allow the optimiser to inline the actual values of the parameters into the query. This gives the optimiser freedom to simplify the query logic.

Your query looks like a case of Dynamic Search Conditions, and I highly recommend reading that article by Erland Sommarskog.

Also, instead of

and
    case when @tpdim1 is null then '' 
    else  
            case  @tpdim1 
                when 'STORE'            then t.store when 'BRAND' then t.brand  when 'CAT1' then t.cat1   when 'CAT2' then t.cat2   when 'CAT3' then t.cat3   when 'ITEM' then t.item    
                when 'CUSTGROUP'  then t.custgroup when 'CUSTGROUP2' then t.custgroup2 when 'CUSTGROUP3' then t.custgroup3
                when 'CUSTOMER'         then @customer
            else '' end
    end
    = case when @tpdim1 is null then '' else ld.dim1 end

I'd write it a bit differently:

and
(
    @tpdim1 is null
    OR
    (
            ld.dim1 =
            case @tpdim1
                when 'STORE'      then t.store 
                when 'BRAND'      then t.brand  
                when 'CAT1'       then t.cat1   
                when 'CAT2'       then t.cat2   
                when 'CAT3'       then t.cat3   
                when 'ITEM'       then t.item    
                when 'CUSTGROUP'  then t.custgroup 
                when 'CUSTGROUP2' then t.custgroup2 
                when 'CUSTGROUP3' then t.custgroup3
                when 'CUSTOMER'   then @customer
                else ''
            end
    )
)

With OPTION (RECOMPILE), when @tpdim1 has the value CUSTOMER and @customer has the value EL0134366, the optimiser should transform this condition into simply

and
(
    ld.dim1 = N'EL0134366'
)

and then it would be able to use a suitable index, or at least estimate the number of rows far more accurately and make a better decision about the plan shape. With this option the plan is valid only for this specific value of the parameter.

Note that option (optimize for UNKNOWN) can't help here: it would still have to generate a generic plan that is valid for any possible value of the parameters.

Vladimir Baranov
  • I like the article you recommended, I actually have read it already and recommend it too in my posts :) But I really saw no change in the execution plan, I'd placed the option (recompile) at the right place. I'll see whether changing the criterio works. – George Menoutis Dec 07 '18 at 07:52
  • @GeorgeMenoutis, well, then there is something more complicated going on. I'm surprised that `option(recompile)` doesn't change the execution plan. In this case it means that the difference in performance between the query with variables and with literals is not due to using the literals, but somewhere deeper. – Vladimir Baranov Dec 07 '18 at 23:34
  • The OP is on build `10.0.2531.0`; this is 2008 SP1 and does not have the parameter embedding optimization enabled. It would need [at least CU5 on that branch](https://blogs.msdn.microsoft.com/grahamk/2009/11/18/changed-behaviour-of-option-recompile-syntax-in-sql-server-2008-sp1-cumulative-update-5/). – Martin Smith Dec 11 '18 at 13:47
  • I added a bunch of info, if you want check after "Edit 2" – George Menoutis Dec 11 '18 at 14:18
  • @MartinSmith, great catch. I didn't bother to look at the plan XML and verify the server version. I think you should post it as an answer, because it explains everything. The [Dynamic Search Conditions](http://www.sommarskog.se/dyn-search.html) article talks in the very beginning about the required version and how `option(recompile)` wasn't working properly originally. OP said that he read it, so I deduced there must be something else going on. – Vladimir Baranov Dec 11 '18 at 22:14
  • I totally agree with you. I've now read and understood the query plan, and deduced that what I need is to *disable* the auto-parameterization. As Sommarskog's article states, option(recompile) indeed does that... on later versions. @MartinSmith, as Vladimir notes, I consider this to be the answer; if you formulate one, I'll accept it. – George Menoutis Dec 12 '18 at 08:12

Generally speaking, a query with literal values is faster than the same query with proc parameters or local variables.

When a literal value is used, the optimizer builds a special plan just for that value (unless Forced Parameterization is turned on).

The optimizer can also produce a trivial plan or a simply parameterized plan, but neither happens in your case.

When you use a parameter, the optimizer creates a plan for the sniffed parameter value (this is parameter sniffing) and then reuses that plan.

OPTION (RECOMPILE) is one way to overcome this problem: it creates a plan for each distinct variable value, in order to keep the cardinality estimates accurate.

Therefore a query with literal values will generally be at least as fast.

Let me first state things that we tried and failed:

  • Avoiding parameter sniffing by using option (recompile) and option (optimize for (@var UNKNOWN))

  • This thread. The variables that seem to be the problem are actually local ones, not proc parameters.

These failed because, with due respect, your query is poorly written.

Do not use a cursor; it seems the cursor can be avoided in your case.

Post the complete proc with its parameters, because the logic that produces the value in @loop2_loyalty_policy etc. is not clear. That would help in giving a concrete suggestion on how to avoid the cursor.

case when @tpdim1 is null ...: this whole CASE logic could be computed and inserted into the temp table itself, so that the new column is used directly in the join. I hope my idea comes across.
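
For example (a sketch; #tbl_data2 and dim1key are made-up names, and only a few of the dimension branches are shown), the CASE can be evaluated once while loading the temp table:

```sql
-- Sketch: precompute the @tpdim1 dimension value as a real column,
-- so the main query joins/filters on a plain column instead of a CASE.
select t.*,
       case when @tpdim1 is null      then ''
            when @tpdim1 = 'STORE'    then t.store
            when @tpdim1 = 'BRAND'    then t.brand
            when @tpdim1 = 'CUSTOMER' then @customer
            else ''
       end as dim1key        -- hypothetical column name
into #tbl_data2              -- hypothetical table name
from #tbl_data t;

-- The main query can then use:  and ld.dim1 = t.dim1key
```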

1.The first (slow) query, on the index spool operation, has an estimated rows of ~9700 but actual rows of 3 million.

That comes from a poor cardinality estimate by the optimizer, caused by the way the join is written.

I am not sure whether this will definitely improve your query and its cardinality estimates, because I have not fully understood your query.

But changing the join condition often helps, for example:

Read carefully here: I am not sure what data is in the loyalty_policy column and in t.insdate, and it does not appear that you need such a complicated join as the one below.

If you really do need it, you could try altering the join condition like this:

from loyalty_policy_data ld with (nolock)
    inner join #tbl_data t
        on ld.loyalty_policy = @loop2_loyalty_policy
        and ld.tdateactive >= @from_rundate and ld.fdateactive <= @to_rundate
        and t.insdate >= ld.fdateactive and t.insdate <= ld.tdateactive
where t.dbupddate > @loop1_dbupddate

The main goal is to avoid the cursor.

KumarHarsh
  • You say many interesting things. However, they do not seem to be an answer to this. The main reason is that the literal value query already runs fine (check edit #2, I've updated the content). – George Menoutis Dec 11 '18 at 14:31