
I have a few queries which were running fine for a long time on 12c, but after the upgrade to 18c they have started to take exponentially more time. The DB parameters are the same for both the 12c and the 18c database. One thing I could see in OEM was that there is a large difference between the estimated and actual rows:

[Screenshot: OEM execution plan showing the gap between estimated and actual rows]

I have gathered stats on the tables using the call below, but the issue still persists.

dbms_stats.gather_table_stats(
    ownname          => USER,
    tabname          => 'XYZ',
    method_opt       => 'for all indexed columns size skewonly',
    granularity      => 'ALL',
    degree           => 8,
    cascade          => true,
    estimate_percent => 15);

Both databases are running against the same data in the tables.

Infinite
  • Maybe the access plan is different? Maybe the data distribution has changed, so the statistics are different now. You can start tuning some queries to understand what the issue is: is it reading the wrong index? Doing a full table scan? How is the query rewritten? Please show us a query example with its query plan and its data; we need more information in order to help you. – Moudiz Dec 04 '19 at 14:46
    Why `for all indexed columns size skewonly`? Any columns used in joins or filters that don’t happen to be indexed (I’m not saying they should be) won’t get stats gathered on them, so it seems an odd choice. – William Robertson Dec 04 '19 at 15:06
  • Do you have diagnostics + tuning pack license? If so, you can run the queries through the SQL Tuning Advisor. Also, check the application build to see if all indexes are present that are supposed to be there. Could also be that stats were fixed or different in 12c. Usually, when this happens, it is an execution plan change that is at the root cause [e.g. missing indexes, bad/different stats, differences in the optimizer, ...] – Roger Cornejo Dec 04 '19 at 15:22

2 Answers


In the good old days an advanced tuning method was invented, called Tuning by Cardinality Feedback. The main idea of its author (Wolfgang Breitling) was to compare the cardinality (number of rows) estimated by the optimizer with the actual number of rows returned from the data source. If those figures are equal or close together, everything is fine; a big difference is a sign of trouble. For example, for an estimated count of 1 an index access is fine, but index access is a catastrophic scenario for an actual count of 1M rows.

This method was so successful that it was implemented in the OEM query overview you are using.

See the 4th and 7th columns in your execution plan: estimated rows and actual rows. Go bottom-up and check the differences; the 3rd line from the bottom already shows a massive difference of 1 vs. 1M, and this is your first problem.

You must check the access and filter predicates for this line and investigate why this cardinality mismatch was produced (See here how to get the full explain plan with the predicate information).
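One common way to get the full plan with both estimated and actual row counts plus the predicate section is `DBMS_XPLAN.DISPLAY_CURSOR` with row-source statistics enabled. A minimal sketch (the sample query, bind, and table name are placeholders, not the poster's actual objects):

```sql
-- Run the statement once with row-source statistics collected.
SELECT /*+ GATHER_PLAN_STATISTICS */ *
FROM   xyz
WHERE  some_col = :b1;   -- placeholder query

-- Then fetch the plan of the last statement in this session.
-- 'ALLSTATS LAST' prints E-Rows (estimated) next to A-Rows (actual)
-- for the last execution, plus the access/filter predicate section.
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```

The line where E-Rows and A-Rows diverge by orders of magnitude is the one whose predicates and statistics deserve the closest look.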

In most cases the cause is missing or wrong optimizer object statistics.

It could of course be that the cardinality estimation algorithm changed in the upgrade, and you must adjust statistics that were fine in the former version. Let us know!

Marmite Bomber

The long-running procedure had multiple queries; the upgrade affected a couple of them.

The steps below resolved the issue.

1) Gather Latest stats

Initially, post upgrade, I had gathered stats with:

dbms_stats.gather_table_stats(
    ownname          => USER,
    tabname          => 'XYZ',
    method_opt       => 'for all indexed columns size skewonly',
    granularity      => 'ALL',
    degree           => 8,
    cascade          => true,
    estimate_percent => 15);

From William's comment I explored the method_opt parameter further. Choosing 'for all indexed columns size skewonly' is not the right strategy here, so instead I went with Oracle's default 'FOR ALL COLUMNS SIZE AUTO' and also removed estimate_percent so that it picks the default:

dbms_stats.gather_table_stats(
    ownname     => USER,
    tabname     => 'XYZ',
    granularity => 'ALL',
    degree      => 8,
    cascade     => true);

2) Another query used an inline view that was embedded in a lengthy/complex query, and it showed a lag while reading from temp space. I am not sure whether this is due to some parameter change or to an algorithm change in 18c, but to bypass it I broke the query into smaller chunks: as a first step I insert the inline view's data into a global temporary table (GTT), and then the main query reads from the GTT instead of the inline view. This solved the slow-running issue.
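The GTT workaround can be sketched as follows; all object and column names here are illustrative, not the actual ones from the procedure:

```sql
-- One-time DDL: a global temporary table matching the inline view's shape.
-- ON COMMIT PRESERVE ROWS keeps the rows for the whole session.
CREATE GLOBAL TEMPORARY TABLE gtt_inline_rows (
    id         NUMBER,
    some_value VARCHAR2(100)
) ON COMMIT PRESERVE ROWS;

-- Step 1: materialise the former inline view once per run.
INSERT INTO gtt_inline_rows (id, some_value)
SELECT iv.id, iv.some_value
FROM   (SELECT /* former inline view text goes here */ id, some_value
        FROM   some_source) iv;

-- Step 2: the big query now joins to the GTT instead of
-- re-evaluating the inline view inside the complex statement.
SELECT bt.*, g.some_value
FROM   big_table bt
JOIN   gtt_inline_rows g ON g.id = bt.id;
```

Each session sees only its own rows in the GTT, so the procedure stays safe to run concurrently.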

If you see query slowness or plan flips after an Oracle upgrade, two broad steps I can advise are:

a) Check the parameter settings for any obvious changes in the DB that might cause the optimizer to work differently.

b) Gather complete table/schema stats.
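For step a), one way to spot such changes is to list the parameters that differ from their defaults on both databases and diff the output (a sketch; requires access to V$PARAMETER):

```sql
-- Parameters explicitly changed from their defaults; run this on both
-- the 12c and the 18c instance and compare the results.
SELECT name, value
FROM   v$parameter
WHERE  isdefault = 'FALSE'
ORDER  BY name;

-- The optimizer feature level is worth a separate look: if it is pinned
-- to an older version on one side, the plans can legitimately differ.
SELECT value
FROM   v$parameter
WHERE  name = 'optimizer_features_enable';
```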

Infinite