
I am developing a DWH on Oracle 11g. We have some big tables (250+ million rows), partitioned by value. Each partition is assigned to a different feeding source, and each partition is independent of the others, so they can be loaded and processed concurrently.

Data distribution is very uneven: we have partitions with millions of rows and partitions with no more than a hundred rows. I didn't choose the partitioning scheme, and in any case I can't change it.

Given the data volume, we must ensure that every partition always has up-to-date statistics, because if the subsequent processing steps don't have optimal access to the data, they take forever.

So for each concurrent ETL thread, we

  1. Truncate the partition
  2. Load data from staging area with

INSERT /*+ APPEND */ INTO big_table PARTITION (part1)
SELECT * FROM temp_table WHERE partition_column = 'PART1';

(this way we get a direct-path load and we don't lock the whole table)

  3. We gather statistics for the modified partition.
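Put together, one ETL thread's cycle for its own partition looks roughly like this (a sketch; schema, table, and partition names are placeholders taken from the examples above):

```sql
-- One ETL thread's cycle for its own partition (names are placeholders).
ALTER TABLE big_table TRUNCATE PARTITION part1;

-- Direct-path load: locks only the named partition, not the whole table.
INSERT /*+ APPEND */ INTO big_table PARTITION (part1)
SELECT * FROM temp_table WHERE partition_column = 'PART1';

-- A commit is required before the segment can be queried again
-- after a direct-path insert.
COMMIT;

-- Refresh statistics for the partition just loaded.
BEGIN
  dbms_stats.gather_table_stats(ownname  => 'MYSCHEMA',
                                tabname  => 'BIG_TABLE',
                                partname => 'PART1');
END;
/
```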

In the first stage of the project, we used the APPROX_GLOBAL AND PARTITION strategy and it worked like a charm:

 dbms_stats.gather_table_stats(ownname          => 'MYSCHEMA',
                               tabname          => 'BIG_TABLE',
                               partname         => 'PART1',
                               estimate_percent => 1,
                               granularity      => 'APPROX_GLOBAL AND PARTITION',
                               cascade          => dbms_stats.auto_cascade,
                               degree           => dbms_stats.auto_degree);

But we had a drawback: when we loaded a small partition, the APPROX_GLOBAL part was dominant (still a lot faster than GLOBAL), so for a small partition we had, e.g., 10 seconds of loading and 20 minutes of statistics gathering.

So we were advised to switch to the INCREMENTAL STATS feature of 11g: you don't specify the partition you have modified, you leave all parameters on auto, and Oracle does its magic, automatically understanding which partition(s) have been touched. And it actually works; we really sped up the small partitions. After turning on the feature, the call became

 dbms_stats.gather_table_stats(ownname          => 'MYSCHEMA',
                               tabname          => 'BIG_TABLE',
                               estimate_percent => dbms_stats.auto_sample_size,
                               granularity      => 'AUTO',
                               cascade          => dbms_stats.auto_cascade,
                               degree           => dbms_stats.auto_degree);

Notice that you no longer pass the partition, and you don't specify a sample percentage.

But we're having a drawback, maybe even worse than the previous one, and it is related to the high level of parallelism we have.

Let's say we have two big partitions that start at the same time; they will finish the load phase at almost the same time too.

  1. The first thread ends its insert statement, commits, and launches the stats gathering. The stats procedure notices there are two modified partitions (this is correct: one is full, and the second is truncated with a transaction in progress) and correctly updates the stats for both partitions.

  2. Eventually the second thread's load ends; it gathers the stats, sees all partitions already updated, and does nothing (this is NOT correct, because the second thread committed its data in the meantime).

The result is:

PARTITION NAME | LAST ANALYZED        | NUM ROWS | BLOCKS | SAMPLE SIZE
-----------------------------------------------------------------------
PART1          | 04-MAR-2015 15:40:42 | 805731   | 20314  | 805731
PART2          | 04-MAR-2015 15:41:48 | 0        | 16234  | (null)

and the consequence is that I occasionally incur suboptimal plans (which means killing the session, refreshing the stats manually, and launching the process again).
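A quick way to spot the broken state before it hurts a query is to look for analyzed partitions whose stats claim zero rows while their segments still hold blocks (a diagnostic sketch; the table name is a placeholder, the dictionary view is standard):

```sql
-- Partitions whose statistics say 0 rows but whose segments clearly hold data.
SELECT partition_name, last_analyzed, num_rows, blocks, sample_size
FROM   user_tab_partitions
WHERE  table_name = 'BIG_TABLE'
AND    num_rows   = 0
AND    blocks     > 0;
```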

I even tried putting an exclusive lock around the gathering, so that no more than one thread could gather stats on the same table at once, but nothing changed.

IMHO this is odd behaviour, because the stats procedure, the second time it is invoked, should check for the last commit on the second partition and should see that it's newer than the last stats gathering time. But that doesn't seem to happen.
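To see which partitions DBMS_STATS believes have changed, you can flush the in-memory DML monitoring data and inspect the modification tracking view (a diagnostic sketch; it assumes you have the privileges to run the flush, and the table name is a placeholder):

```sql
-- Push in-memory DML monitoring data into the dictionary...
BEGIN
  dbms_stats.flush_database_monitoring_info;
END;
/

-- ...then check which partitions Oracle thinks changed since the last gather.
SELECT table_name, partition_name, inserts, updates, deletes, truncated, timestamp
FROM   user_tab_modifications
WHERE  table_name = 'BIG_TABLE';
```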

Am I doing something wrong? Is it an Oracle bug? How can I guarantee that all stats are always up to date with the incremental stats feature turned on and a high level of concurrency?

Lalit Kumar B
Francesco
  • Take a look at `DBA_TAB_PARTITIONS` and `begin dbms_stats.flush_database_monitoring_info; end;`. There seems to be some dirty reads going on with table modification data - multiple sessions see each others uncommitted inserts and one session can clear that data for all sessions. I don't think this is specific to incremental statistics. It may help to insert and delete one dummy row right before calling `DBMS_STATS`, to help force gathering. I'd like to post more but I don't have enough time for a full answer right now. – Jon Heller Mar 05 '15 at 06:40
  • Unfortunately I have no DBA privileges and I am not even the table owner. The docs, btw, say that "Because the GATHER_*_STATS procedures internally flush monitoring information, it is not necessary to run this procedure before gathering the statistics." I am trying to gather only partition-level stats (which works) and to refresh global stats once in a while, to circumvent the issue until I can find a definitive solution. – Francesco Mar 05 '15 at 11:45
  • You should offer more details. When exactly does your insert start: before or after the insert of the session suspected of stealing the stats? Which of them finishes its insert first? How long do the commits take? How long does the "stealing" session's stats gathering take? Maybe the collection on the first partition takes place while the second session is committing? – Florin Ghita Mar 20 '15 at 07:46
  • The collect on first partition starts while the second is still inserting. – Francesco Mar 24 '15 at 08:39
  • Did you turn on the Incremental maintenance feature for the table? like `EXEC DBMS_STATS.SET_TABLE_PREFS('','BIG_TABLE','INCREMENTAL','TRUE');` – Anjan Biswas Mar 24 '15 at 21:41

6 Answers


I managed to reach a decent compromise with this procedure.

PROCEDURE gather_tb_partiz(
    p_tblname  IN VARCHAR2,
    p_partname IN VARCHAR2)
IS
  v_stale user_tab_statistics.stale_stats%TYPE;
BEGIN
  BEGIN
    SELECT stale_stats
    INTO   v_stale
    FROM   user_tab_statistics
    WHERE  table_name  = p_tblname
    AND    object_type = 'TABLE';
  EXCEPTION
    WHEN NO_DATA_FOUND THEN
      v_stale := 'YES';
  END;
  IF v_stale = 'YES' THEN
    dbms_stats.gather_table_stats(ownname     => 'MYSCHEMA',
                                  tabname     => p_tblname,
                                  partname    => p_partname,
                                  degree      => dbms_stats.auto_degree,
                                  granularity => 'APPROX_GLOBAL AND PARTITION');
  ELSE
    dbms_stats.gather_table_stats(ownname     => 'MYSCHEMA',
                                  tabname     => p_tblname,
                                  partname    => p_partname,
                                  degree      => dbms_stats.auto_degree,
                                  granularity => 'PARTITION');
  END IF;
END gather_tb_partiz;

At the end of each ETL run, if the number of added/deleted/modified rows is low enough not to mark the table as stale (10% by default, tunable with the STALE_PERCENT preference), I collect only partition statistics; otherwise I collect global and partition statistics.
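The 10% threshold is itself a per-table preference; a sketch of lowering it (the 5% value and the names are just illustrative):

```sql
-- Mark the table's stats stale after 5% of rows change, instead of the default 10%.
BEGIN
  dbms_stats.set_table_prefs('MYSCHEMA', 'BIG_TABLE', 'STALE_PERCENT', '5');
END;
/
```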

This keeps the ETL of small partitions fast, because no global statistics must be regathered, and keeps big partitions safe, because any subsequent query will have fresh statistics and will likely use an optimal plan.

Incremental stats is enabled anyway, so whenever the global statistics have to be recalculated it is pretty fast, because it aggregates the partition-level statistics instead of performing a full scan.

I am not sure whether, with incremental enabled, "APPROX_GLOBAL AND PARTITION" and "GLOBAL AND PARTITION" differ at all, because both incremental and approx do basically the same thing: aggregate stats and histograms without doing a full scan.

Francesco

Have you tried keeping incremental statistics on, but still explicitly naming a partition to analyze?

 dbms_stats.gather_table_stats(ownname  => 'MYSCHEMA',
                               tabname  => 'BIG_TABLE',
                               partname => 'PART1',
                               degree   => dbms_stats.auto_degree);
Kombajn zbożowy
  • I tried, and, considering the time spent, it seems that this behaves like "GLOBAL AND PARTITION". But I will double-check. – Francesco Mar 24 '15 at 06:52
  • No, I was wrong. Leaving granularity unspecified defaults it to AUTO, which overrides the presence of partname. It collects stats for all touched partitions even if partname is specified; I just verified it. – Francesco Mar 24 '15 at 08:23

For your table, stale (yesterday's) global stats are not as harmful as completely invalid partition stats (0 rows). I can propose two slightly different approaches that we use:

  • Have a separate GLOBAL stats gathering executed by your ETL tool right after all partitions are loaded. If it takes too long, play with estimate_percent, since dbms_stats.auto_sample_size will likely sample more than 1%.
  • Gather the global (as well as all other stale) stats in a separate database job run later in the day, after all data is loaded into the DW.

The key point is that stale statistics which differ only slightly from fresh ones are almost as good as fresh ones. Statistics that show 0 rows, however, will kill any query.
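The separate global-only gather from the first bullet could look like this (a sketch; the 1% sample is just an example to bound the runtime, and the names are placeholders):

```sql
-- Refresh only the global (table-level) statistics once all partitions are loaded.
BEGIN
  dbms_stats.gather_table_stats(ownname          => 'MYSCHEMA',
                                tabname          => 'BIG_TABLE',
                                granularity      => 'GLOBAL',
                                estimate_percent => 1);
END;
/
```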

ms32035
  • I adopted a similar solution. After each ETL load, I check whether the global stats are stale; if they are, I gather partition and global stats, otherwise only partition stats. This gives nice performance, because when I load small partitions, which don't alter more than 10% of the total row number, there's no need to update the global stats as well. – Francesco Mar 24 '15 at 08:12

Considering what you are trying to achieve, you need to gather stats for all partitions at specific intervals of time, not at the end of the process that loads each partition. This could be challenging if the table is live, with constant data loads happening round the clock, but since these are large DW tables I really doubt that's the case. So the best bet would be to collect stats at the end of loading all partitions: this ensures that statistics are collected for partitions where data has changed or statistics are missing, and updates the global statistics based on the partition-level statistics and synopses.

However, to do so you need to turn on the incremental feature for the table (available since 11gR1).

EXEC DBMS_STATS.SET_TABLE_PREFS('<Owner>','BIG_TABLE','INCREMENTAL','TRUE');

At the end of every load, gather table statistics using the GATHER_TABLE_STATS procedure. You don't need to specify the partition name. Also, do not specify the granularity parameter.

EXEC DBMS_STATS.GATHER_TABLE_STATS('<Owner>','BIG_TABLE');

Anjan Biswas
  • I cannot follow your solution. As I said in the original post, every partition of each table is independent of the others, and we have a high degree of parallelism. As soon as partitionA is loaded into table1, the data starts to be processed and moves on to table2 (which has the same partitionA). Meanwhile, we could receive partitionB, which has to be loaded and processed concurrently in the same way. I cannot wait until table1 is fully loaded and analyzed; I would waste too much time. – Francesco Mar 25 '15 at 14:34
  • Well, I think you should consider collecting stats only when they go stale. Collecting stats on every partition after each load seems a bit of an overkill to me, unless you are on 9i or something, or are trying to compensate for poorly written code that processes the data. – Anjan Biswas Mar 26 '15 at 05:58

Kindly check whether you have used DBMS_STATS to set the table preference to gather incremental statistics. This Oracle blog explains that statistics will be gathered after each row affected:

Incremental statistics maintenance needs to gather statistics on any partition that will change the global or table level statistics. For instance, the min or max value for a column could change after just one row is inserted or updated in the table

BEGIN
  DBMS_STATS.SET_TABLE_PREFS('MYSCHEMA', 'BIG_TABLE', 'INCREMENTAL', 'TRUE');
END;
/
psaraj12

I'm a bit rusty on this, so first of all a question: did you try serializing partition loading? If so, how long did it take, and did the statistics come out correct? Note that since loading time is so much smaller than statistics gathering, I guess this could also act as a temporary workaround.

The APPEND hint affects redo generation, meaning the transaction logs only minimal information, so statistics may not take the new data into account: http://oracle-base.com/articles/misc/append-hint.php

Thinking out loud: since a direct-path insert appends rows at the end of the partition and only updates metadata at the end, the already-running thread gathering statistics could have read non-updated (stale) data. So it may not be a bug, and locking threads would accomplish nothing.

You could test this behaviour by temporarily switching your table/partition to LOGGING, for instance, and seeing how it works (slower, of course, but it's a test). Can you do that?

EDIT: incremental stats should work anyway, even with parallel statistics gathering disabled, since it relies on the incremental values no matter how they were collected: https://blogs.oracle.com/optimizer/entry/incremental_statistics_maintenance_what_statistics
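You can also verify whether the incremental preference is actually active for the table (a sketch; schema and table names are placeholders):

```sql
-- Returns 'TRUE' when incremental statistics maintenance is enabled for the table.
SELECT dbms_stats.get_prefs('INCREMENTAL', 'MYSCHEMA', 'BIG_TABLE') AS incremental
FROM   dual;
```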

  • I *in some way* tried serializing partition loading. I was doing (wrongly) an APPEND without specifying the partition; this caused the whole table to be locked, so no more than one thread could load data into the same table at the same time. But since we have processes of 16+ hours (with 10 concurrent threads), serializing them is not a viable option. We have quite strict deadlines for our processes. About LOGGING: from what I can see, all the tables are already in LOGGING mode, and I cannot change that in any way. – Francesco Mar 04 '15 at 18:09