Oracle DBA – A lifelong learning experience

Speeding up the gathering of incremental stats on partitioned tables

Posted by John Hallas on January 4, 2012

11g introduced incremental global stats, and the table WRI$_OPTSTAT_SYNOPSIS$ contains the synopsis data used to maintain the global statistics. This table can grow very large, and Robin Moffat has produced a good blog post about the space issues - note that we both worked at the same site, so it is the same DW being discussed by both of us.

Apart from the space usage that Robin refers to, another worrying aspect is the time taken when gathering stats on a partitioned table. Most of that time is spent running a delete statement:

DELETE FROM SYS.WRI$_OPTSTAT_SYNOPSIS$
WHERE SYNOPSIS# IN
      (SELECT H.SYNOPSIS#
       FROM   SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$ H
       WHERE  H.BO# = :B1
       AND    H.GROUP# NOT IN
              (SELECT T.OBJ# * 2
               FROM   SYS.TABPART$ T
               WHERE  T.BO# = :B1
               UNION ALL
               SELECT T.OBJ# * 2
               FROM   SYS.TABCOMPART$ T
               WHERE  T.BO# = :B1))

I will demonstrate the problem and a simple solution, and you will be able to see the significant performance improvement achieved.

I first of all enable tracing against a gather stats job using the standard database configuration.
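For reference, a minimal sketch of how such a trace might be captured. The event level and tracefile identifier here are my assumptions for illustration, not necessarily what was used on this system:

```sql
-- Sketch: enable extended SQL trace (event 10046; level 8 includes wait
-- events) in the session that runs the gather, then switch it off.
ALTER SESSION SET tracefile_identifier = 'incr_stats_test';  -- assumed name
ALTER SESSION SET events '10046 trace name context forever, level 8';

BEGIN
  dbms_stats.gather_table_stats('BIA_RTL','DWB_RTL_SL_RETRN_LINE_ITEM',
    granularity => 'AUTO', cascade => FALSE,
    degree      => dbms_stats.auto_degree);
END;
/

ALTER SESSION SET events '10046 trace name context off';
```

The raw trace file can then be formatted with tkprof, which produces the call/elapsed/wait summaries shown here.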

SQL ID: 7q4x1r46xgq0a
Plan Hash: 0
BEGIN dbms_stats.gather_table_stats('BIA_RTL','DWB_RTL_SL_RETRN_LINE_ITEM',
  granularity=>'AUTO',cascade=>false,degree=>dbms_stats.auto_degree); END;

OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse      162      0.11       0.05          0         27         50           0
Execute   1054     27.84    1023.64     254884     553622       1562         512   ***************************
Fetch      868      3.81       4.18         28      97431          0         928
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     2084     31.76    1027.88     254912     651080       1612        1440

Misses in library cache during parse: 84
Misses in library cache during execute: 58

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                    254912        1.01        995.82
  unspecified wait event                     254912        0.04          7.09
SQL ID: 1zdkwckmyutrd
Plan Hash: 413388401
DELETE FROM SYS.WRI$_OPTSTAT_SYNOPSIS$
WHERE
 SYNOPSIS# IN (SELECT H.SYNOPSIS# FROM SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$ H
  WHERE H.BO# = :B1 AND H.GROUP# NOT IN (SELECT T.OBJ# * 2 FROM SYS.TABPART$
  T WHERE T.BO# = :B1 UNION ALL SELECT T.OBJ# * 2 FROM SYS.TABCOMPART$ T
  WHERE T.BO# = :B1 ))
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.00          0          0          0           0
Execute      1     26.50    1022.20     254877     462319          0           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2     26.51    1022.20     254877     462319          0           0

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS   (recursive depth: 1)

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  DELETE  WRI$_OPTSTAT_SYNOPSIS$ (cr=462319 pr=254877 pw=0 time=0 us)
      0   FILTER  (cr=462319 pr=254877 pw=0 time=0 us)
      0    HASH JOIN  (cr=462319 pr=254877 pw=0 time=0 us cost=31972 size=519892260 card=17329742)
  49312     TABLE ACCESS FULL WRI$_OPTSTAT_SYNOPSIS_HEAD$ (cr=819 pr=0 pw=0 time=0 us cost=311 size=1180632 card=49193)
16953751     INDEX FULL SCAN I_WRI$_OPTSTAT_SYNOPSIS (cr=461500 pr=254877 pw=0 time=86847 us cost=31546 size=103979388 card=17329898)(object id 478)
      0    UNION-ALL  (cr=0 pr=0 pw=0 time=0 us)
      0     TABLE ACCESS BY INDEX ROWID TABPART$ (cr=0 pr=0 pw=0 time=0 us cost=3 size=12 card=1)
      0      INDEX RANGE SCAN I_TABPART_BOPART$ (cr=0 pr=0 pw=0 time=0 us cost=2 size=0 card=1)(object id 566)
      0     TABLE ACCESS BY INDEX ROWID TABCOMPART$ (cr=0 pr=0 pw=0 time=0 us cost=9 size=48 card=4)
      0      INDEX RANGE SCAN I_TABCOMPART_BOPART$ (cr=0 pr=0 pw=0 time=0 us cost=2 size=0 card=368)(object id 588)
Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                    254877        1.01        995.53
  unspecified wait event                     254877        0.01          7.05
********************************************************************************

The main highlights of this trace are the 1022 seconds taken to run the delete in sql_id 1zdkwckmyutrd, and that the big wait event is db file sequential read - 995 seconds.

We now alter the WRI$_OPTSTAT_SYNOPSIS$ table so that it uses parallelism and note the difference.
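The change itself is a one-line alter, run as SYS. Leaving the degree at the default, as here, lets the optimizer choose how many parallel servers to use; a fixed degree is an alternative if tighter control is wanted:

```sql
-- Set a default degree of parallelism on the synopsis table.
ALTER TABLE SYS.WRI$_OPTSTAT_SYNOPSIS$ PARALLEL;
```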

SQL ID: 7q4x1r46xgq0a
Plan Hash: 0
BEGIN dbms_stats.gather_table_stats('BIA_RTL','DWB_RTL_SL_RETRN_LINE_ITEM',
  granularity=>'AUTO',cascade=>false,degree=>dbms_stats.auto_degree); END;
 
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute    368      9.32      38.21          1    1055398          0           0   ***************************
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      369      9.32      38.21          1    1055398          0           0

Misses in library cache during parse: 0

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  PX Deq: Msg Fragment                            1        0.00          0.00
  cursor: pin S wait on X                         5        0.01          0.07
  PX Deq: Execution Msg                        9272        0.03         20.49
  db file sequential read                         1        0.00          0.00
  unspecified wait event                          1        0.07          0.07
  resmgr:cpu quantum                             23        0.00          0.05
  PX qref latch                                   8        0.00          0.00
 
SQL ID: 1zdkwckmyutrd
Plan Hash: 2668798483
DELETE FROM SYS.WRI$_OPTSTAT_SYNOPSIS$
WHERE
 SYNOPSIS# IN (SELECT H.SYNOPSIS# FROM SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$ H
  WHERE H.BO# = :B1 AND H.GROUP# NOT IN (SELECT T.OBJ# * 2 FROM SYS.TABPART$
  T WHERE T.BO# = :B1 UNION ALL SELECT T.OBJ# * 2 FROM SYS.TABCOMPART$ T
  WHERE T.BO# = :B1 ))
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.13       8.63          0         11          0           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.13       8.63          0         11          0           0

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS   (recursive depth: 1)

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  DELETE  WRI$_OPTSTAT_SYNOPSIS$ (cr=11 pr=0 pw=0 time=0 us)
      0   FILTER  (cr=11 pr=0 pw=0 time=0 us)
      0    PX COORDINATOR  (cr=11 pr=0 pw=0 time=0 us)
      0     PX SEND QC (RANDOM) :TQ10001 (cr=0 pr=0 pw=0 time=0 us cost=4809 size=519892260 card=17329742)
      0      HASH JOIN  (cr=0 pr=0 pw=0 time=0 us cost=4809 size=519892260 card=17329742)
      0       PX RECEIVE  (cr=0 pr=0 pw=0 time=0 us cost=11 size=1180632 card=49193)
      0        PX SEND BROADCAST :TQ10000 (cr=0 pr=0 pw=0 time=0 us cost=11 size=1180632 card=49193)
      0         PX BLOCK ITERATOR (cr=0 pr=0 pw=0 time=0 us cost=11 size=1180632 card=49193)
      0          TABLE ACCESS FULL WRI$_OPTSTAT_SYNOPSIS_HEAD$ (cr=0 pr=0 pw=0 time=0 us cost=11 size=1180632 card=49193)
      0       PX BLOCK ITERATOR (cr=0 pr=0 pw=0 time=0 us cost=4795 size=103979388 card=17329898)
      0        TABLE ACCESS FULL WRI$_OPTSTAT_SYNOPSIS$ (cr=0 pr=0 pw=0 time=0 us cost=4795 size=103979388 card=17329898)
      0    UNION-ALL  (cr=0 pr=0 pw=0 time=0 us)
      0     TABLE ACCESS BY INDEX ROWID TABPART$ (cr=0 pr=0 pw=0 time=0 us cost=3 size=12 card=1)
      0      INDEX RANGE SCAN I_TABPART_BOPART$ (cr=0 pr=0 pw=0 time=0 us cost=2 size=0 card=1)(object id 566)
      0     TABLE ACCESS BY INDEX ROWID TABCOMPART$ (cr=0 pr=0 pw=0 time=0 us cost=9 size=48 card=4)
      0      INDEX RANGE SCAN I_TABCOMPART_BOPART$ (cr=0 pr=0 pw=0 time=0 us cost=2 size=0 card=368)(object id 588)
Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  reliable message                                2        0.00          0.00
  enq: KO - fast object checkpoint                1        0.00          0.00
  PX Deq Credit: send blkd                      239        0.00          0.04
  PX Deq: Join ACK                                8        0.00          0.00
  os thread startup                              32        0.06          1.87
  PX Deq: Parse Reply                            33        0.00          0.04
  PX qref latch                                   8        0.00          0.00
  PX Deq Credit: need buffer                      7        0.00          0.00
  PX Deq: Execute Reply                         696        0.19          6.40
  resmgr:cpu quantum                              1        0.05          0.05
  PX Deq: Table Q Normal                         11        0.00          0.00
  PX Deq: Signal ACK RSG                         40        0.00          0.00
  PX Deq: Signal ACK EXT                         21        0.00          0.00
  latch: session allocation                      12        0.00          0.00
********************************************************************************

I think 1023 seconds down to 38 seconds is a benefit worth having for the overhead of running an alter table command. In production, where there is much more data, the difference will be even more significant.

There is even more to come once this database can be upgraded to 11.2.0.2 (currently 11.1.0.7), because at that release the table WRI$_OPTSTAT_SYNOPSIS$ becomes partitioned.

The WRI$_OPTSTAT_SYNOPSIS$ table also gains two extra columns to enable the partitioning. Does this mean that an upgrade from 11.1.0.7 to 11.2.0.2+ would require all tables that have incremental stats to be subject to a full re-gathering of statistics, or is there some method to migrate/convert the statistics between versions? That is my next piece of research.
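As a starting point for that research, the data dictionary can confirm what changes after the upgrade. A sketch using standard DBA views (nothing release-specific assumed):

```sql
-- Is the synopsis table partitioned at this release?
SELECT partitioned
FROM   dba_tables
WHERE  owner = 'SYS'
AND    table_name = 'WRI$_OPTSTAT_SYNOPSIS$';

-- Which columns does it now carry?
SELECT column_name
FROM   dba_tab_columns
WHERE  owner = 'SYS'
AND    table_name = 'WRI$_OPTSTAT_SYNOPSIS$'
ORDER  BY column_id;
```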


13 Responses to “Speeding up the gathering of incremental stats on partitioned tables”

  1. oracledoug said

    Thanks for this – very interesting and discusses a problem we encountered too, and on 11.2.0.1 if I recall correctly

  2. rnm1978 said

    Nice one John!

  3. G.P Singh said

    Hi,

    What degree did you use with WRI$_OPTSTAT_SYNOPSIS$?

    Thanks
    GP Singh

    • John Hallas said

      I deliberately left it vague in my post as to what degree of parallelism was used, as it can be so variable per site.
      This was a pre-production server with 16 CPUs, and I set the table to have a default degree of parallelism:

      ALTER TABLE WRI$_OPTSTAT_SYNOPSIS$ PARALLEL

      In my case, because it had a lot of parallel server processes available, it used 96 threads; that might not be wanted in a production environment.
      I would experiment and determine the benefits gained from various degrees. On my system I might start with 8, then go to 16 and then 32. I don’t think I would ever have got to 96, but the optimizer obviously realised that capacity was available.
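      For anyone who prefers fixed degrees to the default, a sketch of that experiment:

      ```sql
      -- Try an explicit degree, re-run the gather and compare timings,
      -- then repeat with 16, 32 and so on.
      ALTER TABLE SYS.WRI$_OPTSTAT_SYNOPSIS$ PARALLEL 8;

      -- Check the current setting:
      SELECT degree
      FROM   dba_tables
      WHERE  owner = 'SYS'
      AND    table_name = 'WRI$_OPTSTAT_SYNOPSIS$';
      ```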

      Please let me know of any findings you have regarding the reduction in processing time.

  4. G.P Singh said

    Thanks
    G Singh

  5. Hi John,

    Do you know if this is technically supported or not?

    Cheers, Doug

    • John Hallas said

      We asked Oracle the following question via an SR

      We are encountering major performance issues when gathering incremental statistics on a large composite partitioned table in our
      data warehouse environment. Following the load of a day's data, which updates only 8 out of 3000 subpartitions, the stats gathering
      process takes more than three hours.

      On investigation it was noted that a significant proportion of this time was spent maintaining the WRI$_OPTSTAT* tables, and
      that this process was not using parallelism. We performed some tests in our performance test environment, and without parallelism
      the process took 32 minutes. When we enabled parallelism on these two tables, the process completed in less than one minute.
      Therefore, our question is: are there any potential issues/side-effects with enabling parallelism on these two tables permanently?
      The SR response was:

      I don't find much information on setting parallelism with the WRI$_OPTSTAT tables. However, I did an SR search and see no
      problems reported where parallelism was set on these tables.

      We have taken that as acceptance and support of what we are doing and are implementing it on a couple of production systems that use incremental stats.

      How anybody interprets the above statements is entirely up to them, and I do not take any liability or responsibility for anything that may come of this (bar superior performance, in which case I accept all the plaudits).

      John

  6. [...] http://jhdba.wordpress.com/2012/01/04/speeding-up-the-gathering-of-incremental-stats-on-partitioned-… [...]

  7. Randolf Geist on 11g Incremental Statistics…

    Well it wasn’t the post I planned to return to technical matters with. Lots of readers here have asked me when I’m going to get round to writing about 11g Incremental Statistics as part of the stats series. Although Incrementals are on my To Do list,…

  8. Greg Rahn said

    “Does this mean that an upgrade from 11.1.0.7 to 11.2.0.2+ would require all tables that have incremental stats to be subject to a full re-gathering of statistics, or is there some method to migrate/convert the statistics between versions? ”

    No, the upgrade script takes care of it.

  9. Donatello Settembrino said

    Hi John,
    if I do a describe of the table, I see that the column SYNOPSIS#
    does not exist...

    SQL> desc SYS.WRI$_OPTSTAT_SYNOPSIS$

    Name       Null?    Type
    ---------- -------- ------
    BO#        NOT NULL NUMBER
    GROUP#     NOT NULL NUMBER
    INTCOL#    NOT NULL NUMBER
    HASHVALUE  NOT NULL NUMBER

    Am I missing something?

    thanks

  10. Recurring Conversations – Incremental Statistics (Part 3)

    As Connor pointed out in his comment on the last post, Incremental Statistics might still turn out to be a bit trickier than you thought once you start using them on many or very large tables, even once you’ve got the basics right. Which is why t…
