Oracle DBA – A lifelong learning experience

Purging statistics from the SYSAUX tablespace

Posted by John Hallas on May 19, 2009

Update July 2014 – I have added a parallel post to this which may help when trying to manage the growth of the SYSAUX tablespace

http://jhdba.wordpress.com/2014/07/09/tidying-up-sysaux-removing-old-snapshots-which-you-didnt-know-existed/

 

Whenever statistics in the dictionary are modified, old versions of the statistics are saved automatically so that they can be restored later. The old statistics are purged automatically at regular intervals, based on the statistics history retention setting and the time of the most recent statistics gathering performed in the system. Retention is configurable using the ALTER_STATS_HISTORY_RETENTION procedure; the default value is 31 days.
That is normally not too big an issue, but in our Peoplesoft environments we run a lot of gather_stats jobs, and if the retention period is not managed the SYSAUX tablespace can grow very large. In one of our systems the SYSAUX tablespace was 37Gb, with over 32Gb consisting of the stats tables and associated indexes. This blog entry provides the scripts to diagnose and correct excessive tablespace growth due to retained statistics.

 set linesize 120
set pagesize 100

COLUMN "Item" FORMAT A25
COLUMN "Space Used (GB)" FORMAT 999.99
COLUMN "Schema" FORMAT A25
COLUMN "Move Procedure" FORMAT A40

SELECT	occupant_name "Item",
	space_usage_kbytes/1048576 "Space Used (GB)",
	schema_name "Schema",
	move_procedure "Move Procedure"
FROM v$sysaux_occupants
ORDER BY 1
/
Item                      Space Used (GB) Schema                    Move Procedure
------------------------- --------------- ------------------------- ----------------------------------------
AO                                    .00 SYS                       DBMS_AW.MOVE_AWMETA
EM                                    .08 SYSMAN                    emd_maintenance.move_em_tblspc
EM_MONITORING_USER                    .00 DBSNMP
EXPRESSION_FILTER                     .00 EXFSYS
JOB_SCHEDULER                         .00 SYS
LOGMNR                                .01 SYSTEM                    SYS.DBMS_LOGMNR_D.SET_TABLESPACE
LOGSTDBY                              .00 SYSTEM                    SYS.DBMS_LOGSTDBY.SET_TABLESPACE
ODM                                   .00 DMSYS                     MOVE_ODM
ORDIM                                 .00 ORDSYS
ORDIM/PLUGINS                         .00 ORDPLUGINS
ORDIM/SQLMM                           .00 SI_INFORMTN_SCHEMA
SDO                                   .00 MDSYS                     MDSYS.MOVE_SDO
SM/ADVISOR                            .02 SYS
SM/AWR                                .15 SYS
SM/OPTSTAT                          11.44 SYS
SM/OTHER                              .02 SYS
STATSPACK                             .00 PERFSTAT
STREAMS                               .00 SYS
TEXT                                  .00 CTXSYS                    DRI_MOVE_CTXSYS
TSM                                   .00 TSMSYS
ULTRASEARCH                           .00 WKSYS                     MOVE_WK
ULTRASEARCH_DEMO_USER                 .00 WK_TEST                   MOVE_WK
WM                                    .00 WMSYS                     DBMS_WM.move_proc
XDB                                   .00 XDB                       XDB.DBMS_XDB.MOVEXDB_TABLESPACE
XSAMD                                 .00 OLAPSYS                   DBMS_AMD.Move_OLAP_Catalog
XSOQHIST                              .00 SYS                       DBMS_XSOQ.OlapiMoveProc

How long old stats are kept

 select dbms_stats.get_stats_history_retention from dual;

Set retention of old stats to 10 days

 exec dbms_stats.alter_stats_history_retention(10);

Purge stats older than 10 days. It is best to do this in stages if there is a lot of data (SYSDATE-30, SYSDATE-25, etc.)

 exec DBMS_STATS.PURGE_STATS(SYSDATE-10);
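The staged approach above can be sketched as a small PL/SQL loop. This is only a sketch of the idea – the 5-day step and the 30-to-10-day range are example values to adjust for your own data volumes:

```sql
-- Purge in 5-day steps from 30 days old down to 10 days old,
-- so each PURGE_STATS call deletes a manageable slice of history.
BEGIN
  FOR d IN REVERSE 2..6 LOOP          -- d*5 = 30, 25, 20, 15, 10 days
    DBMS_STATS.PURGE_STATS(SYSDATE - d * 5);
  END LOOP;
END;
/
```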

Show the oldest timestamp to which statistics can be restored, i.e. what has not yet been purged

 select dbms_stats.get_stats_history_availability from dual;

Show how big the tables are, ready for a rebuild after the stats have been purged

col Mb form 9,999,999
col SEGMENT_NAME form a40
col SEGMENT_TYPE form a6
set lines 120
select sum(bytes/1024/1024) Mb, segment_name,segment_type from dba_segments
where  tablespace_name = 'SYSAUX'
and segment_name like 'WRI$_OPTSTAT%'
and segment_type='TABLE'
group by segment_name,segment_type order by 1 asc
/
        MB SEGMENT_NAME                             SEGMEN
---------- ---------------------------------------- ------
         0 WRI$_OPTSTAT_OPR                         TABLE
         0 WRI$_OPTSTAT_AUX_HISTORY                 TABLE
        88 WRI$_OPTSTAT_TAB_HISTORY                 TABLE
       126 WRI$_OPTSTAT_IND_HISTORY                 TABLE
       158 WRI$_OPTSTAT_HISTGRM_HISTORY             TABLE
     4,482 WRI$_OPTSTAT_HISTHEAD_HISTORY            TABLE

Show how big the indexes are, ready for a rebuild after the stats have been purged

col Mb form 9,999,999
col SEGMENT_NAME form a40
col SEGMENT_TYPE form a6
set lines 120
select sum(bytes/1024/1024) Mb, segment_name,segment_type from dba_segments
where  tablespace_name = 'SYSAUX'
and segment_name like '%OPT%'
and segment_type='INDEX'
group by segment_name,segment_type order by 1 asc
/
        MB SEGMENT_NAME                             SEGMEN
---------- ---------------------------------------- ------
         0 WRH$_OPTIMIZER_ENV_PK                    INDEX
         0 I_WRI$_OPTSTAT_OPR_STIME                 INDEX
         0 I_WRI$_OPTSTAT_AUX_ST                    INDEX
        88 I_WRI$_OPTSTAT_TAB_ST                    INDEX
       105 I_WRI$_OPTSTAT_IND_ST                    INDEX
       105 I_WRI$_OPTSTAT_H_ST                      INDEX
       195 I_WRI$_OPTSTAT_TAB_OBJ#_ST               INDEX
       213 I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST           INDEX
       214 I_WRI$_OPTSTAT_IND_OBJ#_ST               INDEX
     2,055 I_WRI$_OPTSTAT_HH_ST                     INDEX
     3,883 I_WRI$_OPTSTAT_HH_OBJ_ICOL_ST            INDEX

Note that you cannot enable row movement and shrink the tables, because the indexes are function-based:

 alter table WRI$_OPTSTAT_IND_HISTORY enable row movement;
alter table WRI$_OPTSTAT_IND_HISTORY shrink space;
*
ERROR at line 1:
ORA-10631: SHRINK clause should not be specified for this object

 

Script to generate the table move statements:

select 'alter table '||segment_name||'  move tablespace SYSAUX;' from dba_segments where tablespace_name = 'SYSAUX'
and segment_name like '%OPT%' and segment_type='TABLE'
/

Run the table move commands – note that this will cause any gather_stats jobs running at the time to fail

alter table WRI$_OPTSTAT_TAB_HISTORY  move tablespace sysaux;
alter table WRI$_OPTSTAT_IND_HISTORY  move tablespace sysaux;
alter table WRI$_OPTSTAT_HISTHEAD_HISTORY  move tablespace sysaux;
alter table WRI$_OPTSTAT_HISTGRM_HISTORY  move tablespace sysaux;
alter table WRI$_OPTSTAT_AUX_HISTORY  move tablespace sysaux;
alter table WRI$_OPTSTAT_OPR  move tablespace sysaux;
alter table WRH$_OPTIMIZER_ENV  move tablespace sysaux;
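On some later versions the histogram and histhead history tables are partitioned, and a plain MOVE fails with ORA-14511 (as one commenter below found). In that case a sketch like this, generating per-partition moves from dba_tab_partitions, should serve the same purpose:

```sql
-- Generate a MOVE PARTITION statement for each partition of the
-- partitioned WRI$_OPTSTAT history tables
SELECT 'alter table SYS.' || table_name || ' move partition ' ||
       partition_name || ' tablespace SYSAUX;'
FROM   dba_tab_partitions
WHERE  table_owner = 'SYS'
AND    table_name LIKE 'WRI$_OPTSTAT%';
```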

Script to generate rebuild statements

 select 'alter index '||segment_name||'  rebuild online parallel (degree 14);' from dba_segments where tablespace_name = 'SYSAUX'
and segment_name like '%OPT%' and segment_type='INDEX'
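Rather than spooling the generated statements to a file and running them, the same loop can be driven with dynamic SQL. A sketch (the parallel degree of 14 follows the generator above; after an online parallel rebuild you may also want to reset the indexes to NOPARALLEL):

```sql
-- Rebuild each OPTSTAT index in SYSAUX via dynamic SQL
BEGIN
  FOR r IN (SELECT segment_name
            FROM   dba_segments
            WHERE  tablespace_name = 'SYSAUX'
            AND    segment_name LIKE '%OPT%'
            AND    segment_type  = 'INDEX') LOOP
    EXECUTE IMMEDIATE 'alter index ' || r.segment_name ||
                      ' rebuild online parallel (degree 14)';
  END LOOP;
END;
/
```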

Once completed it is best to check that the indexes are all usable

 select  di.index_name,di.index_type,di.status  from  dba_indexes di , dba_tables dt
where  di.tablespace_name = 'SYSAUX'
and dt.table_name = di.table_name
and di.table_name like '%OPT%'
order by 1 asc
/
INDEX_NAME                     INDEX_TYPE                  STATUS
------------------------------ --------------------------- --------
I_WRI$_OPTSTAT_AUX_ST          FUNCTION-BASED NORMAL       VALID
I_WRI$_OPTSTAT_HH_OBJ_ICOL_ST  FUNCTION-BASED NORMAL       VALID
I_WRI$_OPTSTAT_HH_ST           FUNCTION-BASED NORMAL       VALID
I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST FUNCTION-BASED NORMAL       VALID
I_WRI$_OPTSTAT_H_ST            FUNCTION-BASED NORMAL       VALID
I_WRI$_OPTSTAT_IND_OBJ#_ST     FUNCTION-BASED NORMAL       VALID
I_WRI$_OPTSTAT_IND_ST          FUNCTION-BASED NORMAL       VALID
I_WRI$_OPTSTAT_OPR_STIME       FUNCTION-BASED NORMAL       VALID
I_WRI$_OPTSTAT_TAB_OBJ#_ST     FUNCTION-BASED NORMAL       VALID
I_WRI$_OPTSTAT_TAB_ST          FUNCTION-BASED NORMAL       VALID
WRH$_OPTIMIZER_ENV_PK          NORMAL                      VALID
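If the check above reports anything other than VALID, a quick way to generate rebuild statements for just the broken ones is a sketch along these lines:

```sql
-- Generate rebuild statements for any unusable indexes in SYSAUX
SELECT 'alter index ' || owner || '.' || index_name || ' rebuild online;'
FROM   dba_indexes
WHERE  tablespace_name = 'SYSAUX'
AND    status = 'UNUSABLE';
```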

Finally, let's see what space has been saved with a retention of 1 day and a gather_schema_stats run for the SYSASDM schema

 exec dbms_stats.alter_stats_history_retention(1);
select dbms_stats.get_stats_history_retention from dual;
        MB SEGMENT_NAME                             SEGMEN
---------- ---------------------------------------- ------
         0 WRI$_OPTSTAT_OPR                         TABLE
         0 WRI$_OPTSTAT_AUX_HISTORY                 TABLE
         3 WRI$_OPTSTAT_TAB_HISTORY                 TABLE
         4 WRI$_OPTSTAT_IND_HISTORY                 TABLE
         8 WRI$_OPTSTAT_HISTGRM_HISTORY             TABLE
       104 WRI$_OPTSTAT_HISTHEAD_HISTORY            TABLE

        MB SEGMENT_NAME                             SEGMEN
---------- ---------------------------------------- ------
         0 WRH$_OPTIMIZER_ENV_PK                    INDEX
         0 I_WRI$_OPTSTAT_OPR_STIME                 INDEX
         0 I_WRI$_OPTSTAT_AUX_ST                    INDEX
         2 I_WRI$_OPTSTAT_IND_ST                    INDEX
         2 I_WRI$_OPTSTAT_TAB_ST                    INDEX
         3 I_WRI$_OPTSTAT_TAB_OBJ#_ST               INDEX
         4 I_WRI$_OPTSTAT_IND_OBJ#_ST               INDEX
         5 I_WRI$_OPTSTAT_H_ST                      INDEX
         9 I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST           INDEX
        41 I_WRI$_OPTSTAT_HH_ST                     INDEX
        96 I_WRI$_OPTSTAT_HH_OBJ_ICOL_ST            INDEX

I have also added another post on a similar theme, which may be useful.

44 Responses to “Purging statistics from the SYSAUX tablespace”

  1. [...] to this blog I found script below wich displays the various sysaux components and their [...]

  2. Senthil said

    Thanks. Great Job.

  3. Nitin said

    Hi,

    I followed the above steps and got stuck at the index rebuild, so if you get stuck there too, please follow the steps below.

    — To implement the solution, please execute the following steps:
    1- Take a full backup of the database
    2- Move the tables:

    For indexes, find the indexes for the above tables and rebuild them. In case an index is unusable, please refer the following example:

    select dbms_metadata.get_ddl('INDEX','I_WRI$_OPTSTAT_TAB_ST','SYS') from dual;

    Then drop and recreate the indexes using the obtained DDL’s.

    Regards,

    Nitin Ghodke

  4. Magpie said

    This is awesome. Thanks for the step-by-step.
    Modifying the snapshot settings retention never seemed to control the SYSAUX space for me. This did the trick.

  5. Aditya said

    Thanks for sharing really useful information.

    Cheers/Aditya

  6. karthik said

    very helpful for me when hit the space issue with sysaux tablespace on a production database, much appreciated..

  7. punit said

    Thanks John,

    very nice explanation ….and easy steps to follow..

    Punit

  8. punit said

    Great….easy to understand
    just add 1 step

    drop indexes and recreate if unusable
    select dbms_metadata.get_ddl('INDEX','WRH$_OPTIMIZER_ENV_PK','SYS') from dual;
    select dbms_metadata.get_ddl('INDEX','I_WRI$_OPTSTAT_AUX_ST','SYS') from dual;
    select dbms_metadata.get_ddl('INDEX','I_WRI$_OPTSTAT_OPR_STIME','SYS') from dual;
    select dbms_metadata.get_ddl('INDEX','I_WRI$_OPTSTAT_TAB_ST','SYS') from dual;
    select dbms_metadata.get_ddl('INDEX','I_WRI$_OPTSTAT_IND_ST','SYS') from dual;
    select dbms_metadata.get_ddl('INDEX','I_WRI$_OPTSTAT_TAB_OBJ#_ST','SYS') from dual;
    select dbms_metadata.get_ddl('INDEX','I_WRI$_OPTSTAT_IND_OBJ#_ST','SYS') from dual;
    select dbms_metadata.get_ddl('INDEX','I_WRI$_OPTSTAT_HH_ST','SYS') from dual;
    select dbms_metadata.get_ddl('INDEX','I_WRI$_OPTSTAT_HH_OBJ_ICOL_ST','SYS') from dual;
    select dbms_metadata.get_ddl('INDEX','I_WRI$_OPTSTAT_H_ST','SYS') from dual;
    select dbms_metadata.get_ddl('INDEX','I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST','SYS') from dual;

  9. Driss said

    A good article

  10. [...] backup script – example – logging outputWhere has consistent=y gone when using Datapump exportPurging statistics from the SYSAUX tablespaceHow to move the central Inventory (oraInventory) to another location. Purging ADR log and trace [...]

  11. [...] Purging statistics from the SYSAUX tablespace This entry was posted in Uncategorized. Bookmark the permalink. ← Recovering space of perfstat indexes LikeBe the first to like this post. [...]

    • maryel said

      A very useful post!!! Thanks!

      one small addition. When running dbms_metadata, you need to format to get the entire string.

      1* select dbms_metadata.get_ddl('INDEX','I_WRI$_OPTSTAT_TAB_ST','SYS') from dual
      SQL> /

      DBMS_METADATA.GET_DDL('INDEX','I_WRI$_OPTSTAT_TAB_ST','SYS')
      --------------------------------------------------------------------------------

      CREATE INDEX "SYS"."I_WRI$_OPTSTAT_TAB_ST" ON "SYS"."WRI$_OPTSTAT_TAB_HISTORY

      SQL> set long 5000
      SQL> set longc 5000
      SQL> select dbms_metadata.get_ddl('INDEX','I_WRI$_OPTSTAT_TAB_ST','SYS') from dual;

      DBMS_METADATA.GET_DDL('INDEX','I_WRI$_OPTSTAT_TAB_ST','SYS')
      -------------------------------------------------------------------------------------------------------------------------------------------------------

      CREATE INDEX "SYS"."I_WRI$_OPTSTAT_TAB_ST" ON "SYS"."WRI$_OPTSTAT_TAB_HISTORY" (SYS_EXTRACT_UTC("SAVTIME"))
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "SYSAUX"

  12. Thanks John for sharing knowledge, and this article is really helpful. !!!!!!!

  13. [...] Posts Purging statistics from the SYSAUX tablespaceRMAN backup script – example – logging outputORA-19809: limit exceeded for recovery files – [...]

  14. [...] John Hallas’ “Purging statistics from the SYSAUX tablespace” blog entry quite informative and detailed on managing occupants in the SYSAUX [...]

  15. Noons said

    This continues to be an often referenced post and is part and parcel of my toolkit nowadays.
    Thanks a lot for publishing it and season’s greetings to you.

    • John Hallas said

      Thanks Noons,
      This post gets the most hits of all on my blog and I use it a lot myself, as it contains a number of small commands that I can never quite remember the right syntax for.

  16. Patrick said

    Great article, well laid out. Just a warning that the purge can take a very long time, this is due to an Oracle bug, which is fixed in 11.2.0.3 or 11.2.0.2 Patch 7 on Windows Platforms.

    https://supporthtml.oracle.com/epmos/faces/ui/km/DocumentDisplay.jspx?id=10279045.8

  17. [...] this but no luck reducing space in the tablespace. I also tried following the instructions at http://jhdba.wordpress.com/2009/05/19/purging-statistics-from-the-sysaux-tablespace/ but the space is still [...]

  18. Shiva said

    Got the below error while dropping the index WRH$_OPTIMIZER_ENV_PK.

    DROP index WRH$_OPTIMIZER_ENV_PK
    *
    ERROR at line 1:
    ORA-02429: cannot drop index used for enforcement of unique/primary key

    • Shiva said

      Then executed the below steps

      alter table SYS.WRH$_OPTIMIZER_ENV MODIFY constraint WRH$_OPTIMIZER_ENV_PK disable;

      Table altered.

      DROP index WRH$_OPTIMIZER_ENV_PK;
      DROP index WRH$_OPTIMIZER_ENV_PK
      *
      ERROR at line 1:
      ORA-01418: specified index does not exist

      CREATE UNIQUE INDEX "SYS"."WRH$_OPTIMIZER_ENV_PK" ON "SYS"."WRH$_OPTIMIZER_ENV" ("DBID", "OPTIMIZER_ENV_HASH_VALUE")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "SYSAUX";

      Index created.

  19. Danish said

    We have 3 datafiles attached to our sysaux tablespace(SYSAUX1.dbf, SYSAUX2.DBF and SYSAUX3.dbf). Client want us to reduce the number of datafiles from 3 to 1.
    Is there any way we can reduce the number of datafiles for sysaux tablespace. Please suggest.

    Thanks,
    Danish

    • John Hallas said

      One question would be to ask your client what their reason is for wanting 1 datafile instead of 3. You have probably reached the maximum size of a datafile in your current SYSAUX t/s, which with an 8K blocksize is 32Gb.
      You could achieve this in a number of ways – if you could be bothered.

      1) Create a new database with the SYSAUX t/s using a 32K blocksize (assuming you have 8K currently) – not recommended.
      2) Create a new SYSAUX2 t/s with a bigfile tablespace and re-organise all the objects in SYSAUX into SYSAUX2, drop SYSAUX and then recreate as a bigfile and move everything back or rename the SYSAUX2 t/s to SYSAUX – not recommended and probably not supported by Oracle (continuing with SYSAUX2 as you may need to move everything back into SYSAUX before any upgrade)
      3) export and then import the data into a new database.

      There are many more options which depend on how big your current datafiles are and whether you have space to organise internally and make one datafile much smaller than the others (shrink it) but you will stil have 3 datafiles. You could also look at the dbms_redefinition package.

      Overall I think it is much too much work and too risky for little benefit.

      I hope that helps
      John

      • Danish said

        Hi John,

        Thanks for your reply.
        Right now we have 3 DBFs, each sized at 32G.
        We purged a few AWR tables as recommended by Oracle in one of the SRs. Right now at OS level we have over 100GB of datafiles for the SYSAUX tablespace, however internally there is only 7GB of data. The client wants to reclaim the space at mountpoint level.

        I told them that we can shrink the DBF and reclaim the space, but the client is asking if we can reduce the number of datafiles as well.

        Thanks,
        Danish

      • John Hallas said

        We are having similar problems on a DW with many tens of GB of AWR data – mostly global incremental stats, the synopsis tables. I will probably blog about that soon.
        I don't see how you can drop a datafile easily and I don't know what benefit it would bring anyway.

  20. Ray Hachem said

    Nice article. Thanks!

  21. Thank you for this concise walkthrough, John!
    We used your article as an inspiration at a client’s site where the retained stats became a runaway storage issue. Just the other day, this saved us 38 GB of storage space!

    Our slightly different approach was not to use the long-running purge_stats procedure, but instead

    – create table tmp as select * from wri$…
    – truncate table wri$… drop storage;
    – insert /*+ append */ into wri$… select * from tmp
    – check index validity and rebuild where necessary.

    Cheers,
    Uwe

    • John Hallas said

      I am going to post shortly on a similar method myself as well as dropping and recreating the AWR package when you have a large snapshot range gathered over time. Your comment has focussed me more on that now – Thanks

  22. Ben said

    Great guidelines!
    I am seeing this alert in our production ERP database but the largest is only 427 MB. Do we need to be concerned and run the steps you laid out?

            MB SEGMENT_NAME                             SEGMEN
    ---------- ---------------------------------------- ------
             0 WRH$_OPTIMIZER_ENV_PK                    INDEX
             0 I_WRI$_OPTSTAT_AUX_ST                    INDEX
            30 I_WRI$_OPTSTAT_OPR_STIME                 INDEX
            57 I_WRI$_OPTSTAT_TAB_ST                    INDEX
            80 I_WRI$_OPTSTAT_TAB_OBJ#_ST               INDEX
            88 I_WRI$_OPTSTAT_IND_ST                    INDEX
           101 I_WRI$_OPTSTAT_IND_OBJ#_ST               INDEX
           178 I_WRI$_OPTSTAT_H_ST                      INDEX
           208 I_WRI$_OPTSTAT_HH_ST                     INDEX
           402 I_WRI$_OPTSTAT_HH_OBJ_ICOL_ST            INDEX
           427 I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST           INDEX

  23. Ben said

    Thanks for the great guidelines!

  24. Ben said

    I am getting this error when I am doing it in my dev database (11.2.0.3, 64 bit on Windows 2008R2).

    SQL> alter table WRI$_OPTSTAT_HISTHEAD_HISTORY move tablespace sysaux;
    alter table WRI$_OPTSTAT_HISTHEAD_HISTORY move tablespace sysaux
    *
    ERROR at line 1:
    ORA-14511: cannot perform operation on a partitioned object

    SQL> alter table WRI$_OPTSTAT_HISTGRM_HISTORY move tablespace sysaux;
    alter table WRI$_OPTSTAT_HISTGRM_HISTORY move tablespace sysaux
    *
    ERROR at line 1:
    ORA-14511: cannot perform operation on a partitioned object

  25. Jason said

    I got stuck at:
    alter table WRI$_OPTSTAT_TAB_HISTORY move tablespace sysaux;

    with this error:
    ORA-01652: unable to extend temp segment by 128 in tablespace SYSAUX

    Our SYSAUX tablespace uses raw device which is a volume created by VxVM. When SYSAUX is full it means all the space in the volume is consumed. I’m wondering if i can expand the volume to finish the table move, then change the volume size back?

    • John Hallas said

      Why not move the table to a different tablespace and then back again?

      • Jason said

        I thought it was not safe. For example after I moved the table out of SYSAUX and before move it back, there is data that need to be written to that table, what happens?

  26. […] Purging statistics from the SYSAUX tablespace […]

  27. […] Purging statistics from the SYSAUX tablespace […]

  28. […] http://jhdba.wordpress.com/2009/05/19/purging-statistics-from-the-sysaux-tablespace/ […]

  29. Hello John,
    Our synopsis table WRI$_OPTSTAT_SYNOPSIS$ grew out of proportion after we realised we had turned on incremental stats. Now that we have turned it off, after looking everywhere I finally asked Oracle support how to purge the rows in that table. They gave a solution that worked in my QA environment by wiping out the table, but the process issued delete statements and hence did not free up space. Right now the table is empty but still has plenty of subpartitions and is consuming 7GB.
    What would be your recommendation to get back that space? Appreciate your help on this.

    Regards
    Kumar

    • John Hallas said

      If the table is empty I would either drop the sub-partitions or more likely just get the ddl from dbms_metadata (or sql developer) and drop and recreate the table. Doesn’t sound too difficult or risky

      Good luck

      John

  30. Hello John, Thanks for your response via email.
    I figured out after a deeper search that there is a hidden parameter that can be used to speed up the purge. By default it is set to 10,000 records per delete, so it has to scan the same table 10 times to delete just 100K records. Since this is a session-level parameter I tested it in our dev environment to make sure it works without issue, then moved it into my script in prod. I set it to 5M so that each pass completely deletes all the records for one day, and I loop through the days until I hit my retention.
    The parameter is _optimizer_purge_stats_iteration_row_count. FYI, you might already know this; just in case.

    Also I have another question. Upon the table move of hist tables above, you mentioned any gather_stats job would fail. But if we move it and rebuilt the indexes without issues, the next dbms_stats should not have any issue, right? Please confirm.

    Regards
    Kumar
