Oracle DBA – A lifelong learning experience

Archive for the ‘11g new features’ Category

Serious BUG alert – RMAN Duplicate overwrites datafiles at original location

Posted by John Hallas on February 11, 2014

Please be aware of a serious bug identified with RMAN duplicate database on the same node, affecting versions 11.2.0.2 and 11.2.0.4.

Bug 13498382  ORA-1122 ORA-1208 RMAN Duplicate overwrites datafiles at original location despite db_create_file_dest

This note gives a brief overview of bug 13498382.
The content was last updated on: 07-FEB-2014

Affects:

Product (Component): Oracle Server (Rdbms)
Range of versions believed to be affected: versions below 12.2
Versions confirmed as being affected:
Platforms affected: Generic (all / most platforms affected)

Fixed:

The fix for 13498382 is first included in

Interim patches may be available for earlier versions.


Description

The new name for a datafile created during recovery may not be populated during the switch and catalog steps of the duplicate command.

You can be affected by this issue if you execute the following steps with RMAN:

1. Take a backup of the database using RMAN
    backup database;
    sql 'alter system switch logfile';

2. A datafile is added to the source database, or a new tablespace is created
    create tablespace TC;
    sql 'alter system switch logfile';

3. RMAN DUPLICATE is executed to another DB on the same host.

     connect target sys/pwd@DB1
     connect auxiliary sys/pwd;
     run   {
     set until sequence <seq#>;
     duplicate target database to DB2;
    }  

4. The DB is recovered past the point in time the datafile was added

5. The datafile is created at the original location instead of at DB_CREATE_FILE_DEST, overwriting the file at SOURCE

At SOURCE, we see the datafile going OFFLINE after a checkpoint

Read of datafile '+DG/db_unique_name/datafile/tc.1232.769779085' (fno 7) header failed with ORA-01208
Rereading datafile 7 header failed with ORA-01208
Errors in file /diag/rdbms/db_name/db_unique_name/trace/db_unique_name_ckpt_9604.trc:

ORA-63999: data file suffered media failure
ORA-01122: database file 7 failed verification check
ORA-01110: data file 7: '+DG/db_unique_name/datafile/tc.1232.769779085'
ORA-01208: data file is an old version - not accessing current version

At TARGET / AUX, we see the datafile being created at the original location

alter database recover logfile  '+DG/db_unique_name/archivelog/2011_12_13/thread_1_seq_934.1321.769779115'

Media Recovery Log +DG/db_unique_name/archivelog/2011_12_13/thread_1_seq_934.1321.769779115
File #7 added to control file as 'UNNAMED00007'. Originally created as: '+DG/db_unique_name/datafile/tc.1232.769779085'
Errors with log +DG/db_unique_name/archivelog/2011_12_13/thread_1_seq_934.1321.769779115
Recovery interrupted!
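Until the fix or an interim patch is applied, one defensive option when duplicating onto the same host is to map every datafile name explicitly away from the source locations rather than relying on DB_CREATE_FILE_DEST alone, using the standard DB_FILE_NAME_CONVERT clause of DUPLICATE. This is a minimal sketch only (the disk group paths and <seq#> are placeholders, and it is not the workaround from the MOS note, so test it before relying on it):

connect target sys/pwd@DB1
connect auxiliary sys/pwd;
run
{
  set until sequence <seq#>;
  # explicitly convert source file names to auxiliary-specific paths
  duplicate target database to DB2
    db_file_name_convert ('+DG/db1', '+DG/db2');
}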

Posted in 11g new features | 2 Comments »

Invisible Indexes – a real world example and a bit of LogMiner

Posted by John Hallas on February 11, 2014

We had a requirement to perform a one-off update on a table of around 1.2M rows. There were 11K statements similar to the one below, with varying values and predicates, each updating anywhere from 20 to 500 rows. The potential elapsed time was around 44 hours due to the volume of full table scans involved.

The obvious thing to do was to create an index, but this was production and it was urgent, so there was no time to go through the full route-to-live test cycle.

This seemed a great opportunity to use an invisible index, the theory being that the optimiser would not use it for any of the other currently running work and only our session would see it.

This demo shows how it was tested and what the benefits were along with a couple of thoughts on invisible indexes.

I must admit I wish I could find a consistent method of showing explain plans; sometimes they come out lined up properly, other times they are a mess. This entry is somewhere in between.

Firstly, let's run an example update statement to see what access path it uses.

SQL> Update JOHN.TEST_TABLE Set WK_WAC_RATE=0.5338 Where SKU_ITEM_KEY=4878 AND TXN_WK_KEY=20140127;

24 rows updated.

Elapsed: 00:00:00.46

-------------------------------------------------------------------------------------------------------------
| Id  | Operation            | Name                 | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-------------------------------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT     |                      |    16 |   256 | 13696   (1)| 00:02:45 |       |       |
|   1 |  UPDATE              | TEST_TABLE |       |       |            |          |       |       |
|   2 |   PARTITION RANGE ALL|                      |    16 |   256 | 13696   (1)| 00:02:45 |     1 |   118 |
|*  3 |    TABLE ACCESS FULL | TEST_TABLE |    16 |   256 | 13696   (1)| 00:02:45 |     1 |   118 |
-------------------------------------------------------------------------------------------------------------

Create an index and re-run to see that the index is used and the results are better; don't forget to gather stats.

CREATE INDEX "JOHN"."INDX_1" ON "JOHN"."TEST_TABLE" ("SKU_ITEM_KEY", "TXN_WK_KEY");

execute dbms_stats.gather_table_stats('JOHN','TEST_TABLE',null,10,cascade=>true);

SQL> Update JOHN.TEST_TABLE Set WK_WAC_RATE=0.5338 Where SKU_ITEM_KEY=4878 AND TXN_WK_KEY=20140127;

24 rows updated.

Elapsed: 00:00:00.01

------------------------------------------------------------------------------------------
| Id  | Operation         | Name                 | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT  |                      |    27 |   432 |    20   (0)| 00:00:01 |
|   1 |  UPDATE           | TEST_TABLE |       |       |            |          |
|*  2 |   INDEX RANGE SCAN| INDX_1               |    27 |   432 |     3   (0)| 00:00:01 |
------------------------------------------------------------------------------------------

Note that the index can be made invisible with a command-line statement, but it does not seem to be possible when creating the index through the OEM12 GUI.

 alter index indx_1 invisible;
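For reference, an index can also be created invisible from the outset rather than being altered afterwards; a syntax-only sketch using the same demo objects:

CREATE INDEX "JOHN"."INDX_1" ON "JOHN"."TEST_TABLE" ("SKU_ITEM_KEY", "TXN_WK_KEY") INVISIBLE;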

SQL> select index_name,visibility from user_indexes;

TEST_TABLE_PK                  VISIBLE
INDX_1                         INVISIBLE

Run the query to prove that the invisible index is not used by the optimizer

SQL> Update JOHN.TEST_TABLE Set WK_WAC_RATE=0.5338 Where SKU_ITEM_KEY=4878 AND TXN_WK_KEY=20140127;

24 rows updated.

-------------------------------------------------------------------------------------------------------------
| Id  | Operation            | Name                 | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-------------------------------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT     |                      |    16 |   256 | 13696   (1)| 00:02:45 |       |       |
|   1 |  UPDATE              | TEST_TABLE |       |       |            |          |       |       |
|   2 |   PARTITION RANGE ALL|                      |    16 |   256 | 13696   (1)| 00:02:45 |     1 |   118 |
|*  3 |    TABLE ACCESS FULL | TEST_TABLE |    16 |   256 | 13696   (1)| 00:02:45 |     1 |   118 |
-------------------------------------------------------------------------------------------------------------

Now alter the session so that only this session can take advantage of the index.

SQL> ALTER SESSION SET OPTIMIZER_USE_INVISIBLE_INDEXES=TRUE;

Session altered.

SQL>  Update JOHN.TEST_TABLE Set WK_WAC_RATE=0.5338 Where SKU_ITEM_KEY=4878 AND TXN_WK_KEY=20140127;

24 rows updated.

------------------------------------------------------------------------------------------
| Id  | Operation         | Name                 | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT  |                      |    27 |   432 |    20   (0)| 00:00:01 |
|   1 |  UPDATE           | TEST_TABLE |       |       |            |          |
|*  2 |   INDEX RANGE SCAN| INDX_1               |    27 |   432 |     3   (0)| 00:00:01 |
------------------------------------------------------------------------------------------

The code is running much more efficiently and yet we have not impacted the execution of any other SQL that may be running against this table, although the index will need updating with every insert, update and delete statement, whether the index is visible or not.

Moving on to how you can tell whether an index is visible and when it was made visible: USER_INDEXES will show the current state of the index.

SQL> select index_name,visibility from user_indexes;

TEST_TABLE_PK                  VISIBLE
INDX_1                         INVISIBLE

How do you know when it was made invisible/visible, though? Does the LAST_DDL_TIME tell you?

OBJECT_NAM OBJECT_TYPE         CREATED                  LAST_DDL                 VISIBILIT
---------- ------------------- ------------------------ ------------------------ ---------
INDX_1     INDEX               10-FEB-14 08:38          10-FEB-14 08:38          INVISIBLE

SQL> alter index indx_1 visible;

select object_name, object_type, to_char(CREATED,'DD-MON-YY HH24:MI') Created,
       to_char(LAST_DDL_TIME,'DD-MON-YY HH24:MI') Last_DDL, Visibility
from user_indexes U, user_objects O
where o.object_name = u.index_name
and index_name = 'INDX_1';

OBJECT_NAM OBJECT_TYPE         CREATED                  LAST_DDL                 VISIBILIT
---------- ------------------- ------------------------ ------------------------ ---------
INDX_1     INDEX               10-FEB-14 08:38          10-FEB-14 10:41          VISIBLE

So yes, if the last DDL on the index was the change to its visibility then the time can be identified. However, what that DDL actually was cannot be determined from the dictionary, and an alternative option is to use LogMiner.

Ensure supplemental logging is enabled, perform some actions, and then mine the logs to see if you can identify them.

SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

EXECUTE DBMS_LOGMNR_D.BUILD(OPTIONS=>DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);

SQL> alter index john.indx_1 invisible;

Index altered.

SQL> alter index john.indx_1 visible;

Index altered.

SQL> alter index john.indx_1 invisible;

Index altered.

alter session set nls_date_format = ‘DD-MON-YYYY HH24:MI:SS’;

execute dbms_logmnr.start_logmnr( -
  starttime => '10-Feb-2014 12:00:00', -
  endtime => '10-Feb-2014 12:34:00', -
  options => dbms_logmnr.dict_from_online_catalog + dbms_logmnr.continuous_mine);

select operation, sql_redo, sql_undo from v$logmnr_contents
where seg_owner = 'JOHN' and seg_name = 'INDX_1';

select substr(sql_redo,1,100) sql_redo, to_char(timestamp,'DD-MON-RRRR HH24:MI:SS') timestamp
from v$logmnr_contents
where seg_owner = upper('JOHN')
and sql_redo like '%vis%';

SQL_REDO                                        TIMESTAMP
----------------------------------------------- --------------------
alter index john.indx_1 invisible;              10-FEB-2014 12:32:01
alter index john.indx_1 visible;                10-FEB-2014 12:32:19
alter index john.indx_1 invisible;              10-FEB-2014 12:32:24
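When you have finished querying v$logmnr_contents it is good practice to end the LogMiner session and release its resources:

EXECUTE DBMS_LOGMNR.END_LOGMNR;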

This last section on identifying when the index visibility was changed came from an Oracle-L discussion, and the suggestion of log mining came from David Fitzjarrel. In fact, it was because I had recently read that discussion that I got the idea to use an invisible index in the first place.

Posted in 11g new features, Oracle | 8 Comments »

11.2.0.4 new features and an OPTIMIZER_DYNAMIC_SAMPLING change

Posted by John Hallas on December 9, 2013

On 27th August 2013, 11.2.0.4, the final release of 11gR2, was made available, along with a new features document. I will give a quick bullet list of the new features and then discuss one very important change that is not mentioned.

  •  Oracle Data Redaction – provides a new ASO (cost £££) option to redact specified information, even at runtime, from within the database
  •  Trace File Analyzer and Collector – a diagnostic collection utility to simplify diagnostic data collection on Oracle Clusterware and Oracle Grid Infrastructure – looks well worth investigating
  •  RACcheck – The Oracle RAC Configuration Audit Tool – performs regular health checks as well as pre- and post-upgrade best practices assessments. Only written for Linux so it needed adapting for HP-UX, and it needs the root password – an issue for many DBAs
  •  Database Replay Support for Database Consolidation – run lots of scenarios (batch, online, monthly processing) at the same time even though they were captured at different periods
  •  Optimization for Flashback Data Archive History Tables – use the OPTIMIZE DATA clause when creating or altering a flashback data archive

So, the one that does not appear in that list, probably because it is not a new feature as such, is an additional value for the init.ora parameter OPTIMIZER_DYNAMIC_SAMPLING. This comes into play when a table does not have any statistics and the parameter is enabled. The previous default setting of 2 meant that dynamic statistics were used if at least one table in the statement had no statistics, with the sample based on 64 blocks; the value could range between 2 and 10, each increment doubling the number of blocks sampled. The new value of 11 means that the optimizer will gather dynamic statistics automatically whenever it deems it necessary, based on the number of blocks it thinks appropriate.
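As a quick illustration of how the new level is used (nothing here is site-specific), it can be set at session or instance level like any other optimizer parameter:

SQL> show parameter optimizer_dynamic_sampling
SQL> alter session set optimizer_dynamic_sampling = 11;
SQL> alter system set optimizer_dynamic_sampling = 11 scope=both;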

My testing has shown a couple of anomalies between the two versions it exists on (11.2.0.4 and 12.1.0.1).

Posted in 11g new features, 12c new features, Oracle | Leave a Comment »

Large CLOBs and attention to detail needed while reading Oracle Support notes.

Posted by John Hallas on October 3, 2013

This post was forwarded to me by Vitaly Kaminsky, who used to work with me but has now bettered himself elsewhere. He writes:

I have recently been involved with performance tuning of the database layer for a major Dutch website which was preparing for the “crazy days” sales on the 2nd to 5th of October.

The general setup was as follows:
2-node physical RAC cluster with fast storage layer running ORACLE 11gR2 SE and ASM.

The important bit above is the “SE” which stands for Standard Edition and implies no performance packs, no AWR, no SQL tuning and no partitioning.

The shop application makes heavy use of Oracle Text functionality and the purpose of the tuning was to ensure we could get 12,000 orders per hour through the system. Each order would create a single insert and numerous updates of a CLOB in XML format. This was actually all the information we managed to get from the vendor on what the application performance tuning should focus on.

As expected, after the first test runs it became apparent that the stumbling block was literally a block of CLOB storage. When there was enough padding space the application ran fine, and then, suddenly, the database would grind to a halt with “enq: HW – contention” waits.

Posted in 11g new features, Oracle | Leave a Comment »

SQL Plan Management – how to really utilize and not just implement

Posted by John Hallas on August 28, 2013

The title of this post is intentionally slightly unclear, and hopefully it has intrigued people enough to view the post and, even better, add their comments.

SQL Plan Management has been around since 11g came out, back in 2007. It does not require a tuning pack, so the DBMS_SPM package can be used without additional licensing; however, if the SQL Tuning Advisor is used to generate new profiles via a tuning task, that does require a Tuning Pack licence.
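As a simple illustration of the no-extra-cost route, existing plans can be loaded straight from the cursor cache into baselines with DBMS_SPM; this is only a sketch and the sql_id below is a placeholder:

variable cnt number
execute :cnt := dbms_spm.load_plans_from_cursor_cache(sql_id => '&sql_id');
print cnt

select sql_handle, plan_name, enabled, accepted from dba_sql_plan_baselines;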

There are plenty of articles available showing how to use SPM to manage a SQL statement and ensure it has the best execution plan. What intrigues me is how to use it on an ongoing basis, across a number of databases and across all standard (non ad-hoc) code; in other words, how to implement a site standard for the use of SPM that is applicable to the databases that have two dedicated DBAs monitoring activity continuously (type 1), to the databases we all have that pretty much run by themselves and need little maintenance (type 3), and to the many systems that lie in between (type 2).

A colleague, Steve Senior, has produced a flow chart of how we might deliver SPM across all systems, but the stumbling block is how we manage changes on an ongoing basis: plans that must change because changing statistics (derived from data changes) require new execution plans, new plans that will probably need to be evolved after code changes, plus the inclusion of totally new SQL statements, perhaps based on new tables which have been added to the schema.

Posted in 11g new features, Oracle | 2 Comments »

Rebuild of standby using incremental backup of primary

Posted by John Hallas on March 18, 2013

I have a long to-do list of things I want to test out, and one of them is rebuilding a standby using an incremental backup from primary. Then along came a note from my ex-colleague Vitaly Kaminsky, who had recently been faced with the problem when a customer relocated two primary 2-node RACs and their single-node standby databases to a new location and happened to start the standby databases in read-only mode. Vitaly tells the story:

As you may know, read-only mode will prevent any redo logs being applied to the standby database, but on the surface everything looks OK – no errors, and the MRP0 process is running and showing “applying log” in v$managed_standby.

The only problem is – the recovery is “stuck” on the last log the database was trying to apply before it was opened in read-only mode.

Unfortunately, the customer did not notice the omission for over two weeks, and by the time I had a chance to look at the environments there were about 50G of redo logs accumulated for each database, and some of them were missing and/or corrupt, which ruled out simply SCPing the logs over to the standby server and registering them with the standby databases.

Another factor which allowed the standby databases falling behind to go unnoticed was the absence of any error messages in the alert logs – every single log was shown as shipped and received.

In a case like this the only option is to rebuild the standby database, and in the past I have done this using the traditional RMAN duplicate for standby routine. However, in this particular case I had two databases to rebuild, one small and one large, and the network between the primary cluster and the standby was slow as well.

For the small database I decided to use the Grid Control GUI-based wizard for creating the standby database; this process is quite straightforward and described in the documentation. For the large one, however, duplicating the database using RMAN would have been too slow, there may have been performance degradation during the run, and the maintenance window was too short for an out-of-hours run.

This was a perfect case to try “incremental backup” approach. This method is described in a number of sources (if you Google it) but none of the “vanilla” cases worked for me.

I will not be listing the names and detailed output due to the production nature of the work – just the list of steps.

So, this is what I did at the end of the day:

PRE-REQUISITES:

Primary database can be single node or RAC and running OK.
No downtime of Primary is required.
All Dataguard settings are intact

Step-by-step:

1. Get the latest SCN from standby:

 select to_char(current_scn) from v$database;

10615562421
2. Create an incremental backup on Primary for all the changes since the SCN on standby:

[oracle@primary backup]$ rman target /

connected to target database: PRIMARY (DBID=720063942)

RMAN> run
2> {
3> allocate channel d1 type disk;
4> allocate channel d2 type disk;
5> backup incremental from scn 10615562421 database format
6> '/tmp/backup/primary_%U';
7> release channel d1;
8> release channel d2;
9> }
3. Create a copy of the control file on Primary:

 alter database create standby controlfile as '/tmp/backup/stby.ctl';
4. SCP the backup files and standby control file to the standby server. A little tip: if you copy the backup files to a directory with the same name (like /tmp/backup here), your controlfile will know about them and you can bypass the registration bit later on.
5. The next step is to replace the standby control file with the new one. It may sound simple, but this proved to be the trickiest part because the standby controlfile is OMF and in ASM. You will need to use RMAN for the restore operation:

- Switch the database to nomount, then:

restore controlfile from '/tmp/backup/stby.ctl';

- Mount the database.

At this point you have the controlfile with the information about the files as they are on the Primary side, so, the next step is to register everything we have on Standby side:

catalog start with '+data/standby/';

Check the output and answer YES to register any reported standby files.

- Shutdown immediate your standby instance.

 RMAN> switch database to copy;

RMAN> report schema;

At this stage you should have a nice and clean list of the actual standby files.

Now we are ready to apply our incremental backup to bring the standby in line with Primary:

RMAN> recover database noredo;

 Because the online redo logs are lost, you must specify the NOREDO option in the RECOVER command.

You must also specify NOREDO if the online logs are available but the redo cannot be applied to the incrementals.

If you do not specify NOREDO, then RMAN searches for redo logs after applying the incremental backup, and issues an error message when it does not find them.

When the recovery completes, you may start the managed recovery process again:

SQL> alter database recover managed standby database using current logfile disconnect;

Provided all FAL settings are correct, your managed recovery will pick up all the logs generated on primary since the incremental backup and you will have a fully synchronised standby again.
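Once managed recovery is running again, a quick way to confirm that the standby is catching up is to watch the apply process and the last applied sequence; illustrative queries only:

SQL> select process, status, thread#, sequence# from v$managed_standby;
SQL> select thread#, max(sequence#) from v$archived_log where applied = 'YES' group by thread#;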

Posted in 11g new features, Oracle | 4 Comments »

Using Datapump from SQLDeveloper

Posted by John Hallas on November 8, 2012

One problem that we all have is exporting/importing between different versions of the Data Pump client. The following error is not uncommon:

UDI-00018: Data Pump client is incompatible with database version 11.01.00.07.00

Coincidentally, while having some problems with this, a colleague, Vitaly Kaminsky, worked out a method of overcoming it by using SQL Developer, and below is his document describing how to do it. Any kudos to me please; any problems, contact him at uk.linkedin.com/pub/vitaly-kaminsky/20/434/244/

The “quick and dirty” way to save time while copying data, DDL, schemas and objects between Oracle databases using SQL Developer vs traditional Exp/Imp routines.

As every DBA knows, small daily tasks and requests like “will you please copy…” or “please refresh…” may quickly consume a considerable amount of time and effort, leaving you wondering where the day has gone. One of the most convenient ways to satisfy those requests is to use free tools like SQL Developer (yes, there are many others, like Toad, but then you have to pay licence fees).

Most of us have a considerable estate to look after, often consisting of some large production clusters and dozens, or hundreds, or thousands of test, development, integration, UAT and other databases, running on VMs or physical boxes.

When using traditional expdp/impdp routines, copying data and DDL between those small DBs may require more time for the setup than for the actual process of moving data. I would estimate the time required to check the filesystem and permissions, create directory objects etc. to be in the region of 20 to 30 minutes per request. This is the time you actually save by using the SQL Developer Database Copy feature, because the other bits, like creating the script and specifying schemas, take about the same time.

The actual movement of data by Database Copy is performed by SQL Developer running DDL and DML on the target system and pumping the I/O via the initiating workstation. This is the limiting factor, and I would suggest using this method only if the actual volume of data does not exceed 500GB; otherwise the setup time saving will be lost on I/O.

Using SQL Developer 3.1.06, the process itself is wonderfully simple; you just need to make sure your connections have been set up properly with sufficient privileges:

1. Navigate to Database Copy in Tools menu:       

2. Select Source and Target databases and whether you want to copy schemas or objects:

3. Select required objects (ALL for full schemas):

4. Select source schema(s):

5. Select objects to copy (this example shows the selection of system objects which is sufficient for the demo):

6. On the next step you can walk through the objects to apply any required filtering:

7. Proceed to summary:

8. Once you press FINISH, the process will start and you can monitor the progress in the log window (just an example of text here):

That’s it, all done.

The limitation of the above is that you can't save the specs for subsequent reuse, but the whole purpose of this exercise is to save time on one-off requests.

Posted in 11g new features, Oracle | 1 Comment »

Permissions problem with 11.2.0.3 and tnsnames.ora

Posted by John Hallas on November 5, 2012

There is a bug documented in MoS regarding the setting of permissions by the root.sh script (which calls roothas.pl). This causes the grid home to be owned by root with group permissions given to the oinstall group:

app/gridsoft/11.2.0.3 $ls -ld

drwxr-x---  65 root       oinstall      2048 Feb 27  2012 . 

This means any user who is not in the oinstall group is unable to run any programs, such as sqlplus, from that home. The bug reference and title is Bug 13789909: SIHA 11.2.0.3 INSTALL CHANGES THE GRID HOME PERMISSION TO 750.

The bug is dismissed as not being a problem because nobody should be running executables from the grid home; they should be run from the RDBMS home. A fair point, until you consider the location of the tnsnames.ora file. Any user owning a dblink needs access to the tnsnames file, and even if you link the entry in Grid/network/admin to RDBMS/network/admin the user still does not have access to the tnsnames.ora file.

This has only happened with 11.2.0.3 and only on standalone (SIHA) grid installs. It applies to HP-UX and OEL5 as far as I am aware, although it was only reported against OEL. The resolution is easy enough – in our case it would be:

chmod 755 /app/gridsoft/11.2.0.3

However, I do think Oracle should address this as the bug it is and not ignore it.

Posted in 11g new features, security | 3 Comments »

Excellent Optimizer Statistics articles

Posted by John Hallas on April 12, 2012

For anybody who is interested in reading about optimizer statistics and gaining a clear understanding on what they can do and how they can be managed then I suggest reading the following two white papers

Part 1 – Understanding Optimizer Statistics

Part 2 – Best Practices for Gathering Optimizer Statistics

Part 2 contains the best and most easily understood explanation I have seen of the problems with bind variable peeking and how they were addressed by adaptive cursor sharing.

Overall, both documents are well written with good explanations and diagrams, and I think anybody who has an interest in the Oracle database engine and in tuning databases for both consistency and performance should consider these articles a must-read.

Posted in 11g new features, Oracle | Leave a Comment »

Speeding up the gathering of incremental stats on partitioned tables

Posted by John Hallas on January 4, 2012

11g introduced incremental global stats, and the table WRI$_OPTSTAT_SYNOPSIS$ contains the synopsis data used to maintain the global statistics. This table can grow very large, and Robin Moffat has produced a good blog post about the space issues – note that we both worked at the same site, so it is the same DW being discussed by both of us.
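For reference, incremental global statistics are enabled per table through DBMS_STATS preferences; the owner and table name below are placeholders:

exec dbms_stats.set_table_prefs('SCOTT','PART_TAB','INCREMENTAL','TRUE');
select dbms_stats.get_prefs('INCREMENTAL','SCOTT','PART_TAB') from dual;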

Apart from the space usage that Robin refers to, another worrying aspect is the time taken when gathering stats on a partitioned table; most of that time is spent running a delete statement:

DELETE FROM SYS.WRI$_OPTSTAT_SYNOPSIS$
WHERE SYNOPSIS# IN
  (SELECT H.SYNOPSIS#
   FROM SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$ H
   WHERE H.BO# = :B1
   AND H.GROUP# NOT IN
     (SELECT T.OBJ# * 2
      FROM SYS.TABPART$ T
      WHERE T.BO# = :B1
      UNION ALL
      SELECT T.OBJ# * 2
      FROM SYS.TABCOMPART$ T
      WHERE T.BO# = :B1))

I will demonstrate the problem and a simple solution, and you will be able to see the significant performance improvements achieved.

Posted in 11g new features, Oracle | 13 Comments »