Oracle DBA – A lifelong learning experience

Archive for the ‘11g new features’ Category

Progress update on my AWR repository

Posted by John Hallas on November 20, 2014

I received an email from a team member yesterday

Have you seen this?

Interesting idea – ever thought of implementing it?

Well of course I have implemented an AWR data repository, so I thought I would catch up on where I am with it and how it is being used.

The database started out on HP-UX about two years ago and is now on Linux. The repository now holds a total of 139 DBIDs and we have data going back 400 days for the majority of them. The storage is over 2Tb, of which the SYSAUX tablespace contains about 95% – we are compressing some data but that is an area I need to look into further.

Why 400 days, you might ask – well, I work in a retail organisation and, as Easter is a movable feast (pun intended), a 400-day window allows us to provide data covering Easter whenever it might fall. It is possible that we would not have data for the previous Easter if we only kept a 365-day period, and Easter is a key trading period, second only to Christmas.
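To make the arithmetic concrete – this is an illustrative sketch, not from the post – the gap between one Easter Sunday and the next can stretch well past 365 days, which is exactly when a one-year retention window loses the comparable trading period. The `easter()` function below is the standard Anonymous Gregorian algorithm:

```python
from datetime import date

def easter(year):
    """Easter Sunday for a Gregorian year (Anonymous Gregorian algorithm)."""
    a = year % 19
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return date(year, month, day + 1)

# Easter 2013 fell on 31 March; Easter 2014 on 20 April -- 385 days apart,
# so a 365-day AWR retention would have purged last Easter's data 20 days
# before this year's Easter arrived, while a 400-day window keeps it.
gap = (easter(2014) - easter(2013)).days
```

With a 385-day worst-case gap, 400 days of retention leaves comfortable headroom.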

I suppose the obvious question is how we have used the data and whether it was all worth the effort.

The repository has been used for workload forecasting, security reviews, auditing, performance reviews and validation of our existing systems.

Workload forecasting is an interesting and unexpected benefit of the repository. It wasn't something that had occurred to me when I originally proposed the idea, but it has proved very useful. The process is to review how much work a system did a year ago, look at current usage, calculate the difference and then use that to project forward. For the system I am discussing – which looks at how long it takes us to process sales data from stores – we have used a combination of metrics, including db time and disk read times (both sequential and scattered), to produce a 'magic number' which is what we are working against. That forecast is now being mapped against actuals and is proving quite accurate. It has allowed us to prepare the database server to support increased volumes by changing disk and adding CPU and memory, to maintain and improve on last year's performance with increased data capture and store numbers.
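As a rough sketch of that forecasting approach – the weights and metric names here are hypothetical, since the post does not publish its actual formula – the idea is to blend the chosen metrics into one 'magic number', derive a year-on-year growth factor, and apply it to the period being forecast:

```python
# Hypothetical weights: the post does not publish its actual blend of
# db time and sequential/scattered read times.
WEIGHTS = {"db_time": 0.5, "seq_read_s": 0.25, "scat_read_s": 0.25}

def magic_number(metrics):
    """Collapse several workload metrics into a single comparable figure."""
    return sum(WEIGHTS[name] * metrics[name] for name in WEIGHTS)

def growth_factor(same_period_last_year, current):
    """Year-on-year growth of the blended workload figure."""
    return magic_number(current) / magic_number(same_period_last_year)

def project(forecast_base, same_period_last_year, current):
    """Project a future period's workload by applying the observed growth."""
    return forecast_base * growth_factor(same_period_last_year, current)
```

If the blended number has grown 10% year on year, last year's peak-period workload multiplied by 1.1 becomes the capacity target to prepare for.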

Posted in 11g new features, 12c new features, Oracle, security | 5 Comments »

Preventing standby databases opening in Active DataGuard mode + chopt !!!

Posted by John Hallas on August 6, 2014

It has long been a concern of mine that it is relatively easy to open a standby database inadvertently and then be potentially liable for a licensing cost, as usage is shown in the DBA_FEATURE_USAGE_STATISTICS view.

In fact we logged an SR on the subject, based on experiences of failing over using Data Guard broker:

On a dgmgrl switchover the Oracle Restart database 'start option' property is not modified to 'mount' even though dgmgrl modifies the Oracle Restart database role property to PHYSICAL_STANDBY. Nor does a dgmgrl switchover modify the 'start option' property to 'open' even though dgmgrl modifies the role property to PRIMARY. The potential is there to accidentally open a standby database and invoke Active Data Guard on any reboot/restart of services, which is what we do not want.

After a fair bit of discussion we got this rather unhelpful response:

Starting in 11g, the broker no longer changes the state of the database, i.e. from mount to open. It is up to the user what state they want their database in and to set up start_option appropriately.
In the course of role changes or other operations that involve database restarts, the broker will restart the database based on its role, i.e. open for a PRIMARY database and mounted for a PHYSICAL STANDBY.
If the user wants the new standby to only be in mounted mode for "srvctl start database …", they must manually change the start_options for the databases appropriately after the role change – they will have to change it to open for the new primary as well.
In other words, what you are seeing is as designed!
We recommend that you leave the start option at OPEN for both databases, so that regardless of which is the primary, you can be sure it will be opened after a role change. This of course means the standby will be opened, which is okay as long as the user has paid for the Active DG option.
If the user has not paid for the Active DG option, you should modify the start option for both databases prior to the role change, so that after the role change the new primary will open and the new standby will be mounted.
The broker does not modify the startup options during a role change in the Clusterware repository. It only modifies the role of the database.
Set the start options to OPEN for both databases and you won't run into this issue.

They did provide a script but we ended up using a startup trigger, something like this:

create or replace trigger sys.no_adg_on_startup
after startup on database
declare
  ctl varchar2(10);
begin
  select controlfile_type into ctl from v$database;
  if ctl = 'STANDBY' then
    execute immediate 'alter system set dg_broker_start=FALSE scope=memory';
  end if;
end;
/

However, that is not the purpose of this post. It is more about coming across an undocumented parameter in 11gR2 which ensures that a standby cannot be opened (and so trigger the need for an ADG licence) provided the MRP process is running. I think that came from a post by Frits Hoogland on the Oracle-L list. This is exactly what we wanted, and a colleague, Hamid Ansari, prepared a summary and test of the parameter.

Posted in 11g new features, Oracle | Leave a Comment »

The value of audit_trail=DB,EXTENDED

Posted by John Hallas on July 15, 2014

I was recently reading a blog entry by Dominic Brooks regarding auditing and I was intrigued by the line referring to the audit_trail parameter being set to DB,EXTENDED:

Behaves the same as AUDIT_TRAIL=DB, but also populates the SQL bind and SQL text CLOB-type columns of the SYS.AUD$ table, when available.

Nothing wrong with that, and straight from the manual, but I was surprised that they were described as CLOBs. However, on looking at them they are actually NVARCHAR2(2000) – hence "CLOB-type". No issue there.

I have long thought that there were 3 benefits of having the extended value set in audit_trail:

1) Adds the DBID to the syslog file when using OS auditing for SYSDBA and SYSOPER activities. No real issue if you only have a single database on the server, but if running several databases it is really mandatory, otherwise you cannot tell the entries apart.

My research shows that this is no longer true from 11gR2 onwards – the DBID is always written to the syslog file now.

2) To capture the SQL binds and text of any statement run as SYSDBA. This is fundamentally the purpose of using a syslog file; otherwise the DBA can do bad things and delete the entries from the AUD$ table. Having the syslog file set to capture that information, and not be editable by anyone other than the root user, means that a record of all activity is kept.

Doing some testing, again prompted by Dom's blog, I found that it did not matter whether audit_trail was DB or DB,EXTENDED – all the activity and values carried out as SYSDBA were written to that file.

From the documentation

Auditing User SYS and Users Who Connect as SYSDBA and SYSOPER
You can fully audit sessions for users who connect as SYS, including all users connecting using the SYSDBA or SYSOPER privileges. This enables you to write the actions of administrative users to an operating system file, even if the AUDIT_TRAIL parameter is set to NONE, DB, or DB, EXTENDED. Writing the actions of administrator users to an operating system audit file is safer than writing to the SYS.AUD$ table, because administrative users can remove rows from this table that indicate their bad behavior.


3) To catch the activities of 'ordinary' users and record the values of their SQL statements, provided sufficient auditing has been enabled.

A simple example

User AUDIT_TEST owns a table TEST1

audit update table, delete table, insert table by audit_test by access;



begin
  for i in 1 .. 5 loop
    insert into test1 values ('Peter', i);
  end loop;
  commit;
end;
/




select action_name, sql_bind, sql_text from dba_audit_trail where username='AUDIT_TEST';

ACTION_NAME  SQL_BIND   SQL_TEXT
------------ ---------- ----------------------------------------
INSERT       #1(1):1    INSERT INTO TEST1 VALUES ('Peter',:B1 )
INSERT       #1(1):2    INSERT INTO TEST1 VALUES ('Peter',:B1 )
INSERT       #1(1):3    INSERT INTO TEST1 VALUES ('Peter',:B1 )
INSERT       #1(1):4    INSERT INTO TEST1 VALUES ('Peter',:B1 )
INSERT       #1(1):5    INSERT INTO TEST1 VALUES ('Peter',:B1 )
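The SQL_BIND column packs each bind into the form '#position(length):value' – so '#1(1):5' is bind 1, one character long, value 5. This small parser is an illustrative sketch (the format description is inferred from how the output above renders binds, not stated in the post):

```python
import re

# Each bind is rendered as #position(length):value, e.g. " #1(1):5".
BIND_RE = re.compile(r"#(\d+)\((\d+)\):(\S+)")

def parse_binds(sql_bind):
    """Return [(position, value), ...] from a SQL_BIND string."""
    return [(int(pos), value) for pos, _length, value in BIND_RE.findall(sql_bind)]
```

A statement with several binds simply concatenates the entries, so the parser returns one tuple per bind position.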

With the audit_trail parameter set to just DB we do not see the values used, which really makes the whole exercise of auditing rather pointless.



So in summary, 2 of my 3 reasons for enabling EXTENDED auditing have been blown out of the water. However, the remaining one is justification in itself and I see no reason why every production system should not have it enabled.

At my site, extended auditing writing to a syslog file is enabled by default as part of the build on every database we have. We have perhaps gone a little too far, as the DBAs do not have access even to read the file, never mind delete entries, but as I sit down with external auditors a couple of times a year I know they are always impressed that I can demonstrate that, even with SYSDBA privileges, all my activities are securely audited.

Posted in 11g new features, security | 2 Comments »

aud$ and ASSM – a good combination

Posted by John Hallas on March 6, 2014

We had an 8-node RAC cluster that was showing evidence of connections timing out. The following AWR report segments indicate a high level of login activity and blocking on the AUD$ table and its segments.




None of it was conclusive, but the kicker was the fact that the aud$ table was still in the SYSTEM tablespace, which uses manual segment space management, and therefore automatic segment space management (ASSM) was not coming into play.

Over a 24-hour period there were over 50K connections, all being audited, and the aud$ table was in the SYSTEM tablespace, which was manually managed and not running ASSM.

On all our systems we migrate the aud$ table to an ASSM managed tablespace (normally SYSAUX) after the build but this database had been delivered by a 3rd party and was not under our direct control.

I was pretty certain that moving that table would relieve the symptoms of contention on the aud$ segments, which were being exaggerated by the 8 RAC nodes.

The following test case indicates the differences. I have taken the code from a site which discusses freelist contention and ASSM.

Create a test table in TEST, a manually managed tablespace, run some throughput through it and monitor through AWR.

Create a new table in an auto managed tablespace, repeat the tests and compare results

drop table test;
create table test (
  x date,
  y char(255) default 'x'
)
storage (freelists 1)
tablespace test;

exec dbms_workload_repository.create_snapshot();

declare
  l_job number;
begin
  for i in 1 .. 15 loop
    dbms_job.submit( l_job, 'do_insert;' );
  end loop;
  commit;
end;
/

select substr(job,1,4) "job",
        substr(schema_user,1,10) "user",
        substr(to_char(last_date,' hh24:mi'),1,16) "last date",
        substr(to_char(next_date,' hh24:mi'),1,16) "next date",
        substr(broken,1,2) "b",
        substr(failures,1,6) "failed",
        substr(what,1,32) "command"
   from dba_jobs;

The AWR Top 5 Timed Foreground Events from the first run were:

                                                            Avg
                                                           wait   % DB
Event                                 Waits     Time(s)   (ms)   time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
buffer busy waits                 1,876,080       2,873      2   63.7 Concurrency
enq: HW - contention                227,951       1,091      5   24.2 Configuration
DB CPU                                              469          10.4
db file sequential read             118,194         108      1    2.4 User I/O
log file switch (checkpoint in           60          10    166     .2 Configuration

followed by the second run using the tablespace that was ASSM managed:

Top 5 Timed Foreground Events

                                                            Avg
                                                           wait   % DB
Event                                 Waits     Time(s)   (ms)   time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
DB CPU                                            1,696          24.4
log file switch (checkpoint in        1,775         938    528   13.5 Configuration
db file sequential read             724,504         630      1    9.1 User I/O
log file switch completion            2,344         434    185    6.3 Configuration
buffer busy waits                   987,073         260      0    3.7 Concurrency
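As a quick sanity check when comparing the two Top 5 tables, the average-wait column is just Time(s) divided by Waits, scaled to milliseconds. This tiny helper (illustrative only) reproduces the printed figures:

```python
def avg_wait_ms(waits, time_s):
    """Average wait in milliseconds, derived from AWR's totals columns."""
    return time_s / waits * 1000.0

# First run: buffer busy waits averaged ~1.5ms across 1.9M waits;
# after the move to ASSM the average drops below 1ms across 1M waits.
before = avg_wait_ms(1876080, 2873)
after = avg_wait_ms(987073, 260)
```

The point of the comparison: after the move to ASSM, buffer busy waits fell from 63.7% of DB time to 3.7%, and their average wait dropped to sub-millisecond.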

The overall result was that we had a similar number of logins, no busy segments on the aud$ table and no timeouts. Job done.

Posted in 11g new features, Oracle | 6 Comments »

Serious BUG alert – RMAN Duplicate overwrites datafiles at original location

Posted by John Hallas on February 11, 2014

Please be aware of the serious bug identified with RMAN duplicate of a database on the same node (affected versions are listed below).

Bug 13498382  ORA-1122 ORA-1208 RMAN Duplicate overwrites datafiles at original location despite db_create_file_dest

This note gives a brief overview of bug 13498382.
The content was last updated on: 07-FEB-2014


Product (Component) Oracle Server (Rdbms)
Range of versions believed to be affected Versions BELOW 12.2
Versions confirmed as being affected
Platforms affected Generic (all / most platforms affected)


The fix for 13498382 is first included in

Interim patches may be available for earlier versions.




New name for datafile created during recover may not be populated during the switch and catalog steps of duplicate command.

You can be affected by this issue if you execute the following steps with RMAN :

1. Take a backup from database using RMAN
    backup database;
    sql 'alter system switch logfile'

2. A datafile is added to the source database, or new tablespace is created
    create tablespace TC;
    sql 'alter system switch logfile'

3. RMAN duplicate is executed to another DB on the same host.

     connect target sys/pwd@DB1
     connect auxiliary sys/pwd
     run {
       set until sequence <seq#>;
       duplicate target database to DB2;
     }

4. The DB is recovered past the point in time the datafile was added

5. The datafile is created at the original location, instead of at DB_CREATE_FILE_DEST, overwriting the file at SOURCE

At SOURCE, we see the datafile going OFFLINE after a checkpoint

Read of datafile '+DG/db_unique_name/datafile/tc.1232.769779085' (fno 7) header failed with ORA-01208
Rereading datafile 7 header failed with ORA-01208
Errors in file /diag/rdbms/db_name/db_unique_name/trace/db_unique_name_ckpt_9604.trc:

ORA-63999: data file suffered media failure
ORA-01122: database file 7 failed verification check
ORA-01110: data file 7: '+DG/db_unique_name/datafile/tc.1232.769779085'
ORA-01208: data file is an old version - not accessing current version

 At TARGET / AUX, we see the datafile being created at original location

alter database recover logfile  '+DG/db_unique_name/archivelog/2011_12_13/thread_1_seq_934.1321.769779115'

Media Recovery Log +DG/db_unique_name/archivelog/2011_12_13/thread_1_seq_934.1321.769779115
File #7 added to control file as 'UNNAMED00007'. Originally created as: '+DG/db_unique_name/datafile/tc.1232.769779085'
Errors with log +DG/db_unique_name/archivelog/2011_12_13/thread_1_seq_934.1321.769779115
Recovery interrupted!

Posted in 11g new features | 2 Comments »

Invisible Indexes – a real world example and a bit of LogMiner

Posted by John Hallas on February 11, 2014

We had a requirement to perform a one-off update on a table of around 1.2M rows. There were 11K statements similar to the one below, with varying values and predicates, each updating anywhere from 20 to 500 rows. The potential elapsed time was around 44 hours due to the volume of full table scans involved.

The obvious thing to do was to create an index but this was production and it was urgent so no time to go through the full route to live test cycle.

This seemed a great opportunity to use an invisible index, the theory being that the optimiser would not use it for any of the other currently running work and only our session would see it.

This demo shows how it was tested and what the benefits were along with a couple of thoughts on invisible indexes.

I must admit I wish I could get a consistent method of showing explain plans, sometimes they come out lined up properly, other times they are a mess. This entry is somewhere in between.

First, let's run an example update statement to see what access path it uses:

SQL> Update JOHN.TEST_TABLE Set WK_WAC_RATE=0.5338 Where SKU_ITEM_KEY=4878 AND TXN_WK_KEY=20140127;

24 rows updated.

Elapsed: 00:00:00.46

| Id  | Operation            | Name                 | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
|   0 | UPDATE STATEMENT     |                      |    16 |   256 | 13696   (1)| 00:02:45 |       |       |
|   1 |  UPDATE              | TEST_TABLE |       |       |            |          |       |       |
|   2 |   PARTITION RANGE ALL|                      |    16 |   256 | 13696   (1)| 00:02:45 |     1 |   118 |
|*  3 |    TABLE ACCESS FULL | TEST_TABLE |    16 |   256 | 13696   (1)| 00:02:45 |     1 |   118 |

Create an index and re-run to see that the index will be used and the results are better – don't forget to gather stats.


execute dbms_stats.gather_table_stats('JOHN','TEST_TABLE',null,10,cascade=>true);

SQL> Update JOHN.TEST_TABLE Set WK_WAC_RATE=0.5338 Where SKU_ITEM_KEY=4878 AND TXN_WK_KEY=20140127;

24 rows updated.

Elapsed: 00:00:00.01

| Id  | Operation         | Name                 | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | UPDATE STATEMENT  |                      |    27 |   432 |    20   (0)| 00:00:01 |
|   1 |  UPDATE           | TEST_TABLE |       |       |            |          |
|*  2 |   INDEX RANGE SCAN| INDX_1               |    27 |   432 |     3   (0)| 00:00:01 |

Note the index can be made invisible from a command-line statement, but it does not seem possible when creating it using the OEM12 GUI.

 alter index indx_1 invisible;

SQL> select index_name, visibility from user_indexes;

INDEX_NAME                     VISIBILITY
------------------------------ ---------
INDX_1                         INVISIBLE

Run the update again to prove that the invisible index is not used by the optimizer:

SQL> Update JOHN.TEST_TABLE Set WK_WAC_RATE=0.5338 Where SKU_ITEM_KEY=4878 AND TXN_WK_KEY=20140127;

24 rows updated.

| Id  | Operation            | Name                 | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
|   0 | UPDATE STATEMENT     |                      |    16 |   256 | 13696   (1)| 00:02:45 |       |       |
|   1 |  UPDATE              | TEST_TABLE |       |       |            |          |       |       |
|   2 |   PARTITION RANGE ALL|                      |    16 |   256 | 13696   (1)| 00:02:45 |     1 |   118 |
|*  3 |    TABLE ACCESS FULL | TEST_TABLE |    16 |   256 | 13696   (1)| 00:02:45 |     1 |   118 |

Now alter the session so that only this session can take advantage of the invisible index:

SQL> alter session set optimizer_use_invisible_indexes = true;

Session altered.

SQL>  Update JOHN.TEST_TABLE Set WK_WAC_RATE=0.5338 Where SKU_ITEM_KEY=4878 AND TXN_WK_KEY=20140127;

24 rows updated.

| Id  | Operation         | Name                 | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | UPDATE STATEMENT  |                      |    27 |   432 |    20   (0)| 00:00:01 |
|   1 |  UPDATE           | TEST_TABLE |       |       |            |          |
|*  2 |   INDEX RANGE SCAN| INDX_1               |    27 |   432 |     3   (0)| 00:00:01 |

The code is running much more efficiently and yet we have not impacted the execution of any other SQL that may be running against this table, although the index will need maintaining on every insert, update and delete statement whether the index is visible or not.

Moving on to how you can tell whether an index is visible or not, and when it was made visible. USER_INDEXES will show the state of the index:

SQL> select index_name, visibility from user_indexes;

INDEX_NAME                     VISIBILITY
------------------------------ ---------
INDX_1                         INVISIBLE

How do you know when it was made invisible/visible though? Does the LAST_DDL_TIME tell you?

OBJECT_NAM OBJECT_TYPE         CREATED                  LAST_DDL                 VISIBILIT
---------- ------------------- ------------------------ ------------------------ ---------
INDX_1     INDEX               10-FEB-14 08:38          10-FEB-14 08:38          INVISIBLE

SQL> alter index indx_1 visible;

select object_name, object_type, to_char(created,'DD-MON-YY HH24:MI') created,
       to_char(last_ddl_time,'DD-MON-YY HH24:MI') last_ddl, visibility
from user_indexes u, user_objects o
where o.object_name = u.index_name
and index_name = 'INDX_1';

OBJECT_NAM OBJECT_TYPE         CREATED                  LAST_DDL                 VISIBILIT
---------- ------------------- ------------------------ ------------------------ ---------
INDX_1     INDEX               10-FEB-14 08:38          10-FEB-14 10:41          VISIBLE

So yes, if the last DDL on the index was to change its visibility then the time can be identified. However, what the DDL actually was cannot be identified, and an alternative option is to use LogMiner functionality.

Ensure supplemental logging is set, perform some actions and then mine the logs to see if you can identify them.




SQL> alter index john.indx_1 invisible;

Index altered.

SQL> alter index john.indx_1 visible;

Index altered.

SQL> alter index john.indx_1 invisible;

Index altered.

alter session set nls_date_format = 'DD-MON-YYYY HH24:MI:SS';

execute dbms_logmnr.start_logmnr( -
  starttime => '10-Feb-2014 12:00:00', -
  endtime   => '10-Feb-2014 12:34:00', -
  options   => dbms_logmnr.dict_from_online_catalog + dbms_logmnr.continuous_mine);

select operation, sql_redo, sql_undo from v$logmnr_contents
where seg_owner = 'JOHN' and seg_name = 'INDX_1';

select substr(sql_redo,1,100) sql_redo, to_char(timestamp,'DD-MON-RRRR HH24:MI:SS') timestamp
from v$logmnr_contents
where seg_owner = upper('JOHN')
and sql_redo like '%vis%';



alter index john.indx_1 invisible;              10-FEB-2014 12:32:01

alter index john.indx_1 visible;                  10-FEB-2014 12:32:19

alter index john.indx_1 invisible;              10-FEB-2014 12:32:24

This last section on identifying when the index visibility was changed came from an Oracle-L discussion, and the suggestion of log mining came from David Fitzjarrell. In fact it was because I had recently read that post that I got the idea to use an invisible index.

Posted in 11g new features, Oracle | 8 Comments »

New features and an OPTIMIZER_DYNAMIC_SAMPLING change

Posted by John Hallas on December 9, 2013

As of 27th August 2013, the final release of 11gR2 was made available – a new features document is available. I will give a quick bullet list of the new features and then discuss one very important one that is not mentioned.

  •  Oracle Data Redaction – provides a new ASO (cost £££) option to redact specified information, even at runtime, from within the database
  •  Trace File Analyzer and Collector, is a diagnostic collection utility to simplify diagnostic data collection on Oracle Clusterware, Oracle Grid Infrastructure  – looks well worth investigating
  •  RACcheck – The Oracle RAC Configuration Audit Tool – to perform regular health checks as well as pre- and post-upgrade best practices assessments. Only written for linux so needed adapting for HPUX and needs to have the root password – an issue for many DBAs
  •  Database Replay Support for Database Consolidation – run lots of scenarios- batch, online, monthly process all at the same time even though captured at different periods.
  •  Optimization for Flashback Data Archive History Tables – use the OPTIMIZE DATA clause when creating or altering a flashback data archive.

So the one that has not appeared in that list, probably because it is not a new feature as such, is an additional value for the init.ora parameter OPTIMIZER_DYNAMIC_SAMPLING. This comes into play when a table does not have any statistics and the parameter is enabled. The previous default setting of 2 meant that dynamic statistics were used if at least one table in the statement had no statistics, with the statistics based on a sample of 64 blocks; the value could range between 2 and 10, each increment doubling the number of blocks sampled. The new value of 11 means that the optimizer will gather dynamic statistics automatically whenever it deems it necessary, based on the number of blocks it thinks appropriate.
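Taking the doubling rule as described above – this is a sketch of that description; the documentation's exact per-level block counts differ for some levels – the sampled block count for levels 2 through 10 works out as:

```python
def sampled_blocks(level):
    """Blocks sampled at a given OPTIMIZER_DYNAMIC_SAMPLING level, per the
    doubling rule described in the text: level 2 samples 64 blocks and each
    step doubles. Level 11 is automatic and has no fixed count."""
    if not 2 <= level <= 10:
        raise ValueError("doubling rule covers levels 2-10 only")
    return 64 * 2 ** (level - 2)
```

So the default of 2 samples 64 blocks, level 3 samples 128, and by level 10 the sample has grown to 16,384 blocks.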

My testing has shown a couple of anomalies between the two versions it exists on.

Posted in 11g new features, 12c new features, Oracle | Leave a Comment »

Large CLOBs and attention to detail needed while reading Oracle Support notes.

Posted by John Hallas on October 3, 2013

This post was forwarded to me by Vitaly Kaminsky, who did work with me but has now bettered himself elsewhere. He writes:

I have recently been involved with performance tuning of a database layer for a major Dutch website which was preparing for the "crazy days" of sales on the 2nd to 5th of October.

The general setup was as follows:
2-node physical RAC cluster with fast storage layer running ORACLE 11gR2 SE and ASM.

The important bit above is the “SE” which stands for Standard Edition and implies no performance packs, no AWR, no SQL tuning and no partitioning.

The shop application makes heavy use of Oracle Text functionality and the purpose of the tuning was to ensure we can get 12000 orders per hour through the system. Each of the orders would create a single insert and numerous updates of a CLOB in XML format. This is actually the total info we managed to get from the vendor on what the application performance tuning should be focused on.

As expected, after the first test runs, it became apparent that the stumbling block was literally a block of CLOB storage. When there was enough padding space, the application ran fine and then, suddenly, the database would grind to a halt with "enq: HW – contention" waits.

Posted in 11g new features, Oracle | Leave a Comment »

SQL Plan Management – how to really utilize and not just implement

Posted by John Hallas on August 28, 2013

The title of this post is intentionally slightly unclear and hopefully it has intrigued people to view the post and, even better, add their comments.

SQL Plan Management has been around since 11g came out back in 2007. It does not require a tuning pack, so the DBMS_SPM package can be used without additional licensing, but if the SQL Tuning Advisor is used to generate new profiles via a tuning task then that does require a Tuning Pack licence.

There are plenty of articles available showing how to use SPM to manage a SQL statement and ensure it has the best execution plan. What I am intrigued by is how to use it on an ongoing basis, across a number of databases and across all standard (non ad-hoc) code – i.e. how to implement a site standard for the use of SPM which is applicable to the databases that have two dedicated DBAs monitoring activity continuously (type 1), the databases that we all have that pretty much run by themselves and need little maintenance (type 3), and the multiple systems that lie in between those two types (type 2).

A colleague, Steve Senior, has produced a flow chart of how we might deliver SPM across all systems, but the stumbling block is how we manage changes on an ongoing basis: plans delivered because changing statistics (derived from data changes) require new execution plans; new plans that will probably need to be evolved after code changes; plus the inclusion of totally new SQL statements, perhaps based on new tables which have been added to the schema.

Posted in 11g new features, Oracle | 2 Comments »

Rebuild of standby using incremental backup of primary

Posted by John Hallas on March 18, 2013

I have a long to-do list of things I want to test and one is rebuilding a standby using an incremental backup from primary. Then along comes a note from my ex-colleague Vitaly Kaminsky, who had recently been faced with the problem when a customer relocated two primary 2-node RACs and their single-node standby databases to a new location and just happened to start the standby databases in read-only mode. Vitaly tells the story:

As you may know, read-only mode will prevent any redo logs being applied to the standby database, but on the surface everything looks OK – no errors, and the MRP0 process is running and showing "applying log" in v$managed_standby.

The only problem is – the recovery is “stuck” on the last log the database was trying to apply before it was opened in read-only mode.

Unfortunately, the customer did not notice the omission for over 2 weeks, and by the time I had a chance to look at the environments there were about 50G of redo logs accumulated for each, and some of them were missing and/or corrupt, which excluded the possibility of SCPing the logs over to the standby server and registering them with the standby databases.

Another factor which caused the lack of attention to the standby databases falling behind was the absence of any error messages in the alert logs – every single log was shown as shipped and received.

In a case like this, the only option is to rebuild the standby database, and in the past I have done it using the traditional RMAN duplicate for standby routine. However, in this particular case I had 2 databases to rebuild – one small and one large. The network between the primary cluster and standby was slow as well.

For the small database I decided to use the Grid Control GUI-based wizard for creating the standby database; this process is quite straightforward and described in the documentation. For the large one, however, duplicating the database using RMAN would be too slow, there may have been performance degradation during the run, and the maintenance window was too short for an out-of-hours run.

This was a perfect case to try “incremental backup” approach. This method is described in a number of sources (if you Google it) but none of the “vanilla” cases worked for me.

I will not be listing the names and detailed output due to the production nature of the work – just the list of steps.

So, this is what I did at the end of the day:


Primary database can be single node or RAC and running OK.
No downtime of Primary is required.
All Dataguard settings are intact


1. Get the latest SCN from the standby:

 select to_char(current_scn) from v$database;

2. Create an incremental backup on Primary of all the changes since the SCN on standby:

[oracle@primary backup]$ rman target /
connected to target database: PRIMARY (DBID=720063942)

RMAN> run

2> {

3> allocate channel d1 type disk;

4> allocate channel d2 type disk;

5> backup incremental from scn 10615562421 database format

6> '/tmp/backup/primary_%U';

7> release channel d1;

8> release channel d2;
9> }
3. Create a copy of the control file on Primary:

 alter database create standby controlfile as '/tmp/backup/stby.ctl';
4. SCP the backup files and the standby control file to the standby server. A little tip: if you copy the backup files to a directory with the same name (like /tmp/backup here), your controlfile will know about them and you can bypass the registration bit later on.
5. The next step is to replace the standby control file with the new one. It may sound simple, but this proved to be the trickiest part, due to the fact that the standby controlfile is OMF and in ASM. You will need to use RMAN for the restore operation:

- Switch the database to nomount, then:

restore controlfile from '/tmp/backup/stby.ctl';

- Mount the database.

At this point you have a controlfile with information about the files as they are on the Primary side, so the next step is to register everything we have on the Standby side:

catalog start with '+data/standby/';

Check the output and answer YES to register any reported standby files.

- Shutdown immediate your standby instance.

 RMAN> switch database to copy;

RMAN> report schema;

At this stage you should have a nice and clean list of the actual standby files.


Now we are ready to apply our incremental backup to bring the standby in line with Primary:

RMAN> recover database noredo;

 Because the online redo logs are lost, you must specify the NOREDO option in the RECOVER command.

You must also specify NOREDO if the online logs are available but the redo cannot be applied to the incrementals.

If you do not specify NOREDO, then RMAN searches for redo logs after applying the incremental backup, and issues an error message when it does not find them.

When the recovery completes, you may start the managed recovery process again:

SQL> alter database recover managed standby database using current logfile disconnect;

Provided all FAL settings are correct, your managed recovery will pick up all logs generated on primary since the incremental backup and you will have a fully synchronised standby again.

Posted in 11g new features, Oracle | 4 Comments »

