Oracle DBA – A lifelong learning experience

Posts Tagged ‘dataguard’

Standby databases – a few gotchas

Posted by John Hallas on October 27, 2011

All of these items refer to Dataguard standby databases on 11GR1, and the Active Dataguard licensing comment is applicable to 11GR2 as well.

LGWR: Insufficient standby redo logfiles to archive

The first is an error I see on a few databases and had assumed was caused by having insufficient standby redo log files. The recommendation is that there should be at least one more standby redo log group than there are online redo log groups, so that redo apply is not delayed on the standby – something that would be critical in maximum protection mode, where the primary waits until the redo has been written to a standby redo log remotely.

Wed Oct 19 11:58:13 2011
LGWR: Insufficient standby redo logfiles to archive thread 1 sequence 47505
LGWR: Standby redo logfile selected for thread 1 sequence 47505 for destination LOG_ARCHIVE_DEST_2
Thread 1 advanced to log sequence 47505 (LGWR switch)
Current log# 20 seq# 47505 mem# 0: +DATA/SID/onlinelog/group_20.2083.735986363
Current log# 20 seq# 47505 mem# 1: +FRA/SID/onlinelog/group_20.1668.735986365

MyOracleSupport document 798361.1 – "LGWR: Insufficient standby redo logfiles to archive" Messages in Primary Database – states:

Bug 8358103 – will be fixed in 10.2.0.5 and 11.1.0.8. The "Insufficient standby redo logfiles" message is misleading as it is followed by "Standby redo logfile selected", which indicates a standby redo log was selected and is being used. You can safely ignore these messages. Note that 11.1.0.8 is unlikely to ever appear, as 11.1.0.7 is the terminal release.
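To check whether there are enough standby redo logs, and to add one if not, something along these lines can be used (a sketch only – the thread number and size are illustrative, and the size should match your online redo logs):

    select 'ONLINE'  log_type, count(*) group_count from v$log
    union all
    select 'STANDBY' log_type, count(*) group_count from v$standby_log;

    alter database add standby logfile thread 1 size 512m;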

Lag apply alert on standby database

The second is something I came across yesterday after we had performed a switchover from primary to standby. Normally we set the apply lag metric in OEM to alert us if the standby gets more than 900 seconds behind. This is set in OEM for each database (unless you want to push out a modified monitoring template).

Database > Metric and Policy Settings > the first metric is Apply Lag (seconds). To see the graph, go to All Metrics > Data Guard Performance > Apply Lag (Seconds).

The alert is set on the standby, and when a switchover is invoked (either via the DG broker or manually, as we did it) you would expect the alert to still be in place. However, it is removed and has to be re-applied.

Block change tracking on a standby requires active dataguard license

From 11GR1 onwards, running a standby database with block change tracking enabled (which makes incremental backups much faster) requires an Active Dataguard license to be in place. Whilst I think this policy is very mean spirited, especially as a license was not required in a similar situation on 10G, it is a requirement and should not be ignored.
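To check whether a standby is using block change tracking, and to turn it off if the license is not in place, something like the following can be run on the standby (a sketch only):

    select status, filename from v$block_change_tracking;

    alter database disable block change tracking;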

Hope these heads-ups are useful.

Posted in Oracle | Tagged: , , , , , , , | Leave a Comment »

Using a service name with load balancing and standby databases

Posted by John Hallas on January 26, 2010

This one is for Coskan

At the UKOUG SIG last week when I was presenting, Coskan asked a question ‘do you use RAC services’. I prevaricated a bit because I thought that there must be more to the question than a simple yes or no.

Anyway, the answer is yes and I will post a trigger that we use when we switch from primary to a Dataguard standby.

Posted in Oracle | Tagged: , , , | 5 Comments »

Configuring Dataguard Broker in 11G

Posted by John Hallas on October 12, 2009

DataGuard Broken?

A guest blog from colleague Simon Ryan

Last year we did some sandpit work on implementing DataGuard.  This was 10g on a Windows platform.  When it came to the broker we didn’t have much luck.  At the UKOUG Michael Moller referred to the Broker as DataGuard Broken (DataGuard 11g New Features).  From the audience reaction this seemed to be a popular perception.

Since then we’ve successfully implemented DataGuard on most of our Production databases (both 10g and 11g) on HP-UX.  However, we’ve still not embraced the Broker!  The ‘broken’ tag still taints its reputation.  A recent visit by Oracle attempted to assure us that the Broker with 11g is much improved.

The following is a basic guide to implementing the broker and our findings:

Our Setup

  • Primary – Two node RAC cluster (XXXPRD1A)
  • Physical Standby – Single node (XXXPRD1B)
  • HP-UX 11.31 Itanium
  • Database using ASM (both 11.1.0.7)
  • DataGuard running in Max Availability with Real Time Apply

Configuration

Add the following entries to tnsnames.ora on all instances:

 

XXXPRD1A_DGMGRL =
  (DESCRIPTION=
    (ADDRESS=(PROTOCOL=tcp)(HOST=xxxem03-vip)(PORT=1526))
    (CONNECT_DATA=
      (SERVICE_NAME=XXXPRD1A_DGMGRL)
    )
  )

XXXPRD2A_DGMGRL =
  (DESCRIPTION=
    (ADDRESS=(PROTOCOL=tcp)(HOST=xxxem04-vip)(PORT=1526))
    (CONNECT_DATA=
      (SERVICE_NAME=XXXPRD2A_DGMGRL)
    )
  )

XXXPRD1B_DGMGRL =
  (DESCRIPTION=
    (ADDRESS=(PROTOCOL=tcp)(HOST=xxxem01-vip)(PORT=1526))
    (CONNECT_DATA=
      (SERVICE_NAME=XXXPRD1B_DGMGRL)
    )
  )

Add the following entry to SID_LIST of your listener in listener.ora on all instances:

 

(SID_DESC =
  (ORACLE_HOME = /app/oracle/product/11.1.0/db_1)
  (SID_NAME = XXXPRDxx)
  (GLOBAL_DBNAME = XXXPRDxx_DGMGRL)
)

 Stop/start listener.

Test that you can now connect:

  sqlplus sys@XXXPRD2A_dgmgrl as sysdba

  

Make the following parameter changes:

 

XXXPRD1A:

  alter system set dg_broker_start=true scope=spfile;
  alter system set dg_broker_config_file1='+DATA/XXXPRD1A/broker1.dat' scope=spfile;
  alter system set dg_broker_config_file2='+FRA/XXXPRD1A/broker2.dat' scope=spfile;

XXXPRD1B:

  alter system set dg_broker_start=true scope=spfile;
  alter system set dg_broker_config_file1='+DATA/XXXPRD1B/broker1.dat' scope=spfile;
  alter system set dg_broker_config_file2='+FRA/XXXPRD1B/broker2.dat' scope=spfile;

Stop/Start the databases:

Stop XXXPRD1A:

  srvctl stop database -d XXXPRD1A -o immediate

Stop XXXPRD1B:

  shutdown immediate

Start XXXPRD1B:

  startup mount

Start XXXPRD1A:

  srvctl start database -d XXXPRD1A

Check for the broker process:

XXXPRD1A:

  ps -ef | grep ora_dmon_XXXPRD1A

XXXPRD1B:

  ps -ef | grep ora_dmon_XXXPRD1B

 

Create DG Broker Configuration:

Primary:

  dgmgrl
  connect sys
  (enter the sys password)

  create configuration XXXPRD1A_DGCONFIG as primary database is "XXXPRD1A" connect identifier is XXXPRD1A;
  Configuration "XXXPRD1A_dgconfig" created with primary database "XXXPRD1A"

  add database "XXXPRD1B" as connect identifier is XXXPRD1B maintained as physical;
  Database "XXXPRD1B" added

  enable configuration;
  Enabled.

  show configuration;

 

Configuration
  Name:                XXXPRD1A_dgconfig
  Enabled:             YES
  Protection Mode:     MaxAvailability
  Databases:
    XXXPRD1A - Primary database
    XXXPRD1B - Physical standby database

Fast-Start Failover: DISABLED

Current status for "XXXPRD1A_dgconfig":
SUCCESS

 

Check the broker files exist in ASM on both the primary and the standby.
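One way to check, assuming asmcmd is available on each node (the paths are those used for the dg_broker_config_file parameters above; use the XXXPRD1B equivalents on the standby):

  asmcmd ls -l +DATA/XXXPRD1A/broker1.dat
  asmcmd ls -l +FRA/XXXPRD1A/broker2.dat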

 

Show Database:

 Primary: 

show database verbose "XXXPRD1A";

 

Database
  Name:            XXXPRD1A
  Role:            PRIMARY
  Enabled:         YES
  Intended State:  TRANSPORT-ON
  Instance(s):
    XXXPRD1A
    XXXPRD2A

  Properties:
    DGConnectIdentifier             = 'XXXPRD1A'
    ObserverConnectIdentifier       = ''
    LogXptMode                      = 'SYNC'
    DelayMins                       = '0'
    Binding                         = 'OPTIONAL'
    MaxFailure                      = '0'
    MaxConnections                  = '1'
    ReopenSecs                      = '300'
    NetTimeout                      = '30'
    RedoCompression                 = 'DISABLE'
    LogShipping                     = 'ON'
    PreferredApplyInstance          = ''
    ApplyInstanceTimeout            = '0'
    ApplyParallel                   = 'AUTO'
    StandbyFileManagement           = 'AUTO'
    ArchiveLagTarget                = '0'
    LogArchiveMaxProcesses          = '10'
    LogArchiveMinSucceedDest        = '1'
    DbFileNameConvert               = 'XXXPRD1B, XXXPRD1A'
    LogFileNameConvert              = 'XXXPRD1B, XXXPRD1A'
    FastStartFailoverTarget         = ''
    StatusReport                    = '(monitor)'
    InconsistentProperties          = '(monitor)'
    InconsistentLogXptProps         = '(monitor)'
    SendQEntries                    = '(monitor)'
    LogXptStatus                    = '(monitor)'
    RecvQEntries                    = '(monitor)'
    HostName(*)
    SidName(*)
    StaticConnectIdentifier(*)
    StandbyArchiveLocation(*)
    AlternateLocation(*)
    LogArchiveTrace(*)
    LogArchiveFormat(*)
    LatestLog(*)
    TopWaitEvents(*)
    (*) - Please check specific instance for the property value

Current status for "XXXPRD1A":
SUCCESS

 

Standby:

show database verbose "XXXPRD1B";

 

Database
  Name:            XXXPRD1B
  Role:            PHYSICAL STANDBY
  Enabled:         YES
  Intended State:  APPLY-ON
  Instance(s):
    XXXPRD1B

  Properties:
    DGConnectIdentifier             = 'XXXPRD1B'
    ObserverConnectIdentifier       = ''
    LogXptMode                      = 'SYNC'
    DelayMins                       = '0'
    Binding                         = 'OPTIONAL'
    MaxFailure                      = '0'
    MaxConnections                  = '1'
    ReopenSecs                      = '60'
    NetTimeout                      = '30'
    RedoCompression                 = 'DISABLE'
    LogShipping                     = 'ON'
    PreferredApplyInstance          = ''
    ApplyInstanceTimeout            = '0'
    ApplyParallel                   = 'AUTO'
    StandbyFileManagement           = 'AUTO'
    ArchiveLagTarget                = '0'
    LogArchiveMaxProcesses          = '10'
    LogArchiveMinSucceedDest        = '1'
    DbFileNameConvert               = 'XXXPRD1A, XXXPRD1B'
    LogFileNameConvert              = 'XXXPRD1A, XXXPRD1B'
    FastStartFailoverTarget         = ''
    StatusReport                    = '(monitor)'
    InconsistentProperties          = '(monitor)'
    InconsistentLogXptProps         = '(monitor)'
    SendQEntries                    = '(monitor)'
    LogXptStatus                    = '(monitor)'
    RecvQEntries                    = '(monitor)'
    HostName                        = 'xxxem01'
    SidName                         = 'XXXPRD1B'
    StaticConnectIdentifier         = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=xxxem01)(PORT=1526))(CONNECT_DATA=(SERVICE_NAME=XXXPRD1B_DGMGRL)(INSTANCE_NAME=XXXPRD1B)(SERVER=DEDICATED)))'
    StandbyArchiveLocation          = 'USE_DB_RECOVERY_FILE_DEST'
    AlternateLocation               = ''
    LogArchiveTrace                 = '0'
    LogArchiveFormat                = '%t_%s_%r.dbf'
    LatestLog                       = '(monitor)'
    TopWaitEvents                   = '(monitor)'

Current status for "XXXPRD1B":
SUCCESS

 Ensure logs are still being transported and applied

All still works!
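One quick way to confirm this on the standby, for example, is to check the recovery and transport processes:

  select process, status, thread#, sequence#
  from   v$managed_standby
  where  process in ('MRP0','RFS');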

 

What does it give us?

  • Improved monitoring in OEM.
  • Simpler administration via DGMGRL or OEM, e.g. changing the Protection Mode.
  • Easy switchover/failover
    • DGMGRL: switchover to "XXXPRD1A" (see the sketch below this list)
    • Click of a button in OEM.
  • Fast-Start Failover (with an observer).
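As an illustration of the DGMGRL route, a switchover session looks something like this (a sketch only – supply the SYS password and name whichever database you are switching to):

  dgmgrl
  DGMGRL> connect sys/<sys_password>@XXXPRD1A_DGMGRL
  DGMGRL> switchover to "XXXPRD1B";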

 

Conclusions

The broker configuration was relatively painless and I have successfully performed a switchover via both DGMGRL and OEM.  I've also used the various OEM functions without issue.  I'm uneasy about performing such tasks via OEM, but then most DBAs prefer to be at the helm via the command line rather than clicking a button (that's too easy!).

If you plan to use DataGuard Broker then embrace it and make sure you don't alter parameters behind its back – it does that for you and it doesn't like you meddling!  I also think it's a good idea to set up DataGuard initially without the broker and then add the broker configuration once you're happy that DataGuard is working.

So what's made the difference since our experience last year?  UNIX v Windows (maybe), 10g v 11g (probably).  A big factor would also have to be our experience level.  In the last year we've increased our DataGuard count from zero to ten, so we are now much more comfortable in the DataGuard world.

When a sand-pit database becomes available I'd like to set up the observer and test Fast-Start Failover.  I'm inclined to remove the 'broken' tag but I'll reserve judgment until I've tested the remaining functions.

Simon Ryan

Posted in 11g new features, Oracle | Tagged: , , , , , , | 6 Comments »

Managing dataguard – monitoring scripts

Posted by John Hallas on July 16, 2009

I am providing three scripts which I find useful for ensuring that the standby database is keeping up with the primary database in a Dataguard setup.

@last

LOGS             TIME
---------------- ------------------
Last applied   :  16-JUL-09:09:24:16
Last received :  16-JUL-09:09:28:36
    select 'Last applied  : ' Logs, to_char(next_time,'DD-MON-YY:HH24:MI:SS') Time
    from v$archived_log
    where sequence# = (select max(sequence#) from v$archived_log where applied='YES')
    union
    select 'Last received : ' Logs, to_char(next_time,'DD-MON-YY:HH24:MI:SS') Time
    from v$archived_log
   where sequence# = (select max(sequence#) from v$archived_log);

@dg_stats

NAME                   VALUE                  UNIT
---------------------- ---------------------- -----------------------------------
apply finish time      +00 00:02:07.2         day(2) to second(1) interval
apply lag              +00 00:01:59           day(2) to second(0) interval
estimated startup time 16                     second
standby has been open  N
transport lag          +00 00:00:00           day(2) to second(0) interval

                                              Time Computed: 16-JUL-2009 09:33:16

   

select
    NAME Name,
    VALUE Value,
    UNIT Unit
    from v$dataguard_stats
    union
    select null,null,' ' from dual
    union
    select null,null,'Time Computed: '||MIN(TIME_COMPUTED)
   from v$dataguard_stats;

A final option is to use v$standby_log

@last_redo

Redo onsite
--------------------
16-JUL-2009 09:42:44

   

select to_char(max(last_time),'DD-MON-YYYY HH24:MI:SS') "Redo onsite"
     from v$standby_log;

None of them is that clever, but all of them are useful.  Thanks to colleague Simon Ryan for pulling them together.

Posted in Oracle | Tagged: , , , , , | Leave a Comment »

How not to set up remote log shipping – Dataguard and Streams

Posted by John Hallas on April 11, 2008

I was trying to set up a Streams archived-log downstream capture between two 11.1.0.6 databases and I was having problems getting the archive log files transported over to the target database, where they will be read and applied.

The first step was to change log_archive_dest to use log_archive_dest_1 and log_archive_dest_2:

log_archive_dest_1 = 'location=/u00/oradata/dbs1/archive'

log_archive_dest_2 = 'service=dbs2 noregister template=/export3/u01/oradata/dbs1/archive/%t_%s_%r.dbf'

I ensured that the password file is set up on both sides, that the sys password is the same and that remote_login_passwordfile is set to EXCLUSIVE.

In V$ARCHIVE_DEST I see an ORA-16191 error:

Primary log shipping client not logged on standby

Cause:  An attempt to ship redo to standby without logging on to standby or with invalid user credentials.

Action: Check that primary and standby are using password files and that both primary and standby have the same SYS password. Restart primary and/or standby after ensuring that password file is accessible and REMOTE_LOGIN_PASSWORDFILE initialization parameter is set to SHARED or EXCLUSIVE.
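A simple way to see the error against the remote destination on the primary, for example:

  select dest_id, status, error
  from   v$archive_dest
  where  dest_id = 2;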

I don’t think I need to make any changes on the target database to actually get the files across as I am not trying to read them as yet.

The service name works and I can connect remotely and I have re-created the password file as well

April Sims offered the following advice

I have encountered this in a DATA GUARD environment as well, see fix below. This also fixed problems with Grid Control in 11g.

Data Collected

==============

ORA-16191: Primary log shipping client not logged on standby

Support Update

==============

The issue seems to be because of password mismatch between primary and standby.

The password file has to have identical authentication info for both primary and standby

If you change or create the password file using orapwd for SYS, it will not work in 11G.

Action Plan

===========

Turn off the case sensitivity option by changing init.ora parameter SEC_CASE_SENSITIVE_LOGON=FALSE.

Create password files on both servers using the same password, passing ignorecase=Y to the orapwd utility (see the example below).

Make sure you use identical options during password file creation for both primary and standby.
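For example, something along these lines on each server (the file name and password here are placeholders – the file name should match the local instance, i.e. orapw<SID>):

  orapwd file=$ORACLE_HOME/dbs/orapwdbs1 password=<sys_password> ignorecase=y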

However, I was still struggling with an ORA-1031 permissions problem as well as the ORA-16191 error.

I finally resolved the issue by taking the following actions

I amended my source database spfile to have the following entries:

LOG_ARCHIVE_DEST_2='SERVICE=DBS2.NET ASYNC NOREGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
                    TEMPLATE=/usr/oracle/log_for_dbs1/dbs1_arch_%t_%s_%r.log
                    DB_UNIQUE_NAME=dbs2'        – the DB_UNIQUE_NAME line was new

LOG_ARCHIVE_CONFIG='DG_CONFIG=(dbs1,dbs2)'      – new entry, using the DB_UNIQUE_NAME of both databases

The target database had an spfile with the following lines:

*.LOG_ARCHIVE_DEST_2='LOCATION=/export3/u01/oradata/dbs1/archive MANDATORY VALID_FOR=(STANDBY_LOGFILE,PRIMARY_ROLE)'

LOG_ARCHIVE_CONFIG='DG_CONFIG=(dbs1,dbs2)'      – new entry, using the DB_UNIQUE_NAME of both databases

I still got the ORA-1031 error, so I rebuilt the remote password file again and bounced the databases, and everything sprang into life.

I think the key thing is to ensure that you can connect as sysdba from each database to the other. If not, create or recreate the password file, bounce the databases and try again. Once that is working, it is time to start changing the spfile options to enable the second archive log destination.
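For example (assuming the TNS aliases dbs1 and dbs2 resolve on both servers):

  On the dbs1 server:   sqlplus sys@dbs2 as sysdba
  On the dbs2 server:   sqlplus sys@dbs1 as sysdba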

Note that the REMOTE_ARCHIVE_ENABLE parameter from 10g has been deprecated in 11g and replaced by LOG_ARCHIVE_CONFIG.
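If you prefer to set it dynamically rather than editing the spfile entries shown above, something like this should work, using the unique names from this example:

  alter system set log_archive_config='DG_CONFIG=(dbs1,dbs2)' scope=both;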

Posted in Oracle | Tagged: , , , , , , , | 4 Comments »

 