Oracle9i Data Guard Configuration Example - (Physical, Maximum Performance Mode)

by Jeff Hunter, Sr. Database Administrator
Last modified on: 24-Aug-2005


Contents

  1. Overview
  2. Introduction to Data Guard
  3. Hardware and OS Configuration
  4. Configuring Nodes for Remote Access
  5. Configuring the Primary Database
  6. Configuring the Standby Database
  7. Start Remote Archiving
  8. Verifying the Physical Standby Database
  9. Database Maintenance (Switchover, Failover, etc)


Overview

Oracle Data Guard (known as Oracle Standby Database prior to Oracle9i) provides organizations with high availability, data protection, and disaster recovery for enterprise databases. Oracle Data Guard includes a comprehensive set of services for creating, maintaining, managing, and monitoring one or more standby databases, enabling enterprise data systems to survive both data corruption and major disasters.

This article provides detailed instructions and notes for creating and configuring a physical standby database from a primary database in Oracle9i Release 2 (9.2). To keep the article simple, I will be using a Protection Mode of Maximum Performance. (Maximum Performance is the default protection mode.) All configuration parameters related to the Oracle instance and networking will be discussed as well as how to place the standby database in Managed Recovery Mode.



Introduction to Data Guard

The Standby Database feature of Oracle was first introduced in Oracle Release 7.3. Since then, Oracle has added many new features to the standby database option, making it an excellent choice for your high availability requirements.

With the introduction of Oracle9i, Oracle changed the name of Standby Database to Oracle Data Guard. The Data Guard software in Oracle9i is used to maintain a standby database: a real-time copy of a primary database. When the standby database site is hosted in a different geographical location than the primary site, it provides an excellent High Availability (HA) solution. When configuring a standby database, the DBA should attempt to keep the primary and standby database sites identical and keep the physical location of the production database transparent to the end user. This allows for easy switchover scenarios for both planned and unplanned outages. When the secondary (standby) site is identical to the primary site, it also allows predictable performance and response time after failing over (or switching over) from the primary site.

Terms or Options to know
These can be found in the Oracle documentation in Chapter 5 of the Data Guard Concepts and Administration Manual.
• AFFIRM ensures that archived redo logs are written to disk, on the primary or the standby, before the archival operation is considered complete.
• MANDATORY ensures that online redo logs are not overwritten until they have been successfully archived. This should probably apply only to the primary database.
• REOPEN=30 means there will be a 30-second delay before the ARCn and/or LGWR processes try again on a MANDATORY destination that failed.
• DELAY (specified in minutes) does not delay the copy of an archived log file to the standby server; it delays the application of the redo on the standby after the archived log has been copied there. It will not help primary database performance.
• Using ARCH instead of LGWR for a second standby database may help primary database performance, but smaller log files would probably be required. SYNC=PARALLEL applies to LGWR only. With ARCH, redo is copied and applied at log switch; with LGWR, redo entries are copied to a standby archive log as they are generated and applied only at log switch. LGWR is more efficient since it writes redo entries to all standby databases at once, but a primary with two standby databases could possibly (though unlikely) suffer a performance hit. Additionally, multiple archiver processes can be created on the primary database: increase the value of the LOG_ARCHIVE_MAX_PROCESSES parameter to start additional archiver processes. The default on my machine appears to be 2 and not 1 as stated in the manuals, probably because I have two standby databases.
• The ARCHIVE_LAG_TARGET parameter can be used to increase the frequency of log switches, thus sending less data to the standby databases more often.
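Both of the last two parameters can be adjusted dynamically on the primary database. The values below (four archiver processes, a 15-minute lag target in seconds) are illustrative only, not recommendations:

```
SQL> alter system set log_archive_max_processes=4 scope=both;

SQL> alter system set archive_lag_target=900 scope=both;
```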

Check for Unsupported Objects
Now let's check for objects and attributes which are unsupported on a logical standby database. (These checks matter only if you later plan to create a logical standby; they are included here for completeness.) Application objects such as tables could be a problem.
SELECT * FROM DBA_LOGSTDBY_UNSUPPORTED ORDER BY owner,table_name,column_name;

Now check for missing primary keys. Application tables without unique primary keys will require them, as rows will not be identifiable in the logical standby database for update by SQL Apply. Drop any objects listed or create primary keys for them.
SELECT OWNER, TABLE_NAME, BAD_COLUMN FROM DBA_LOGSTDBY_NOT_UNIQUE;


Setting the Level of Protection
This is achieved by using the following command syntax executed on the primary database.

ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE
{ PROTECTION | AVAILABILITY | PERFORMANCE };

The protection mode can be found by executing this query. PERFORMANCE is the default.
SELECT name, protection_mode, protection_level FROM v$database;

NAME      PROTECTION_MODE      PROTECTION_LEVEL
--------- -------------------- --------------------
STBY      MAXIMUM PERFORMANCE  MAXIMUM PERFORMANCE

• No Data Loss. The PROTECTION mode applies only to physical standby databases, requires LGWR SYNC transport, and will shut down the primary database if no standby database can be written to.
• Minimal Data Loss. The AVAILABILITY mode prevents a transaction from committing on the primary until all redo entries are written to at least one standby database. SYNC transport is required, and this option is available to both logical and physical standby databases. Unlike PROTECTION mode, which shuts down the primary database if it cannot pass redo entries to the standby, this mode simply lowers the protection level to PERFORMANCE until the error is corrected.
• Potential Data Loss. PERFORMANCE mode is the default setting and is available for both physical and logical standby databases. A transaction commits on the primary before all redo entries are written to any standby database.

To ensure that minimal data loss will be encountered, execute this command on the primary database. The database must be mounted in exclusive mode to execute this command.
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;



Configuration

Now, let's take a look at the Oracle database configuration and parameters we will be using for our physical standby database configuration.

Primary Database
Oracle Release: Oracle9i Release 2 - (9.2.0.6.0)
Machine Name: vmlinux3.idevelopment.info
Operating System: Red Hat Linux 3 - (CentOS 3.4)
Oracle SID: TESTDB
Instance Service Names: TESTDB.IDEVELOPMENT.INFO
TNS Service Name: TESTDB_VMLINUX3.IDEVELOPMENT.INFO
Standby Database
Oracle Release: Oracle9i Release 2 - (9.2.0.6.0)
Machine Name: vmlinux4.idevelopment.info
Operating System: Red Hat Linux 3 - (CentOS 3.4)
Oracle SID: TESTDB
Instance Service Names: TESTDB.IDEVELOPMENT.INFO
TNS Service Name: TESTDB_VMLINUX4.IDEVELOPMENT.INFO





Configuring Nodes for Remote Access
One other note about my configuration. I have both of my Linux servers configured with an .rhosts file for the oracle user in order to allow the use of the r* commands (like rcp). This allows me to expedite the initial copying of database files from the primary host to the standby host by not requiring me to manually login (FTP) with a username and password. Although it is only necessary to configure the standby database server with an .rhosts file (since it is the standby database server that needs to identify which servers will be trusted), I like to configure both nodes for remote access.

The rsh daemon on the standby database server validates UNIX users using the /etc/hosts.equiv file or the .rhosts file found in the user's (oracle's) home directory.

The use of rcp (and any other of the r* commands) is not required for normal Data Guard operation. However rcp does expedite the initial copying of database files from the primary host to the standby host! After copying all database files from the primary database server to the standby database server, this feature can be disabled.

Before configuring the .rhosts file, we must first make sure that we have the rsh RPMs installed on each node in the Data Guard configuration:

# rpm -q rsh rsh-server
rsh-0.17-17
rsh-server-0.17-17
From the above, we can see that the rsh and rsh-server packages are installed.

If rsh is not installed, run the following command from the CD where the RPM is located:
# su -
# rpm -ivh rsh-0.17-17.i386.rpm rsh-server-0.17-17.i386.rpm

To enable the "rsh" service, the "disable" attribute in the /etc/xinetd.d/rsh file must be set to "no" and xinetd must be reloaded. This can be done by running the following commands on both nodes in the Data Guard configuration:

# su -
# chkconfig rsh on
# chkconfig rlogin on
# chkconfig rsync on
# chkconfig rexec on
# service xinetd reload
Reloading configuration: [ OK ]

Here is a copy of the .rhosts file I have configured on both of my Linux servers for the oracle user account. This file should reside in the $HOME directory for the oracle user account:

Example .rhosts File
+vmlinux3.idevelopment.info oracle
+vmlinux4.idevelopment.info oracle

After configuring the .rhosts file on both computers, I like to perform a simple test from each machine to ensure that the configuration is valid:

FROM vmlinux3 MACHINE

$ id
uid=173(oracle) gid=115(dba) groups=115(dba)

$ rsh vmlinux4 hostname
vmlinux4


FROM vmlinux4 MACHINE

$ id
uid=173(oracle) gid=115(dba) groups=115(dba)

$ rsh vmlinux3 hostname
vmlinux3


Configuring the Primary Database
Let's start by configuring and preparing our primary database.

Although not required, I have the primary database configured with an SPFILE instead of the traditional text initialization parameter file.

  1. Enable Archiving and Define a Local Archiving Destination

    The primary database must be in archivelog mode and configured for automatic archiving, and you must have at least one local archiving destination defined. To verify that the database meets all of these requirements, you can use the following command:

    SQL> archive log list
    Database log mode              Archive Mode
    Automatic archival             Enabled
    Archive destination            /orabackup/archive/TESTDB
    Oldest online log sequence     3
    Next log sequence to archive   5
    Current log sequence           5

    In the output above, we can see that archive log mode is enabled and that my destination for locally archived redo logs is /orabackup/archive/TESTDB. If it turns out that archive log mode is not enabled for your primary database, you can enable it as follows. First, define the following instance parameters:
    log_archive_dest_1        = 'location=/orabackup/archive/TESTDB MANDATORY'
    log_archive_dest_state_1 = 'enable'
    log_archive_format = 'arch_t%t_s%s.dbf'
    log_archive_start = true
    Next, place the primary database in archive log mode:
    SQL> shutdown immediate;
    SQL> startup mount;
    SQL> alter database archivelog;
    SQL> alter database open;

  2. Enable Forced Logging

    The following step is optional but highly recommended, as it can save the DBA considerable time when disaster recovery needs to be implemented and operations have been performed with NOLOGGING.

    You should place the primary database in FORCE LOGGING mode. This option is a new feature in Oracle Release 9.2 and overrides any user transaction that attempts to specify NOLOGGING. If a user were able to perform NOLOGGING operations on the primary database, some changes would not be fully captured in the redo stream used to maintain the standby database. Any unlogged operations would invalidate the standby database and would require substantial DBA intervention to propagate manually. With the primary database in force logging mode, NOLOGGING operations are still permitted to run without error, but their changes are placed into the redo stream anyway. You can use the following statement to place the primary database in force logging mode:

    SQL> alter database force logging;

    Database altered.

    The above statement may take a substantial amount of time to complete. The database must wait for all unlogged direct write I/O operations to finish.

    Also, if the database is already in force logging mode, you will receive the following error:

    SQL> alter database force logging;
    alter database force logging
    *
    ERROR at line 1:
    ORA-12920: database is already in force logging mode
    Use the following query to determine the logging mode of the database:
    SQL> select force_logging from v$database;

    FORCE_LOGGING
    -------------
    YES

  3. Create a Password File

    A new feature of Oracle Data Guard is log transport security and authentication. It is mandatory that all databases in a Data Guard configuration be configured with a password file, and the password for the SYS database account must be identical on every system for log transport to function. If your primary database is already configured with a password file, you can move on to the next step. If you need to configure a password file, use the following:

    $ cd $ORACLE_HOME/dbs
    $ orapwd file=orapwTESTDB password=change_on_install
    Once the password file is created, you will need to configure the instance to use it by setting the following instance parameter in NOMOUNT mode:
    SQL> shutdown immediate
    SQL> startup nomount
    SQL> alter system set remote_login_passwordfile=exclusive scope=spfile;
    SQL> alter database mount;
    SQL> alter database open;
  4. Set Initialization Parameters on the Primary Database

    We now need to configure several initialization parameters on the primary database. Most of the configuration options for Data Guard are implemented as initialization parameters for the Oracle instance. When the primary and standby hosts are identical (memory, disks, CPU, etc.), the initialization files for both databases should be almost identical, with the exception of four key parameters. This greatly simplifies role reversal of the primary and standby database. The four parameters that need to be set differently on the standby database from the primary database are:

    FAL_CLIENT
    FAL_SERVER
    LOG_ARCHIVE_DEST_2
    LOG_ARCHIVE_DEST_STATE_2
    In this step, I provide an overview of the parameters that are important for a successful standby database configuration for the primary database:
    ...
    archive_lag_target = 0
    compatible = '9.2.0.6.0'
    control_file_record_keep_time = 7
    control_files = '/u03/app/oradata/TESTDB/control01.ctl',
    '/u04/app/oradata/TESTDB/control02.ctl',
    '/u05/app/oradata/TESTDB/control03.ctl'
    db_name = 'TESTDB'
    dg_broker_start = true
    fal_client = 'TESTDB_VMLINUX3'
    fal_server = 'TESTDB_VMLINUX4'
    instance_name = 'TESTDB'
    local_listener = ''
    log_archive_dest_1 = 'location=/orabackup/archive/TESTDB mandatory'
    log_archive_dest_2 = 'service=TESTDB_VMLINUX4 optional reopen=15'
    log_archive_dest_state_1 = 'enable'
    log_archive_dest_state_2 = 'defer'
    log_archive_format = 'arch_t%t_s%s.dbf'
    log_archive_start = true
    remote_archive_enable = true
    remote_login_passwordfile = exclusive
    service_names = 'TESTDB.IDEVELOPMENT.INFO'
    standby_archive_dest = '/orabackup/archive/TESTDB'
    standby_file_management = 'auto'
    ...

    Notice that I set log_archive_dest_state_2 to defer. I like to keep this destination deferred until the standby database has been created and mounted.

  5. Take Cold Backup of the Primary Database

    We now need to take a backup of the primary database that can be used to copy over to the standby database server. This can be either a cold or hot (online) backup of the database. I prefer to perform a cold backup for this example. Keep in mind that you will need to have all of the archived redo logs from the primary database in order to bring the standby database to a consistent state.

    To perform our cold backup, we will need to bring down the primary database and perform a backup of all database files.

    You only need to back up database files; there is no need to back up online redo log files or control files.

    To determine the database files that need to be backed up, query the V$DATAFILE view. For example:

    SQL> select name from v$datafile;

    NAME
    -------------------------------------------
    /u06/app/oradata/TESTDB/system01.dbf
    /u06/app/oradata/TESTDB/undotbs1_01.dbf
    /u06/app/oradata/TESTDB/cwmlite01.dbf
    /u06/app/oradata/TESTDB/drsys01.dbf
    /u06/app/oradata/TESTDB/indx01.dbf
    /u06/app/oradata/TESTDB/odm01.dbf
    /u06/app/oradata/TESTDB/tools01.dbf
    /u06/app/oradata/TESTDB/users01.dbf
    /u06/app/oradata/TESTDB/xdb01.dbf
    /u06/app/oradata/TESTDB/example01.dbf
    /u06/app/oradata/TESTDB/perfstat01.dbf

    11 rows selected.
    We now shutdown the primary database:
    SQL> shutdown immediate
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    After closing the primary database, copy all database files to a temporary location. For me, this step is easy since all of my database files are on /u06; I only need to create the directories /u03 through /u05 on the standby host:
    $ rsh vmlinux4 mkdir -p /u03/app/oradata/TESTDB
    $ rsh vmlinux4 mkdir -p /u04/app/oradata/TESTDB
    $ rsh vmlinux4 mkdir -p /u05/app/oradata/TESTDB
    $ rcp -r /u06/app/oradata/TESTDB vmlinux4:/u06/app/oradata
    Once all database files are copied to their new location (or to a temporary location), you should now bring up the primary database:
    SQL> startup open
    ORACLE instance started.

    Total System Global Area 252777712 bytes
    Fixed Size 451824 bytes
    Variable Size 218103808 bytes
    Database Buffers 33554432 bytes
    Redo Buffers 667648 bytes
    Database mounted.
    Database opened.
  6. Create a Controlfile for the Standby Database

    After bringing up the primary database, create a standby control file for the standby database, as follows:

    SQL> alter database create standby controlfile as '/u01/app/oracle/control01.ctl';

    Database altered.

    Note that the standby controlfile must be created after the backup of the datafiles completes; its timestamp must be later than that of the backup datafiles.

    You can also create the standby controlfile from within RMAN as follows:
    RMAN> backup current controlfile for standby;

    Recovery Manager: Release 9.2.0.6.0 - Production

    Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

    connected to target database: TESTDB (DBID=2348798369)

    RMAN> backup current controlfile for standby;

    Starting backup at 23-AUG-05
    using target database controlfile instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: sid=16 devtype=DISK
    channel ORA_DISK_1: starting full datafile backupset
    channel ORA_DISK_1: specifying datafile(s) in backupset
    including standby controlfile in backupset
    channel ORA_DISK_1: starting piece 1 at 23-AUG-05
    channel ORA_DISK_1: finished piece 1 at 23-AUG-05
    piece handle=/orabackup/rman/TESTDB/backup_db_TESTDB_S_17_P_1_T_567125661 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
    Finished backup at 23-AUG-05

    Starting Control File and SPFILE Autobackup at 23-AUG-05
    piece handle=/orabackup/rman/TESTDB/c-2348798369-20050823-00 comment=NONE
    Finished Control File and SPFILE Autobackup at 23-AUG-05

  7. Prepare Initialization Parameter File for the Standby Database

    As noted in the previous section, I indicated that I am using an SPFILE for the primary database (and will be using one for the standby database once it is created). In this section, we will want to create a traditional text initialization parameter file from the server parameter file used by the primary database. We can then copy this text file to the standby location.

    SQL> create pfile='/u01/app/oracle/initTESTDB.ora' from spfile;

    File created.

    When configuring the standby database, we will be converting this file back to a server parameter file (SPFILE) after it is modified to contain the parameter values appropriate for use with the physical standby database.

  8. Copy All Files from the Primary Host to the Standby Host

    When I took the cold backup of the Oracle database, I used the rcp command to transfer all of the database files to the standby host. If you have not yet copied the database files to the standby host, do so now. Also transfer the standby controlfile and the text initialization parameter file created above to the standby host.

    $ rcp control01.ctl vmlinux4:
    $ rcp initTESTDB.ora vmlinux4:
  9. Use Same File and Directory Naming Conventions

    When setting up the standby database, use the same naming convention for all database files and directories. This is not, however, always possible. If you cannot use the same directory structure and/or the same file names on the standby database, you can use the procedures in this section to create a mapping between the primary database and standby database directories and file names.

    This is required if the standby database is on the same machine as the primary. If the above init<SID>.ora parameters are used, you MUST use ALTER DATABASE RENAME FILE <oldname> TO <newname> when the standby database is activated.
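    As a hedged sketch of that rename (both paths below are hypothetical, not from this configuration), the statement on the activated standby would look like:

    ```
    SQL> alter database rename file
      2  '/u06/app/oradata/TESTDB/system01.dbf' to
      3  '/u07/app/oradata/TESTDB/system01.dbf';
    ```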

    The two initialization parameters that are used to create the mappings of directory structures and file names between the primary and standby databases are the following:

    • db_file_name_convert - Specify the location of the primary database datafiles followed by the standby location of the datafiles. This parameter converts the filenames of the primary database datafiles to the filenames of the standby datafiles. It is required if the standby database is on the same system as the primary database, or if the directory structure where the datafiles are located on the standby site differs from the primary site.
    • log_file_name_convert - Specify the location of the primary database online redo logs followed by the standby location of the logs. This parameter converts the filenames of the primary database logs to the filenames of the standby logs. It is required under the same conditions as db_file_name_convert.
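    As a sketch of these two parameters, assuming the primary files live under /u06/app/oradata/TESTDB and the standby copies under a hypothetical /u07/app/oradata/TESTDB, the standby init<SID>.ora fragment might look like:

    ```
    # Hypothetical standby init<SID>.ora fragment -- adjust to your layout.
    # Each parameter takes (primary-path-pattern, standby-path-pattern) pairs.
    db_file_name_convert  = ('/u06/app/oradata/TESTDB', '/u07/app/oradata/TESTDB')
    log_file_name_convert = ('/u06/app/oradata/TESTDB', '/u07/app/oradata/TESTDB')
    ```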
  10. Configuring the Oracle Networking Files

    Configure the Oracle networking components that allow the primary database to communicate with the standby database. On the primary host, create a net service name that the primary database can use to connect to the standby database.

    Also create on the primary host a net service name that the standby database, if it ever runs on the primary host after a role reversal, can use to connect to the database running on the standby host.

    You should first put the following two entries in the tnsnames.ora file for the primary host. Keep in mind that the entries below will appear in both the primary and standby hosts:

    TNS Names Entries

    TESTDB_VMLINUX3.IDEVELOPMENT.INFO =
    (DESCRIPTION =
    (ADDRESS_LIST =
    (ADDRESS =
    (PROTOCOL = TCP)
    (HOST = vmlinux3.idevelopment.info)
    (PORT = 1521)
    )
    )
    (CONNECT_DATA =
    (SERVICE_NAME = TESTDB.IDEVELOPMENT.INFO)
    )
    )

    TESTDB_VMLINUX4.IDEVELOPMENT.INFO =
    (DESCRIPTION =
    (ADDRESS_LIST =
    (ADDRESS =
    (PROTOCOL = TCP)
    (HOST = vmlinux4.idevelopment.info)
    (PORT = 1521)
    )
    )
    (CONNECT_DATA =
    (SERVICE_NAME = TESTDB.IDEVELOPMENT.INFO)
    )
    )

    Now, you will need to setup the listener.ora file on the primary host:

    Listener.ora File on the Primary Host

    LISTENER =
    (DESCRIPTION_LIST =

    (DESCRIPTION =
    (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = vmlinux3.idevelopment.info)(PORT = 1521))
    )
    (PROTOCOL_STACK =
    (PRESENTATION = TTC)
    (SESSION = NS)
    )
    )

    (DESCRIPTION =
    (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
    )
    (PROTOCOL_STACK =
    (PRESENTATION = TTC)
    (SESSION = NS)
    )
    )

    )

    SID_LIST_LISTENER =
    (SID_LIST =
    (SID_DESC =
    (SID_NAME = PLSExtProc)
    (ORACLE_HOME = /u01/app/oracle/product/9.2.0)
    (PROGRAM = extproc)
    (ENVS="EXTPROC_DLLS=ANY")
    )
    (SID_DESC =
    (GLOBAL_DBNAME = TESTDB.IDEVELOPMENT.INFO)
    (SID_NAME = TESTDB)
    (ORACLE_HOME = /u01/app/oracle/product/9.2.0)
    )
    )

    Once the Oracle networking files are configured, be sure to restart the TNS listener so that it picks up the latest (valid) listener.ora file:

    $ lsnrctl stop
    $ lsnrctl start

    Finally, you should configure dead connection detection in the sqlnet.ora file. Keep in mind that this is only a requirement for the sqlnet.ora file on the standby database server. However, I like to have it already configured in the sqlnet.ora file on the primary database server in case a switchover occurs; that way I do not have to remember to set it if the primary transitions to the standby role. Here is an example sqlnet.ora file:

    Sqlnet.ora File on the Primary Host

    NAMES.DIRECTORY_PATH= (TNSNAMES, LDAP)
    BEQUEATH_DETACH=yes
    AUTOMATIC_IPC = ON
    NAMES.DEFAULT_DOMAIN = IDEVELOPMENT.INFO
    SQLNET.EXPIRE_TIME=2



Configuring the Standby Database

 Let's continue by configuring and preparing our standby database.

  • Configuring the Oracle Networking Files

    Configure the Oracle Networking Components for the standby database. You should first put the following two entries in the tnsnames.ora file for the standby host. Keep in mind that the entries below will appear on both the primary and standby hosts:

    TNS Names Entries

    TESTDB_VMLINUX3.IDEVELOPMENT.INFO =
    (DESCRIPTION =
    (ADDRESS_LIST =
    (ADDRESS =
    (PROTOCOL = TCP)
    (HOST = vmlinux3.idevelopment.info)
    (PORT = 1521)
    )
    )
    (CONNECT_DATA =
    (SERVICE_NAME = TESTDB.IDEVELOPMENT.INFO)
    )
    )

    TESTDB_VMLINUX4.IDEVELOPMENT.INFO =
    (DESCRIPTION =
    (ADDRESS_LIST =
    (ADDRESS =
    (PROTOCOL = TCP)
    (HOST = vmlinux4.idevelopment.info)
    (PORT = 1521)
    )
    )
    (CONNECT_DATA =
    (SERVICE_NAME = TESTDB.IDEVELOPMENT.INFO)
    )
    )

    Now, you will need to set up the listener.ora file for the standby host. The listener.ora file can be copied from the primary database server with only one change: the HOST value in the TCP address must point to the standby host (vmlinux4.idevelopment.info).

    Listener.ora File on the Standby Host

    LISTENER =
    (DESCRIPTION_LIST =

    (DESCRIPTION =
    (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = vmlinux4.idevelopment.info)(PORT = 1521))
    )
    (PROTOCOL_STACK =
    (PRESENTATION = TTC)
    (SESSION = NS)
    )
    )

    (DESCRIPTION =
    (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
    )
    (PROTOCOL_STACK =
    (PRESENTATION = TTC)
    (SESSION = NS)
    )
    )

    )

    SID_LIST_LISTENER =
    (SID_LIST =
    (SID_DESC =
    (SID_NAME = PLSExtProc)
    (ORACLE_HOME = /u01/app/oracle/product/9.2.0)
    (PROGRAM = extproc)
    (ENVS="EXTPROC_DLLS=ANY")
    )
    (SID_DESC =
    (GLOBAL_DBNAME = TESTDB.IDEVELOPMENT.INFO)
    (SID_NAME = TESTDB)
    (ORACLE_HOME = /u01/app/oracle/product/9.2.0)
    )
    )

    Finally, you should set the dead connection detection in the sqlnet.ora file. Here is an example sqlnet.ora file:

    Sqlnet.ora File on the Standby Host

    NAMES.DIRECTORY_PATH= (TNSNAMES, LDAP)
    BEQUEATH_DETACH=yes
    AUTOMATIC_IPC = ON
    NAMES.DEFAULT_DOMAIN = IDEVELOPMENT.INFO
    SQLNET.EXPIRE_TIME=2

    Once the Oracle networking files are configured, be sure to restart the TNS listener so that it picks up the latest (valid) listener.ora file:

    $ lsnrctl stop
    $ lsnrctl start
  • Create a Server Parameter File for the Standby Database

    Remember that text initialization parameter file we created and transferred from the primary database? We need to modify it and create an SPFILE for the standby database. Let's start by putting the text initialization parameter file in the $ORACLE_HOME/dbs directory:

    $ cp initTESTDB.ora $ORACLE_HOME/dbs
    $ cd $ORACLE_HOME/dbs
    Now, let's change to that directory and make the appropriate modifications to the text initialization file. Below, I provide an overview of the parameters that are important for a successful standby database configuration for the standby database.

    Earlier in this article, I mentioned that in a robust Data Guard configuration, the initialization parameters for the primary and standby databases should be identical with the exception of four key parameters:

    FAL_CLIENT
    FAL_SERVER
    LOG_ARCHIVE_DEST_2
    LOG_ARCHIVE_DEST_STATE_2
    Here is the standby initialization file; the values that differ from the primary are fal_client, fal_server, and log_archive_dest_2:
    ...
    archive_lag_target = 0
    compatible = '9.2.0.6.0'
    control_file_record_keep_time = 7
    control_files = '/u03/app/oradata/TESTDB/control01.ctl',
    '/u04/app/oradata/TESTDB/control02.ctl',
    '/u05/app/oradata/TESTDB/control03.ctl'
    db_name = 'TESTDB'
    dg_broker_start = true
    fal_client = 'TESTDB_VMLINUX4'
    fal_server = 'TESTDB_VMLINUX3'

    instance_name = 'TESTDB'
    local_listener = ''
    log_archive_dest_1 = 'location=/orabackup/archive/TESTDB mandatory'
    log_archive_dest_2 = 'service=TESTDB_VMLINUX3 optional reopen=15'
    log_archive_dest_state_1 = 'enable'
    log_archive_dest_state_2 = 'defer'
    log_archive_format = 'arch_t%t_s%s.dbf'
    log_archive_start = true
    remote_archive_enable = true
    remote_login_passwordfile = exclusive
    service_names = 'TESTDB.IDEVELOPMENT.INFO'
    standby_archive_dest = '/orabackup/archive/TESTDB'
    standby_file_management = 'auto'
    ...
    Finally, on the idle standby database, use the SQL CREATE statement to create a server parameter file for the standby database from the text initialization parameter file that was just edited:
    $ sqlplus "/ as sysdba"

    Connected to an idle instance.

    SQL> create spfile from pfile='?/dbs/initTESTDB.ora';

    File created.

  • Copy the Standby Controlfile to the Appropriate Location

    Remember that standby controlfile we created on the primary database server and transferred to the standby host? We need to copy that file to the appropriate locations (and name the copies correctly) for the standby database, as identified by the control_files parameter in the init<SID>.ora file for the standby database. For my example, I have control_files defined as follows:

    control_files = '/u03/app/oradata/TESTDB/control01.ctl',
    '/u04/app/oradata/TESTDB/control02.ctl',
    '/u05/app/oradata/TESTDB/control03.ctl'
    $ cd
    $ cp control01.ctl /u03/app/oradata/TESTDB/control01.ctl
    $ cp control01.ctl /u04/app/oradata/TESTDB/control02.ctl
    $ cp control01.ctl /u05/app/oradata/TESTDB/control03.ctl

  • Create Oracle Password File

    Create an Oracle Password File for the standby database:

    $ cd $ORACLE_HOME/dbs
    $ orapwd file=orapwTESTDB password=change_on_install

    Keep in mind that the Oracle password file is required for log transport authentication (and when using Data Guard Broker). It is also a requirement that the password for SYS be identical on both the primary and standby databases.
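    As an optional check once the instance is started, V$PWFILE_USERS lists the accounts recognized through the password file:

    ```
    SQL> select username, sysdba, sysoper from v$pwfile_users;
    ```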


  • Start the Physical Standby Database

    On the standby database, issue the following SQL statements to start and mount the database in standby mode:

    SQL> startup nomount;
    ORACLE instance started.

    Total System Global Area 252777712 bytes
    Fixed Size 451824 bytes
    Variable Size 218103808 bytes
    Database Buffers 33554432 bytes
    Redo Buffers 667648 bytes

    SQL> alter database mount standby database;


  • Begin Shipping Redo Data to the Standby Database

    Back when we were configuring the primary database server, we deferred LOG_ARCHIVE_DEST_2. This is the destination that transfers redo data to the standby database defined in our configuration. Now that the standby database is mounted, we should enable this destination on the primary database:

    SQL> alter system set log_archive_dest_state_2=enable scope=both;
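
    To confirm the change took effect, you can (optionally) query the V$ARCHIVE_DEST view on the primary database; the STATUS column for DEST_ID 2 should change from DEFERRED to VALID once redo is shipped successfully:

    SQL> select dest_id, status, destination
         from v$archive_dest
         where dest_id = 2;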


  • Initiate Log Apply Services

    On the standby database, start log apply services as follows:

    SQL> alter database recover managed standby database disconnect from session;


    The example includes the DISCONNECT FROM SESSION option so that log apply services run in a background session. This option is new in Oracle9i.


  • Start Remote Archiving

    With the protection mode we are using, Maximum Performance, archiving of redo logs to the remote standby location does not occur until after a log switch. A log switch occurs, by default, when an online redo log becomes full.

    To force the current redo logs to be archived immediately, use the SQL ALTER SYSTEM statement on the primary database. For example:

    SQL> alter system archive log current;



    Verifying the Physical Standby Database

    Now that we have a physical standby database and have set up log transport services, we should verify that database modifications are being successfully shipped from the primary database to the standby database.

    To see the new archived redo logs that were received on the standby database, you should first identify the existing archived redo logs on the standby database, archive a few logs on the primary database, and then check the standby database again.

    The following steps show how to perform these tasks.

    1. Identify the Existing Archived Redo Logs

      On the standby database, query the V$ARCHIVED_LOG view to identify existing archived redo logs. For example:

      SQL> alter session set nls_date_format = 'DD-MON-YYYY HH24:MI:SS';

      Session altered.

      SQL> select sequence#, first_time, next_time
      from v$archived_log order by sequence#;

      SEQUENCE# FIRST_TIME NEXT_TIME
      ---------- -------------------- --------------------
      115 11-OCT-2004 22:43:29 12-OCT-2004 18:51:38


    2. Archive the Current Log

      On the primary database, archive the current log using the following SQL statement:

      SQL> alter system archive log current;

      System altered.


    3. Verify That the New Archived Redo Log Was Received

      From the standby database, query the V$ARCHIVED_LOG view again to verify the redo log was received:

      SQL> select sequence#, first_time, next_time
      from v$archived_log order by sequence#;

      SEQUENCE# FIRST_TIME NEXT_TIME
      ---------- -------------------- --------------------
      115 11-OCT-2004 22:43:29 12-OCT-2004 18:51:38
      116 12-OCT-2004 18:51:38 12-OCT-2004 18:53:43

      The logs are now available for the log apply services to apply redo data to the standby database. We check this in the next step.


    4. Verify That the New Archived Redo Log Was Applied

      From the standby database, query the V$ARCHIVED_LOG view to verify the archived redo log was applied.

      SQL> select sequence#, archived, applied
      from v$archived_log order by sequence#;

      SEQUENCE# ARCHIVED APPLIED
      ---------- -------- -------
      115 YES YES
      116 YES YES


    5. Monitor the Alert.log of the Standby Database

      Querying the V$ARCHIVED_LOG view is a good way to ensure that the log transport service and the log apply service are doing their jobs. I also like to tail the alert.log file of the standby database to witness the media recovery being applied.

      From the standby database, perform the following tail -f against the alert.log while issuing the "alter system archive log current;" statement on the primary database:

      $ cd ~oracle/admin/ORA920/bdump

      $ tail -f alert_ORA920.log
      Tue Oct 12 18:41:53 2004
      Completed: alter database recover managed standby database di
      Tue Oct 12 18:41:53 2004
      Media Recovery Waiting for thread 1 seq# 115
      Tue Oct 12 18:42:53 2004
      Restarting dead background process QMN0
      QMN0 started with pid=9
      Tue Oct 12 18:45:24 2004
      Media Recovery Log /u06/app/oradata/ORA920/archive/arch_t1_s115.dbf
      Media Recovery Waiting for thread 1 seq# 116
      Media Recovery Log /u06/app/oradata/ORA920/archive/arch_t1_s116.dbf
      Media Recovery Waiting for thread 1 seq# 117

    Database Maintenance

    1.    Cancel/Stop Managed Standby Recovery

    While connected to the standby database:

    1.    ALTER DATABASE SET STANDBY DATABASE UNPROTECTED;

    2.    RECOVER MANAGED STANDBY DATABASE CANCEL;

    3.    ALTER DATABASE OPEN READ ONLY;

    The database can subsequently be switched back to recovery mode as follows.

    Start up managed recovery on the standby database:

    1.    CONNECT / AS SYSDBA

    2.    SHUTDOWN IMMEDIATE

    3.    STARTUP NOMOUNT

    4.    ALTER DATABASE MOUNT STANDBY DATABASE;

    5.    RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

    6.    ALTER DATABASE SET STANDBY DATABASE PROTECTED;

     

    2.    Activating a Standby Database

    If the primary database is not available, the standby database can be activated as a primary database using the following statements.

    Cancel recovery, if necessary, on the standby database:

    1.    RECOVER MANAGED STANDBY DATABASE CANCEL;

    2.    ALTER DATABASE ACTIVATE STANDBY DATABASE;

    Since the standby database is now the primary database, it should be backed up immediately. The previous primary database can then be configured as a standby.

    Backing Up the Standby Database

    Backups of the standby database can only be performed if the database is shut down or in read-only mode. Read-only mode is best for managed recovery systems, as archive logs will still be transferred during the backup process, thus preventing gap sequences. Once the server is in the desired mode, simply copy the appropriate database files.
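
    To determine which database files need to be copied, you can query the V$DATAFILE view on the standby database (an optional helper query; adjust for your own file layout, and remember to include the standby controlfiles along with the datafiles):

    SQL> select name from v$datafile;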

    3.    Database Switchover

    A database can be in one of two mutually exclusive modes (primary or standby). These roles can be altered at runtime without loss of data or resetting of redo logs. This process is known as a Switchover and can be performed using the following statements:

    While connected to the primary database, issue the following commands:

    1.       CONNECT / AS SYSDBA

    2.       ALTER DATABASE COMMIT TO SWITCHOVER TO STANDBY;

    3.       SHUTDOWN IMMEDIATE;

    4.       STARTUP NOMOUNT

    5.       ALTER DATABASE MOUNT STANDBY DATABASE;

    6.       RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

    Now the original primary database is in standby mode, waiting for the new primary database to activate. This is done while connected to the standby database (not the original primary):

    1.       CONNECT / AS SYSDBA

    2.       ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

    3.       SHUTDOWN IMMEDIATE;

    4.       STARTUP

    This process has no effect on alternative standby locations. The process of converting the instances back to their original roles is known as a switchback. A switchback is accomplished simply by performing another switchover.

    4.    Database Failover

    A graceful database failover converts a standby database to a primary database when the original primary is no longer available:

    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;

    ALTER DATABASE ACTIVATE STANDBY DATABASE;

    This process will recover all or some of the application data using the standby redo logs, therefore avoiding the reinstantiation of other standby databases. If completed successfully, only the old primary database will need to be reinstantiated as a standby database.

    A forced database failover changes one standby database to a primary database. Application data may be lost, necessitating the reinstantiation of the primary and all standby databases.

    5.    Automatic Archive Gap Detection

    Gaps in the sequence of archive logs can be created when changes are applied to the primary database while the standby database is unavailable. Most of these gap sequences can be resolved automatically. The following parameters must be added to the standby init.ora file, where the values are Net service names.

    FAL_SERVER = 'prod1'

    FAL_CLIENT = 'stby1'

    The FAL server is normally the primary database, but it can be another standby database. Once the standby database is placed in managed recovery mode, it will automatically check for gap sequences. If it finds any, it will request the appropriate files from the primary database via the FAL server. If a gap sequence cannot be resolved automatically, the files must be recovered manually by copying the missing archived log files from the primary server to the archive destination on the standby server.
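
    You can check for an outstanding gap yourself by querying the V$ARCHIVE_GAP view on the standby database; if a row is returned, it identifies the range of missing log sequences. Any files you copy over manually can then be made known to the standby with ALTER DATABASE REGISTER LOGFILE. For example (the file name below is illustrative only, built from the standby_archive_dest and log_archive_format values used earlier in this article):

    SQL> select thread#, low_sequence#, high_sequence# from v$archive_gap;

    SQL> alter database register logfile '/orabackup/archive/TESTDB/arch_t1_s117.dbf';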

    6.    Delayed Redo Application

    Application of the archived redo logs to the standby database can be delayed using the DELAY keyword. If a rogue statement significantly damages the primary database, the DBA can choose to switch over to the standby database, which will still be in the state it was in before the problem occurred on the primary database.

    On the standby database, issue the following commands:

    • Delay application of archived redo logs by 30 minutes:

    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DELAY 30;

    • Return to no delay (the default):

    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE NODELAY;
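
    As an alternative, the delay can also be specified on the primary database through the DELAY attribute of the LOG_ARCHIVE_DEST_n parameter. The value below is illustrative; 'stby1' is the standby Net service name used in the gap detection example above:

    SQL> alter system set log_archive_dest_2 = 'SERVICE=stby1 DELAY=30' scope=both;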