Saturday, April 25, 2015

A brief overview of Dataguard in 11g

Dataguard

 In a Data Guard configuration, a production database is referred to as a primary database. A standby database is a synchronized copy of the primary database. Using a backup copy of the primary database, you can create from one to 30 standby databases. The standby databases, together with the primary database, make up a Data Guard configuration.


-- Using a backup copy of the primary database, we can create from one to 30 standby databases.

 Snapshot Standby Database
  
 A snapshot standby database is created by converting a physical standby database into a snapshot standby database. The snapshot standby database receives redo from the primary database, but does not apply the redo data until it is converted back into a physical standby database. The snapshot standby database can be used for updates, but those updates are discarded before the snapshot standby database is converted back into a physical standby database. The snapshot standby database is appropriate when we require a temporary, updatable version of a physical standby database.
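A minimal sketch of the conversion, run on the standby instance (assumes the standby is mounted and Redo Apply is stopped; names are placeholders):

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
-- ... open and use the snapshot standby for testing/updates ...
-- Converting back discards the test changes and re-applies the redo that accumulated meanwhile:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;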
 
 Services

 1) Redo transport
 2) Apply services
a) Redo Apply
b) SQL Apply
 3) Role management services
 Transitions
  
 1) Switch Over
 2) Failover
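A minimal switchover sketch (assumes a healthy physical standby; check v$database.switchover_status on both databases first):

-- On the primary:
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
-- On the old standby:
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WITH SESSION SHUTDOWN;
SQL> ALTER DATABASE OPEN;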

Processes
 
 Primary Database
 
 1) LGWR - Collects transaction information and updates the online redo logs.
SYNC - Online redo logs (not archive logs) are passed directly to the LNS (Log Writer Network Server) process, which ships the redo to the RFS (Remote File Server) process on the standby. LGWR waits for confirmation from LNS before acknowledging the commit.
ASYNC - Independent LNS processes read the redo from either the redo log buffer in memory or the online redo log file, and then ship the redo to the standby database.
Other than starting the asynchronous LNS processes, LGWR has no interaction with any asynchronous standby destination.
  2) Archiver (ARCn)
The ARCn process creates a copy of the online redo log files locally for use in a primary database recovery operation. 
ARCn is also responsible for shipping redo data to an RFS process at a standby database and for proactively detecting and resolving gaps on all standby databases. 
For Oracle Database 11g Release 2 (11.2), there can be up to 30 archiver processes. The default value is four.
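For example, the number of archiver processes can be raised and then checked (a sketch; the value 8 is an arbitrary assumption):

SQL> ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES=8 SCOPE=BOTH;
SQL> SELECT process, status FROM v$archive_processes WHERE status <> 'STOPPED';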
 Standby Database
 
 1) RFS (Remote File Server) - 
RFS receives redo from the primary database and can write it into the standby redo logs or directly to archived redo logs.
Each LNSn and ARCn process from the primary database has its own RFS process.
 2) Archiver 
 
 3) MRP (Managed Recovery)
MRP is responsible for the actual recovery. It applies the redo received on the standby (from archived redo logs, or standby redo logs with real-time apply) to the physical standby database.
 4) LSP (Logical standby process)
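These processes can be observed on the standby with a query such as the following (a sketch; run on the standby database):

SQL> SELECT process, status, thread#, sequence#, block# FROM v$managed_standby;
-- Typical PROCESS values are RFS, ARCH and MRP0.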
 -- The limit of 30 standby databases is imposed by the LOG_ARCHIVE_DEST_n parameter
 
 -- We can use the Cascaded Redo Log Destinations feature to incorporate more than 30 standby databases in your configuration.
 
 -- Gap resolution is automatic and is done with the help of the archiver process.
 
 Data Protection Modes
 
 1) Maximum Protection
-- This protection mode guarantees that no data loss occurs if the primary database fails.
-- Redo must be written to both the local online redo log and the standby redo log before the commit completes.
-- To ensure no data is lost, the primary database shuts down if its redo cannot be written to the standby redo log of at least one standby.
 2) Maximum Availability
 
-- This protection mode provides the highest possible level of data protection without compromising the availability of the primary database.
-- It is the same as maximum protection, except that the primary database is not shut down if the redo cannot be written to both the online redo log and the standby redo log.
-- When all the gaps are resolved and the primary database is synchronized with the standby database, the primary database automatically resumes operating in maximum availability mode.
 
 3) Maximum Performance
-- This is the default protection mode. It provides the highest possible level of data protection without affecting the performance of the primary database.
-- The primary database is always available, irrespective of standby synchronization.
-- Redo data is written asynchronously to all the standby databases.
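A sketch of switching the protection mode and checking it (assumes the matching SYNC/ASYNC transport has already been configured on LOG_ARCHIVE_DEST_n):

SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;
-- MAXIMIZE { PROTECTION | AVAILABILITY | PERFORMANCE }
SQL> SELECT protection_mode, protection_level FROM v$database;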

Creating a Physical Standby Database

-- Force Logging
-- FORCE LOGGING forces redo to be generated even when NOLOGGING operations are executed.
-- Temporary tablespaces and temporary segments are not logged.
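A quick sketch of enabling and verifying force logging on the primary:

SQL> ALTER DATABASE FORCE LOGGING;
SQL> SELECT force_logging FROM v$database;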
-- A standby redo log is required to implement:
• Synchronous transport mode
• Real-time apply
• Cascaded redo log destinations
• Role Changes.
SQL > SELECT group#, type, member FROM v$logfile WHERE type='STANDBY';
 
SQL > SELECT group#, dbid, thread#, sequence#, status
 FROM v$standby_log;
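Standby redo log groups can be added with a command like the one below (a sketch; the file path and size are assumptions, and the usual guideline is one more standby log group than online log groups per thread, sized the same as the online logs):

SQL > ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/test_prmy/srl01.log') SIZE 50M;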
 
-- Setting Initialization Parameters on the Primary Database to control redo transport.

Mandatory
1) LOG_ARCHIVE_CONFIG
Syntax
LOG_ARCHIVE_CONFIG = {
[ SEND | NOSEND ]
[ RECEIVE | NORECEIVE ]
[ DG_CONFIG=(remote_db_unique_name1 [, ... remote_db_unique_name9]) | NODG_CONFIG ] }
2) LOG_ARCHIVE_DEST_n
Syntax
LOG_ARCHIVE_DEST_2= 'SERVICE=test_stby VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=test_stby' 
LOG_ARCHIVE_DEST_STATE_2=ENABLE
3) LOG_ARCHIVE_DEST_STATE_n
Syntax
LOG_ARCHIVE_DEST_STATE_n = { ENABLE | DEFER | ALTERNATE }
A fuller LOG_ARCHIVE_DEST_n example with additional transport attributes:
LOG_ARCHIVE_DEST_2='service=streiprdb4.bizprd LGWR ASYNC net_timeout=20 reopen=30 db_unique_name=streiprdb4_bizprd valid_for=(all_logfiles,primary_role)'

-- VALID_FOR (Role Based Destinations)
We supply two values for the VALID_FOR attribute: redo_log_type and database_role.
The redo_log_type keywords are:
• ONLINE_LOGFILE: This destination is used only when archiving online redo log files.
• STANDBY_LOGFILE: This destination is used only when archiving standby redo log files or receiving archive logs from another database.
• ALL_LOGFILES: This destination is used when archiving either online or standby redo log files.
The database_role keywords are the following:
• PRIMARY_ROLE: This destination is used only when the database is in the primary database role.
• STANDBY_ROLE: This destination is used only when the database is in the standby (logical or physical) role.
• ALL_ROLES: This destination is used when the database is in either the primary or the standby (logical or physical) role.
 
Note - The VALID_FOR attribute is optional. However, Oracle recommends that you define a VALID_FOR attribute for each destination so that your 
      Data Guard configuration operates properly after a role transition.
  
Do not use the default value, VALID_FOR=(ALL_LOGFILES, ALL_ROLES), for logical standby databases.
  
-- SYNC/ASYNC AFFIRM/NOAFFIRM (Redo Transport Mode)
• SYNC - Used for the maximum protection and maximum availability modes; it makes sure that the redo for a transaction has been received by at least one standby before the commit completes.

• ASYNC - This is the default and is used for maximum performance. The commit does not wait for an acknowledgement from LNS.
• AFFIRM - Specifies that a redo transport destination acknowledges received redo data after writing it to the standby redo log.
• NOAFFIRM - Specifies that a redo transport destination acknowledges received redo data before (without waiting for) the write to the standby redo log.
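A sketch of a synchronous destination using the names from the earlier example (test_stby assumed):

LOG_ARCHIVE_DEST_2='SERVICE=test_stby SYNC AFFIRM NET_TIMEOUT=30 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=test_stby'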
 
4) FILE_NAME_CONVERT (If disk configuration is not the same)
DB_FILE_NAME_CONVERT
DB_FILE_NAME_CONVERT applies only to a physical standby database and duplicate RMAN script. 
If the standby database uses Oracle Managed Files (OMF), do not set the DB_FILE_NAME_CONVERT parameter.
There is a 255-character limit on this parameter.
LOG_FILE_NAME_CONVERT
LOG_FILE_NAME_CONVERT applies only to a physical standby database and duplicate RMAN script. 
STANDBY_FILE_MANAGEMENT
STANDBY_FILE_MANAGEMENT applies to physical standby databases only, but can be set on a primary database for role changes.
5) FAL_CLIENT and FAL_SERVER
On physical standby databases, fetch archive log (FAL) provides a client/server mechanism for resolving gaps detected in the range of archived redo logs that are generated at the primary database and received at the standby database.
FAL_CLIENT - This is no longer required in 11g. 11g automatically detects it. 
FAL_SERVER - This initialization parameter specifies the Oracle Net service name for the standby database. 
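For example, on the standby (service name taken from the examples in this post):

SQL> ALTER SYSTEM SET FAL_SERVER='test_prmy' SCOPE=BOTH;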
 -- Primary Parameters Example
DB_NAME=test_prmy
DB_UNIQUE_NAME=test_prmy
LOG_ARCHIVE_CONFIG='DG_CONFIG=(test_prmy,test_stby)'
CONTROL_FILES='/u01/app/oracle/oradata/test_prmy/control1.ctl',
'/u01/app/oracle/oradata/test_prmy/control2.ctl'
LOG_ARCHIVE_DEST_2='SERVICE=test_stby VALID_FOR=(ALL_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=test_stby'
LOG_ARCHIVE_DEST_STATE_2=ENABLE
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
LOG_ARCHIVE_FORMAT=test%t_%s_%r.arc
 -- Creating an Oracle Net Service Name for Your Physical Standby Database
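A minimal tnsnames.ora sketch for the standby service (host name and port are assumptions):

test_stby =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby_host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = test_stby))
  )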
 
 -- Creating a Listener Entry for Your Standby Database
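A static registration sketch for listener.ora on the standby host, so RMAN can connect while the instance is still in NOMOUNT (the ORACLE_HOME path is an assumption):

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = test_stby)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_1)
      (SID_NAME = test_stby)
    )
  )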
 
 -- Copying Your Primary Database Password File to the Physical Standby Database Host
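A sketch of the copy (host name is an assumption; the file name on the standby must match the standby instance name):

scp $ORACLE_HOME/dbs/orapwtest_prmy standby_host:$ORACLE_HOME/dbs/orapwtest_stby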
 
 -- Creating an Initialization Parameter File for the Physical Standby Database
 
1) Create a parameter file containing only the database name when using the RMAN DUPLICATE command, because RMAN will create the SPFILE automatically.
DB_NAME=test_prmy
2) Creating Directories for the Physical Standby Database
-- audit trail directory
-- directory for the data files
 -- Starting the Physical Standby Database in nomount mode.
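A sketch of these preparation steps on the standby host (directory paths and the temporary PFILE location are assumptions that mirror the examples above):

mkdir -p /u01/app/oracle/admin/test_stby/adump
mkdir -p /u01/app/oracle/oradata/test_stby
export ORACLE_SID=test_stby
sqlplus / as sysdba
SQL> STARTUP NOMOUNT PFILE='/tmp/inittest_stby.ora';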
 
 -- RMAN duplicate script to create the physical standby database
 
vi  phys_standby
 
run {
allocate channel prmy1 type disk;
allocate channel prmy2 type disk;
allocate channel prmy3 type disk;
allocate channel prmy4 type disk;
allocate auxiliary channel stby type disk;
duplicate target database for standby from active database
spfile
parameter_value_convert 'test_prmy','test_stby'
set db_unique_name='test_stby'
set db_file_name_convert='/test_prmy/','/test_stby/'
set log_file_name_convert='/test_prmy/','/test_stby/'
set control_files=
'/u01/app/oracle/oradata/test_stby.ctl'
set log_archive_max_processes='5'
set fal_client='test_stby'
set fal_server='test_prmy'
set standby_file_management='AUTO'
set log_archive_config='dg_config=(test_prmy,test_stby)'
set log_archive_dest_2='service=test_prmy ASYNC
valid_for=(ONLINE_LOGFILE,PRIMARY_ROLE)
db_unique_name=test_prmy';
}


 RMAN> connect target sys/password
 RMAN> connect auxiliary sys/password@test_stby
 RMAN> @phys_standby

 Enabling Real-Time Apply
 
When we enable the optional real-time apply feature, log apply services apply the redo data from standby redo log files in real time.
Log apply services can apply redo data as it is received, without waiting for the current standby redo log file to be archived. This results in faster
switchover and failover times because the standby redo log files have already been applied to the standby database by the time the failover or switchover begins.
Execute the following command on the standby database to start Redo Apply:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;
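To confirm that real-time apply is in effect, the destination status can be checked from the primary (a sketch; dest_id=2 follows the examples above):

SQL> SELECT dest_id, recovery_mode FROM v$archive_dest_status WHERE dest_id = 2;
-- RECOVERY_MODE shows MANAGED REAL TIME APPLY when real-time apply is active.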