Thursday, September 10, 2009

Oracle RAC Questions

What is RAC?
RAC stands for Real Application Clusters. It is a clustering solution from Oracle Corporation that ensures high availability of databases by providing instance failover and media failover features.

Mention the Oracle RAC software components:-

Oracle RAC is composed of two or more database instances. They are composed of memory structures and background processes, just like a single-instance database. Oracle RAC instances use two additional services, GES (Global Enqueue Service) and GCS (Global Cache Service), that enable Cache Fusion.
Oracle RAC instances are composed of the following background processes:
ACMS—Atomic Controlfile to Memory Service
GTX0-j—Global Transaction Process
LMON—Global Enqueue Service Monitor
LMD—Global Enqueue Service Daemon
LMS—Global Cache Service Process
LCK0—Instance Enqueue Process
RMSn—Oracle RAC Management Processes
RSMN—Remote Slave Monitor

What is GRD?

GRD stands for Global Resource Directory. The GES and GCS maintain records of the status of each datafile and each cached block using the Global Resource Directory. This process is part of Cache Fusion and helps maintain data integrity.

Details on Cache Fusion:-

Oracle RAC is composed of two or more instances. When a block of data is read from a datafile by an instance within the cluster and another instance needs the same block, it is far quicker to get the block image from the instance that already has the block in its SGA than to read it from disk. To enable inter-instance communication, Oracle RAC makes use of the interconnect. The Global Enqueue Service (GES) monitor and the Instance Enqueue process manage Cache Fusion.

Details on ACMS:-
ACMS stands for Atomic Controlfile to Memory Service. In an Oracle RAC environment, ACMS is an agent that ensures distributed SGA memory updates, i.e. SGA updates are either globally committed on success or globally aborted in the event of a failure.

Details on GTX0-j :-
This process provides transparent support for XA global transactions in a RAC environment. The database autotunes the number of these processes based on the workload of XA global transactions.

Details on LMON:-
This process, the Global Enqueue Service Monitor, monitors global enqueues and resources across the cluster and performs global enqueue recovery operations.

Details on LMD:-
This process is called the Global Enqueue Service Daemon. It manages incoming remote resource requests within each instance.

Details on LMS:-
This process is called the Global Cache Service process. It maintains the status of datafiles and of each cached block by recording information in the Global Resource Directory (GRD). It also controls the flow of messages to remote instances, manages global data block access, and transmits block images between the buffer caches of different instances. This processing is part of the Cache Fusion feature.

Details on LCK0:-
This process is called the Instance Enqueue process. It manages non-Cache Fusion resource requests such as library and row cache requests.

Details on RMSn:-
These processes are called the Oracle RAC Management processes. They perform manageability tasks for Oracle RAC, including the creation of resources related to Oracle RAC when new instances are added to the cluster.

Details on RSMN:-
This process is called the Remote Slave Monitor. It manages background slave process creation and communication on remote instances. This is a background slave process that performs tasks on behalf of a coordinating process running in another instance.

What components in RAC must reside in shared storage?
All datafiles, controlfiles, SPFILEs and redo log files must reside on cluster-aware shared storage.

What is the significance of using cluster-aware shared storage in an Oracle RAC environment?
All instances of an Oracle RAC database can access all of the datafiles, control files, SPFILEs and redo log files when these files are hosted on cluster-aware shared storage, which is a group of shared disks.

Few examples for solutions that support cluster storage:-
ASM (Automatic Storage Management), raw disk devices, network file system (NFS), OCFS2 and OCFS (Oracle Cluster File System).

What is an interconnect network?
An interconnect network is a private network that connects all of the servers in a cluster. The interconnect network uses one or more switches that only the nodes in the cluster can access.

How can we configure the cluster interconnect?
Configure User Datagram Protocol (UDP) on Gigabit Ethernet for the cluster interconnect. On UNIX and Linux systems, UDP and RDS (Reliable Datagram Sockets) are the protocols used by Oracle Clusterware. Windows clusters use the TCP protocol.
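As a hedged illustration (the interface name and subnet below are made up), the interfaces registered with Oracle Clusterware can be listed and the private interconnect designated with OIFCFG:

$ oifcfg getif
$ oifcfg setif -global eth1/192.168.1.0:cluster_interconnect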

Can we use crossover cables with Oracle Clusterware interconnects?
No, crossover cables are not supported with Oracle Clusterware interconnects.

What is the use of cluster interconnect?
The cluster interconnect is used by Cache Fusion for inter-instance communication.

How do users connect to database in an Oracle RAC environment?
Users can access a RAC database using a client/server configuration or through one or more middle tiers, with or without connection pooling. Users can use the Oracle services feature to connect to the database.

What is the use of a service in an Oracle RAC environment?
Applications should use the services feature to connect to the Oracle database. Services enable us to define rules and characteristics to control how users and applications connect to database instances.

What are the characteristics controlled by the Oracle services feature?
The characteristics include a unique name, workload balancing and failover options, and high availability characteristics.
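For illustration only (the database, service and instance names below are hypothetical), a service with preferred and available instances can be created and managed with SRVCTL:

srvctl add service -d orcl -s oltp -r "orcl1,orcl2" -a "orcl3"
srvctl start service -d orcl -s oltp
srvctl status service -d orcl -s oltp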

What enables the load balancing of applications in RAC?
Oracle Net Services enable the load balancing of application connections across all of the instances in an Oracle RAC database.

What is a virtual IP address or VIP?
A virtual IP address or VIP is an alternate IP address that client connections use instead of the standard public IP address. To configure a VIP address, we need to reserve a spare IP address for each node, and the IP addresses must use the same subnet as the public network.

What is the use of VIP?
If a node fails, then the node's VIP address fails over to another node, on which the VIP address can accept TCP connections but cannot accept Oracle connections.

Situations under which VIP address failover happens:-
VIP address failover happens when the node on which the VIP address runs fails, when all interfaces for the VIP address fail, or when all interfaces for the VIP address are disconnected from the network.

What is the significance of VIP address failover?
When a VIP address failover happens, clients that attempt to connect to the VIP address receive a rapid connection-refused error; they do not have to wait for TCP connection timeout messages.
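As a hedged example (the node name is hypothetical), the state of the VIP and the other node applications can be checked with SRVCTL:

srvctl status nodeapps -n node1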

What are the administrative tools used for Oracle RAC environments?
An Oracle RAC cluster can be administered as a single image using OEM (Enterprise Manager), SQL*Plus, Server Control (SRVCTL), Cluster Verification Utility (CVU), DBCA and NETCA.

How do we verify that RAC instances are running?
Issue the following query from any one node connecting through SQL*PLUS.
SQL> connect sys/sys as sysdba
SQL> select * from V$ACTIVE_INSTANCES;
The query gives the instance number in the INST_NUMBER column and host_name:instance_name in the INST_NAME column.

What is FAN?
FAN stands for Fast Application Notification. It relates to events concerning instances, services and nodes. It is a notification mechanism that Oracle RAC uses to notify other processes about configuration and service-level information, including service status changes such as UP or DOWN events. Applications can respond to FAN events and take immediate action.

Where can we apply FAN UP and DOWN events?
FAN UP and FAN DOWN events can be applied to instances, services and nodes.

Use of FAN events in case of a cluster configuration change?
During cluster configuration changes, the Oracle RAC high availability framework publishes a FAN event immediately when a state change occurs in the cluster, so applications can receive FAN events and react immediately. This prevents applications from having to poll the database and discover a problem only after such a state change.

Why should we have separate homes for the ASM instance?
It is a good practice to keep the ASM home separate from the database home (ORACLE_HOME). This helps in upgrading and patching ASM and the Oracle database software independently of each other. Also, we can deinstall the Oracle database software independently of the ASM instance.

What is the advantage of using ASM?
ASM is the Oracle-recommended storage option for RAC databases, as ASM maximizes performance by managing the storage configuration across the disks. ASM does this by distributing the database files across all of the available storage within the cluster database environment.

What is rolling upgrade?
Rolling upgrade is an ASM feature introduced in Oracle Database 11g. ASM instances in Oracle Database 11g (from release 11.1) can be upgraded or patched using the rolling upgrade feature. This enables us to patch or upgrade ASM nodes in a clustered environment without affecting database availability. During a rolling upgrade we can maintain a functional cluster while one or more of the nodes in the cluster run different software versions.
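As a sketch (the target version string below is only an example), a rolling migration is started and stopped on the ASM instances with:

SQL> ALTER SYSTEM START ROLLING MIGRATION TO '11.1.0.7.0';
SQL> ALTER SYSTEM STOP ROLLING MIGRATION;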

Can rolling upgrade be used to upgrade from 10g to 11g database?
No, it can be used only for Oracle Database 11g releases (from 11.1 onwards).

Initialization parameters that must have same value for every instance in an Oracle RAC database:-
Some initialization parameters are critical at database creation time and must have the same values. Their values must be specified in the SPFILE or PFILE for every instance. The parameters that must be identical on every instance are listed below:
ACTIVE_INSTANCE_COUNT
ARCHIVE_LAG_TARGET
COMPATIBLE
CLUSTER_DATABASE
CLUSTER_DATABASE_INSTANCES
CONTROL_FILES
DB_BLOCK_SIZE
DB_DOMAIN
DB_FILES
DB_NAME
DB_RECOVERY_FILE_DEST
DB_RECOVERY_FILE_DEST_SIZE
DB_UNIQUE_NAME
INSTANCE_TYPE (RDBMS or ASM)
PARALLEL_MAX_SERVERS
REMOTE_LOGIN_PASSWORD_FILE
UNDO_MANAGEMENT

Must the DML_LOCKS and RESULT_CACHE_MAX_SIZE parameters be identical on all instances?
These parameters must be identical on all instances only if their values are set to zero; otherwise they can differ.
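A quick way to check parameter consistency across instances (the parameter names below are chosen only as examples) is to query GV$PARAMETER:

SQL> select inst_id, name, value from gv$parameter
     where name in ('cluster_database','db_block_size','undo_management')
     order by name, inst_id;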

What two parameters must be set at the time of starting up an ASM instance in a RAC environment?
The parameters CLUSTER_DATABASE and INSTANCE_TYPE must be set.
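A minimal sketch of the relevant entries in an ASM instance parameter file (the disk group names are hypothetical):

*.instance_type='ASM'
*.cluster_database=TRUE
*.asm_diskgroups='DATA','FRA'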

Mention the components of Oracle clusterware:-
Oracle Clusterware is made up of components such as the voting disk and the Oracle Cluster Registry (OCR).

What is a CRS resource?
Oracle Clusterware is used to manage high-availability operations in a cluster. Anything that Oracle Clusterware manages is known as a CRS resource. Some examples of CRS resources are a database, an instance, a service, a listener, a VIP address, an application process, etc.

What is the use of OCR?
OCR (Oracle Cluster Registry) stores the configuration information of CRS resources; Oracle Clusterware manages CRS resources based on this information.

How does Oracle Clusterware manage CRS resources?
Oracle Clusterware manages CRS resources based on the configuration information of CRS resources stored in the OCR (Oracle Cluster Registry).

Some Oracle clusterware tools and their uses?
OIFCFG - allocates and deallocates network interfaces
OCRCONFIG - command-line tool for managing the Oracle Cluster Registry
OCRDUMP - dumps the contents of the Oracle Cluster Registry to a text file
CVU - Cluster Verification Utility, used to verify the cluster and Oracle Clusterware setup
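Hedged command examples for some of these tools (the dump file name and verification stage shown are illustrative):

ocrconfig -showbackup
ocrdump /tmp/ocr_dump.txt
cluvfy stage -post crsinst -n all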

What are the modes of deleting instances from Oracle Real Application Clusters databases?
We can delete instances using the silent mode or the interactive mode of DBCA (Database Configuration Assistant).

How do we remove ASM from an Oracle RAC environment?
We need to stop and delete the instance on the node first, in interactive or silent mode. After that, ASM can be removed using the srvctl tool as follows:
srvctl stop asm -n node_name
srvctl remove asm -n node_name
We can verify if ASM has been removed by issuing the following command:
srvctl config asm -n node_name

How do we verify that an instance has been removed from OCR after deleting an instance?
Issue the following srvctl command:
srvctl config database -d database_name
cd CRS_HOME/bin
./crs_stat

How do we verify an existing current backup of OCR?
We can verify the current backup of OCR using the following command :
ocrconfig -showbackup

What are the performance views in an Oracle RAC environment?
We have V$ views that are instance specific. In addition, we have GV$ views, called global views, which have an INST_ID column of numeric data type. GV$ views obtain information from the individual V$ views.
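For example, a standard query such as the following lists every instance in the cluster (no assumptions beyond a running RAC database):

SQL> select inst_id, instance_name, host_name, status from gv$instance order by inst_id;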

What are the types of connection load-balancing?
There are two types of connection load-balancing: server-side load balancing and client-side load balancing.

What is the difference between server-side and client-side connection load balancing?
Client-side load balancing is done by the client, which spreads connection requests across the addresses listed in its connect descriptor. In server-side load balancing, the listener uses the load-balancing advisory to redirect connections to the instance providing the best service.
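A hedged tnsnames.ora sketch for client-side load balancing (the host names and service name are hypothetical); server-side load balancing additionally relies on the REMOTE_LISTENER setting of the instances:

ORCL =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (LOAD_BALANCE = ON)
      (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    )
    (CONNECT_DATA = (SERVICE_NAME = orcl))
  )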

Give the usage of srvctl:-
srvctl start instance -d db_name -i "inst_name_list" [-o start_options]
srvctl stop instance -d name -i "inst_name_list" [-o stop_options]
srvctl stop instance -d orcl -i "orcl3,orcl4" -o immediate
srvctl start database -d name [-o start_options]
srvctl stop database -d name [-o stop_options]
srvctl start database -d orcl -o mount

Tuesday, May 5, 2009

Configure Parallel Concurrent Processing R12

Configure Parallel Concurrent Processing



Set Up PCP



1. Execute AutoConfig by running $INST_TOP/admin/scripts/adautocfg.sh on all concurrent nodes.

2. Source the application environment.

3. Check the configuration files tnsnames.ora and listener.ora located at $INST_TOP/ora/10.1.2/network/admin. Ensure that you have information for all the other concurrent nodes in the FNDSM and FNDFS entries.

4. Restart the application listener process on each application tier node.

5. Log on to Oracle E-Business Suite Release 12 as SYSADMIN, and choose the System Administrator Responsibility. Navigate to Install > Nodes screen and ensure that each node in the cluster is registered.


6. Verify whether the Internal Monitor for each node is defined properly with correct primary and secondary node specification and work shift details. Also ensure the Internal Monitor manager is activated by going to Concurrent -> Manager -> Administrator and activating the manager. (For example, Internal Monitor: Host2 must have primary node as host2 and secondary node as host3).

The secondary nodes for Internal Monitor: host2 and Internal Monitor: host3 were not defined.
Also, the work shifts were not present. They were set to 24 hrs and the processes were set to 1.

Went to the Concurrent -> Manager -> Administrator screen and activated the above two managers.


7. Set the $APPLCSF environment variable on all the CP nodes pointing to a log directory on a shared file system.

Change to be made in the context file. Initially it was :

$ORACLE_SID $> grep -i applcsf $CONTEXT_FILE
/ORACLE_HOME/inst/apps/SID_HOST/logs/appl/conc

Changed it to :

$ORACLE_SID $ > grep -i applcsf $CONTEXT_FILE
/ORACLE_HOME/inst/apps/$ORACLE_SID /logs/appl/conc

Do not run AutoConfig yet; more changes have to be made in the step below. We will run AutoConfig after that.

These changes need to be made only on the concurrent manager tiers.


8. Set the $APPLPTMP environment variable on all the CP nodes to the value of the UTL_FILE_DIR entry in init.ora on the database nodes. This value should be pointing to a directory on a shared file system.

Initially, APPLPTMP was set to '/usr/tmp'.

We need this to be on a file system that is shared across all the CM nodes and is also writable by the owner of the Oracle database.

Now we need this to be the first entry in the "utl_file_dir" parameter in the database.

Use Grid Control to make this change to the "utl_file_dir" parameter for the database. This requires a bounce of the database.
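If Grid Control is not at hand, the same change can be sketched in SQL*Plus (the directory paths below are hypothetical); utl_file_dir is a static parameter, so the change goes to the spfile and takes effect after the bounce:

SQL> alter system set utl_file_dir='/shared/applptmp','/usr/tmp' scope=spfile sid='*';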

Also, modify the CONTEXT_FILE on all the apps tiers to have APPLPTMP and APPLTMP point to the shared mount point, and run AutoConfig.

$ORACLE_SID $> grep -i s_applptmp $CONTEXT_FILE
/usr/tmp

$ORACLE_SID $> grep -i s_appltmp $CONTEXT_FILE
/ORACLE_HOME/inst/apps/$ORACLE_SID /appltmp

Both of the above were changed to

$ORACLE_SID $> grep -i s_applptmp $CONTEXT_FILE
/ORACLE_HOME/inst/apps/temp

$ORACLE_SID $ > grep -i s_appltmp $CONTEXT_FILE
/ORACLE_HOME/inst/apps/temp


These changes need to be made only on the concurrent manager tiers.

9. Set the profile option 'Concurrent: PCP Instance Check' to OFF if database instance-sensitive failover is not required. By setting it to 'ON', Concurrent Managers will fail over to a secondary application tier node if the database instance they are connected to fails.




Set Up Transaction Managers



1. Shut down the application services (servers) on all nodes.

2. In the Oracle RAC environment, shut down all the database instances cleanly with the command:
SQL>shutdown immediate;

3. Edit $ORACLE_HOME/dbs/_ifile.ora. Add the following parameters:

_lm_global_posts=TRUE
_immediate_commit_propagation=TRUE

4. Start the instances on all database nodes, one by one.

5. Start up the application services (servers) on all nodes.

6. Log on to Oracle E-Business Suite Release 12 using SYSADMIN login and System Administrator Responsibility.
Navigate to Profile > System and change the profile option 'Concurrent: TM Transport Type' to 'QUEUE', and verify that the transaction manager works across the RAC instances.

7. Navigate to Concurrent > Manager > Define screen, and set up the primary and secondary node names for transaction managers.

8. Restart the concurrent managers.

9. If any of the transaction managers have a deactivated status, activate them using Concurrent > Manager > Administrator.




Set Up Load Balancing on Concurrent Processing Nodes



1. Edit the applications context file through the Oracle Applications Manager interface and set the value of Concurrent Manager TWO_TASK (s_cp_twotask) to the load balancing alias.


Change need to be made in CONTEXT_FILE.

Initially it was:
$ORACLE_SID $ > grep -i s_cp_twotask $CONTEXT_FILE
$ORACLE_SID


It was changed to :

$ORACLE_SID $> grep -i s_cp_twotask $CONTEXT_FILE
$ORACLE_SID _TWO TASK



These changes need to be made only on the concurrent manager tiers (host2 and host3).
2. Execute AutoConfig by running $INST_TOP/admin/scripts/adautocfg.sh on all concurrent nodes.

Wednesday, January 21, 2009

This configuration has been successfully tested on Oracle 10g (Release 2). We will set up this configuration on the following machines.
Contents:

1 PRIMARY SITE: (Information)
2 STANDBY SITE: (Information)
3 CONFIGURATION ON PRIMARY
4 CONFIGURATION ON STANDBY
5 LOG SHIPPING
6 START PHYSICAL LOG APPLY SERVICE.
7 ROLE TRANSITION :
8 Manual Switchover:
9 DATAGUARD BROKER :
10 BROKER CONFIGURATION:
11 Switchover:
12 DATAGUARD OBSERVER :
13 OBSERVER CONFIGURATION:
14 OBSERVER SITE: (Information)
15 Step 1:
16 Step 2:
17 Step 3:
18 Step 4:
19 Step 5:
20 Data Guard With Real Time
21 Real Time Apply – Benefits
22 Note
23 Converting To Real time
24 Enable Real Time

PRIMARY SITE: (Information)
Database name (db_name): ORCL
Database Unique Name (db_unique_name): primary
TNS Service Name: to_standby (Through this service, the primary machine will be connected to STANDBY machine)

STANDBY SITE: (Information)
Database name (db_name): ORCL
Database Unique Name (db_unique_name): standby
TNS Service Name: to_primary (Through this service, the standby machine will be connected to PRIMARY machine)

CONFIGURATION ON PRIMARY
Create pfile from spfile on the primary database:
SQL> Create pfile from spfile=’C:\oracle\product\10.2.0\db_1\dbs\spfilePrimary.ora’;


Then make/add following settings in the initPrimary.ora file on the PRIMARY Machine.

db_unique_name='PRIMARY'
FAL_Client='to_primary'
FAL_Server='to_standby'
Log_archive_config='DG_CONFIG=(primary,standby)'
Log_archive_dest_1='Location=c:\oracle\backup VALID_FOR=(ALL_LOGFILES,ALL_ROLES) db_unique_name=primary'
Log_archive_dest_2='Service=to_standby lgwr async VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) db_unique_name=standby'
Log_archive_dest_state_1=ENABLE
Log_archive_dest_state_2=DEFER
Service_names='primary'
Standby_File_Management='AUTO'
Create password file using ‘cmd’.
C:\> orapwd file=C:\oracle\product\10.2.0\db_1\database\PWDOrcl.ora password=oracle entries=5 [force=y].
Force option is used to replace an existing password file.
Now startup the PRIMARY database to MOUNT stage.
• SQL> startup mount;

Make the following changes: Take the database to Archive Mode.

• SQL> Alter database ArchiveLog;

Enable Force Logging.

• SQL> Alter database Force Logging;
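To confirm both settings (standard V$DATABASE columns, nothing assumed):

• SQL> select log_mode, force_logging from v$database;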

On the PRIMARY site, also create a standby redo logfile for the standby database. This standby redo logfile will be used by the Data Guard observer later on. If you don't want to use the DG broker (observer), then there is no need to create a standby redo logfile.

• SQL> Alter database add standby logfile ('C:\oracle\product\10.2.0\oradata\ORCL\StandbyRedo.log') size 150m;

Now shutdown the primary database.

• SQL> shutdown immediate;

Copy all the Datafiles and standby redo logfile from PRIMARY site to the same location on the STANDBY site. Then again startup the PRIMARY database to mount stage.
• SQL> startup mount;
Now create a standby controlfile on the PRIMARY site.
• SQL> Alter database create standby controlfile as 'c:\oracle\backup\standcontrol.ctl';

Now copy this newly created standby control file to the standby site where the other database files such as datafiles, logfiles and control files are located. Rename the copies of this file to Control01.ctl, Control02.ctl and Control03.ctl.

Create spFile from pfile.

• SQL> Create spfile from pfile;

Restart the primary database.
Now on PRIMARY site create a service in TNSnames.ora file through which the PRIMARY site will be connected to the Standby machine.

TO_STANDBY = (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.12.196)(PORT = 1521)) ) (CONNECT_DATA = (SERVICE_NAME = standby) ) )

Also check the connectivity from the SQL Prompt.

• SQL> connect sys/oracle@to_standby as sysdba
Connected.

Service can also be created through Net Manager utility available with Oracle Server. Connectivity can also be checked there. Register the Primary Database in the Listener.ora file.
Then stop and start the listener in the ‘cmd’.
> Lsnrctl stop
> Lsnrctl start

Query the DATABASE_ROLE column from V$DATABASE to view the role of primary database. It should return ‘PRIMARY’.
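The query referred to above is simply:

• SQL> select database_role from v$database;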

CONFIGURATION ON STANDBY

Check the mode of Archiving by following command:
• SQL> Archive Log List

Then create pfile from spfile on the standby database:

SQL> Create pfile from spfile=’C:\oracle\product\10.2.0\db_1\dbs\spfileStandby.ora’;

Then make/add following settings in the initStandby.ora file on the STANDBY Machine.

db_unique_name=’STANDBY’
FAL_Client=’to_standby’
FAL_Server=’to_primary’
Log_archive_config='DG_CONFIG=(primary,standby)'
Log_archive_dest_1='Location=c:\oracle\backup VALID_FOR=(ALL_LOGFILES,ALL_ROLES) db_unique_name=standby'
Log_archive_dest_2='Service=to_primary VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) db_unique_name=primary'
Log_archive_dest_state_1=ENABLE
Log_archive_dest_state_2=ENABLE
Service_names='STANDBY'
Standby_File_Management='AUTO'
db_file_name_convert='/home/u001/primary/','/home/u001/standby/'
log_file_name_convert='/home/u001/primary/','/home/u001/standby/'
lock_name_space=standby
Create password file using ‘cmd’.
C:\> orapwd file=C:\oracle\product\10.2.0\db_1\database\PWDOrcl.ora password=oracle entries=5 [force=y].
Force option is used to replace an existing password file.

Now on STANDBY site create a service in TNSnames.ora file through which the STANDBY site will be connected to the PRIMARY machine.

TO_PRIMARY = (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.12.146)(PORT = 1521)) ) (CONNECT_DATA = (SERVICE_NAME = primary) ) )

Check the connectivity from the SQL Prompt.
• SQL> connect sys/oracle@to_primary as sysdba
Connected.

Service can also be created through Net Manager utility available with Oracle Server. Connectivity can also be checked there. Register the Standby Database in the Listener.ora file. Then stop and start the listener in the ‘cmd’.

> Lsnrctl stop
> Lsnrctl start

Create spfile from pfile.
• SQL> Create spfile from pfile;
Restart the database. Now startup the STANDBY database to mount stage.
• SQL> startup mount
Enable Force Logging.
• SQL> Alter database Force Logging;

Query the column DATABASE_ROLE from V$DATABASE to view the role of standby database. It should return ‘PHYSICAL STANDBY’.

LOG SHIPPING

On PRIMARY site enable Log_archive_dest_state_2 to start shipping archived redo logs.
• SQL> Alter system set Log_archive_dest_state_2=ENABLE scope=both;
System Altered.

Check the sequence # and the archiving mode by executing following command.
• SQL> Archive Log List

Then switch the logfile.
• SQL> Alter system switch logfile;
System Altered.

Now on the PRIMARY site check the status of Standby Archiving destination.
• SQL> Select Status, Error from v$Archive_dest where dest_id=2;

The STATUS should return VALID. If it returns an error, then check the connectivity between the primary and standby machines.

START PHYSICAL LOG APPLY SERVICE.

On the STANDBY database execute the following command to start Managed Recovery Process (MRP). This command is executed on Mount stage.

• SQL> Alter Database Recover Managed Standby Database;
Database Altered.

By executing the above command, the current session will hang because MRP is a foreground recovery process; it waits for the logs to arrive and applies them. To avoid this, you can execute the following command with the DISCONNECT option.

• SQL> Alter Database Recover Managed Standby Database Disconnect;
Database Altered.

Now the session will be available to you and MRP will work as a background process and apply the redo logs.

You can check whether the log is applied or not by querying V$ARCHIVED_LOG.
• SQL> Select Name, Applied, Archived from v$Archived_log;

This query will return the name of archived files and their status of being archived and applied.
ROLE TRANSITION :

In a Data Guard configuration, two roles are defined: primary and standby. The primary database is the production database used by the users. For high availability purposes, Data Guard provides a standby database which remains available alongside the primary database.
There can be more than one standby database, and standby databases can be at remote locations as well.
Oracle Data Guard supports two role transition operations:

Switchover:
Switchover allows the primary database to switch roles with one of its standby databases. There is no data loss during a switchover. After a switchover, each database continues to participate in the Data Guard configuration with its new role.

Failover:
Failover transitions a standby database to the primary role in response to a primary database failure. If the primary database was not operating in either maximum protection mode or maximum availability mode before the failure, some data loss may occur. After a failover, the failed database no longer participates in the Data Guard configuration. It needs to be reinstated to become an active part of Data Guard configuration.
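As a hedged sketch of a manual failover without the broker (assuming no further redo can be retrieved from the failed primary), the commands on the physical standby would look like:

• SQL> Alter Database Recover Managed Standby Database Finish;
• SQL> Alter Database Commit to Switchover to Primary;
• SQL> Alter Database Open;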

Manual Switchover:

On the PRIMARY Database: (Open stage)

Query V$DATABASE to check the role of Primary Database.
• SQL> Select Database_role from v$Database;
It will return "PRIMARY".

Now check the Switchover Status of the Primary Database.

• SQL> Select switchover_status from v$Database;

It will return “SESSIONS ACTIVE”.

Now you are ready to perform a manual switchover. Execute the following command using “WITH SESSION SHUTDOWN” option.

• SQL> Alter Database Commit to Switchover to Physical Standby with session Shutdown;
Database Altered.

Now your PRIMARY Database has become Physical Standby.
To verify this change, again query the Database_role column of V$DATABASE. Now it will return “PHYSICAL STANDBY”.

Shutdown the database and again Start it to mount stage.

On the PHYSICAL STANDBY Database: (Mount stage)
Query V$DATABASE to check the role of the standby database.

• SQL> Select Database_role from v$Database;
It will return "PHYSICAL STANDBY".

Now check the Switchover Status of the Standby Database.
• SQL> Select switchover_status from v$Database;
It will return “SESSIONS ACTIVE”.

Now cancel the MRP which is running in the background of the Standby Database. Execute the following command:

• SQL> Alter database Recover Managed Standby Database Cancel;
Database Altered.

Now you are ready to perform a manual switchover from Physical Standby to Primary. Execute the following command using “WITH SESSION SHUTDOWN” option.

• SQL> Alter Database Commit to Switchover to PRIMARY with session Shutdown;
Database Altered.

Now your PHYSICAL STANDBY Database has become PRIMARY. To verify this change, again query the Database_role column of V$DATABASE. Now it will return “PRIMARY”.

Shutdown the database and again Start it to Open stage.

DATAGUARD BROKER :
The Data Guard broker is a distributed management framework that automates and centralizes the creation, maintenance, and monitoring of Data Guard configurations. You can use either the Oracle Enterprise Manager graphical user interface (GUI) or the command-line interface (CLI) to automate and simplify:

• Creating and enabling Data Guard configurations, including setting up log transport services and log apply services.
• Managing an entire Data Guard configuration from any system in the configuration.
• Managing and monitoring Data Guard configurations that contain Real Application Clusters primary or standby databases.

BROKER CONFIGURATION:
On both Primary and Standby sites, change the initialization parameter in the spfile to enable the Data guard broker.

•SQL> Alter system set dg_broker_start=True scope=both;
System Altered.

On the PRIMARY site, open the ‘cmd’ and start Command Line Interface (CLI) of the Dataguard Broker (DGMGRL).

C:\> dgmgrl
DGMGRL for 32-bit Windows: Version 10.2.0.1.0 - Production
Copyright (c) 2000, 2005, Oracle. All rights reserved.
Welcome to DGMGRL, type "help" for information.
DGMGRL>

Now connect to the database through the service you made previously.

DGMGRL> connect sys/oracle@to_primary
Connected.
Create broker configuration.

DGMGRL> create configuration ‘broker1’ as
> primary database is ‘primary’
> connect identifier is to_primary;

(‘to_primary’ in Connect identifier is the service name through which the broker is connected to the PRIMARY database)

Add Standby Database to the above configuration.

DGMGRL> Add database ‘standby’ as
> connect identifier is to_standby
> maintained as physical;

(‘to_standby’ in Connect identifier is the service name through which the broker is connected to the STANDBY database)

Now the configuration has been set up but it is still disabled. You can view the configuration by executing:
DGMGRL> show configuration
Configuration
Name: broker1
Enabled: NO
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLE
Databases:
primary - Physical standby database
standby - Primary database
Current status for "broker1":
DISABLE

The next step is to ENABLE the configuration ‘broker1’.

DGMGRL> enable configuration;
Enabled
Again view the configuration.
DGMGRL> show configuration
Configuration
Name: broker1
Enabled: YES
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLE
Databases:
primary - Physical standby database
standby - Primary database
Current status for "broker1":
SUCCESS

Switchover:
Now we are ready to switch over the PRIMARY database Role to STANDBY database Role.


DGMGRL> switchover to ‘Standby’;
…..…..

Primary Database Successfully converted to Physical Standby. You can again switch over the Standby Database to Primary by executing following command.

DGMGRL> switchover to ‘Primary’;
…..…..

Standby Database is successfully converted to Primary Database.

Failover:
Failover can be done through the same configuration without any alteration. You simply need to execute following command:


DGMGRL> failover to ‘Standby’;
….
Failover to standby succeeded.
And also…

DGMGRL> failover to ‘Primary’;
….
Failover to primary succeeded.

DATAGUARD OBSERVER :
The observer is a utility that comes with Data Guard. Its basic purpose is to keep the database available to clients at all times. It is started at a location separate from the primary and standby sites. After it is started, the observer continuously watches both sites. Whenever the primary database fails for any reason and the connection between the primary site and the observer breaks, the observer waits for a certain time, which is configured in the broker configuration. When this time passes, the observer automatically starts the failover process. As a result, the physical standby database is converted to the primary database. After this failover, users reconnect to the database and are automatically redirected to the new primary database.
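The wait time mentioned above is the FastStartFailoverThreshold broker property (the 30-second value below is just an example):

DGMGRL> edit configuration set property FastStartFailoverThreshold = 30;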

OBSERVER CONFIGURATION:
There are some important configuration settings that need to be in place before starting the Data Guard (DG) observer. The main thing is to set the configuration to the Maximum Availability protection mode; the DG broker configuration must also be set to the "MaxAvailability" protection mode. After this, we need to enable the fast-start failover option of the broker configuration. Without these settings, the broker will not allow the observer to perform an automatic failover. The following are the steps to enable the observer.


OBSERVER SITE:
OBSERVER Machine IP: 192.168.12.165
PRIMARY Machine IP: 192.168.12.146
STANDBY Machine IP: 192.168.12.196

TNS Service Name for PRIMARY site: to_primary (Through this service, the observer machine will be connected to PRIMARY machine)

TNS Service Name for STANDBY site: to_standby (Through this service, the observer machine will be connected to STANDBY machine)

(NOTE: Fresh DG broker configuration will be created for implementation of Observer. Therefore, remove any previously created configuration.)

Step 1:
Create TNS services on the observer machine for the PRIMARY site and the STANDBY site, through which the observer machine will be connected to the primary and standby databases.
TNS service name for PRIMARY site is ‘to_primary’. TNS service name for STANDBY site is ‘to_standby’.

Step 2:
Before starting the broker CLI (DGMGRL), we have to enable FLASHBACK on both the primary and standby databases.
At mount stage execute:

• SQL> Alter database Flashback on;
Database altered.

Step 3:
On the STANDBY site, edit the Log_Archive_Dest_2 parameter and add three attributes, i.e. LGWR AFFIRM SYNC. These attributes are needed to bring the broker protection mode to MaxAvailability.

• SQL> Alter System set Log_archive_dest_2='Service=to_primary LGWR AFFIRM SYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) db_unique_name=primary';
System altered.

Step 4:
Now execute the following command on the primary database to change the protection mode from MAXIMUM PERFORMANCE to MAXIMUM AVAILABILITY.


• SQL> Alter database set standby database to maximize availability;

Shut down the PRIMARY database and then restart it. Check the protection mode on both PRIMARY and STANDBY databases.

• SQL> select protection_mode from v$database;
It should return MAXIMUM AVAILABILITY.

Step 5:
Now start the DG CLI (DGMGRL) to start the broker.
C:\> dgmgrl
DGMGRL for 32-bit Windows: Version 10.2.0.1.0 - Production
Copyright (c) 2000, 2005, Oracle. All rights reserved.
Welcome to DGMGRL, type "help" for information.

DGMGRL>
DGMGRL> connect sys/oracle@to_primary
Connected.
Check any previously created configuration and remove it.

DGMGRL> show configuration;
If a configuration is displayed then remove it.

DGMGRL> remove configuration;
Removed.

Now create a new configuration.
DGMGRL> create configuration ‘broker1’ as
> primary database is ‘primary’
> connect identifier is to_primary;
Configuration “broker1” created with primary database “primary”

DGMGRL> add database ‘standby’ as
> connect identifier is to_standby
> maintained as physical;
Database "standby" added

DGMGRL> show configuration;
Configuration
Name: broker1
Enabled: NO
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLE
Databases:
primary - Physical standby database
standby - Primary database
Current status for "broker1":
DISABLE

DGMGRL> enable configuration;
Enabled.

DGMGRL> show configuration;
Configuration
Name: broker1
Enabled: YES
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLE
Databases:
primary - Physical standby database
standby - Primary database
Current status for "broker1":
SUCCESS
DGMGRL> edit database PRIMARY set property LogXptMode=’SYNC’;
Property "logxptmode" updated
DGMGRL> edit database STANDBY set property LogXptMode=’SYNC’;
Property "logxptmode" updated
DGMGRL> edit configuration set protection mode as MaxAvailability;
Succeeded.
DGMGRL> enable fast_start failover;
Enabled.
DGMGRL> show configuration;
Configuration
Name: dg1
Enabled: YES
Protection Mode: MaxAvailability
Fast-Start Failover: ENABLED
Databases:
primary - Physical standby database
- Fast-Start Failover target
standby - Primary database

Current status for "dg1":
Warning: ORA-16608: one or more databases have warnings

If you check this warning by querying the StatusReport property of the PRIMARY database, you will find that the fast-start failover observer is not started.
Simply start the observer.
DGMGRL> show database primary StatusReport

Now start the observer.
DGMGRL> start observer
Observer Started.

The observer has now started working. The current 'cmd' session will not be returned to you because the observer is a foreground process; it continuously observes the PRIMARY site. To check the configuration, open another 'cmd' window and check the configuration.

DGMGRL> show configuration;
Configuration
Name: broker1
Enabled: YES
Protection Mode: MaxAvailability
Fast-Start Failover: ENABLED
Databases:
standby - Primary database
primary - Physical standby database
- Fast-Start Failover target
Fast-Start Failover

Threshold: 30 seconds
Observer: orat

Current status for "broker1":
SUCCESS

At this stage everything is running normally; an identical copy of the PRIMARY database is being maintained at the STANDBY site. If for any reason the PRIMARY database crashes, the observer starts the failover.

DGMGRL> start observer
Observer started.
15:21:21.69 Wednesday, January 07, 2009
Initiating fast-start failover to database “standby”…
Performing failover NOW, please wait…
Operation requires shutdown of instance “orcl” on database “standby”
Shutting down instance “orcl”…
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.

Operation requires startup of instance “orcl” on database “standby” Starting instance “orcl”…


Data Guard With Real Time
Redo data is applied to the standby database as soon as it is received from the primary database. In Oracle9i Data Guard, this apply had to wait until an archived log was created on the standby database.

For Redo Apply:

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE


For SQL Apply:

ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE

When real-time apply is enabled, the RECOVERY_MODE column in V$ARCHIVE_DEST_STATUS displays "MANAGED REAL TIME APPLY".
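This can be checked on the standby with a query such as the following (standard view and columns; dest_id 1 is assumed to be the local archiving destination):

SQL> select dest_id, recovery_mode from v$archive_dest_status where dest_id = 1;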


Real Time Apply – Benefits
Standby databases are now more closely synchronized with the primary
More up-to-date, real-time reporting
Faster switchover and failover times
Reduces planned and unplanned downtime
Better Recovery Time Objective (RTO) for DR

Note


For real-time apply we must create standby redo log files. Standby redo logs can be created even after the standby has been created.

Create the SRLs:

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('C:\oracle\product\10.2.0\oradata\ORCL\StandbyRedo4.log') SIZE 100M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 ('C:\oracle\product\10.2.0\oradata\ORCL\StandbyRedo5.log') SIZE 100M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 ('C:\oracle\product\10.2.0\oradata\ORCL\StandbyRedo6.log') SIZE 100M;

Converting To Real time
Real-time apply: When real-time apply is enabled, the log apply services can apply redo data as it is received, without waiting for the current standby redo log file to be archived. In this example we are going to use LGWR on the primary for redo transport, just to prove that a record committed on the primary shows up on the standby without switching a log. However, real-time apply works with both LGWR and ARCH as long as SRLs are used.


Set up log_archive_dest_2 on the primary with LGWR ASYNC or SYNC

log_archive_dest_2='SERVICE=to_standby LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=standby'

Shut down and start up the primary, or, if the change was made dynamically, switch a log file.
You will see the following message in the alert log:

*LGWR: Setting 'active' archival for destination LOG_ARCHIVE_DEST_2

Enable Real Time

On the standby cancel out of the current managed recovery

SQL>ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

Place it back in recovery with Real time apply
SQL>ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;


Test Case
Create a table on Primary.


SQL> create table test (name varchar2(30));
Table created.

Now Insert some records and commit.

SQL> insert into test values ('Vikram');
1 row created.
SQL> commit;
Commit complete.
Now on Standby Run these commands.

SQL>ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
Database altered.
SQL>ALTER DATABASE OPEN READ ONLY;
Database altered.

Now check the table and record which you inserted.
SQL> select * from test;
NAME
------------------------------
Vikram

We can see our table and record without any log switch.

After checking your real-time apply work, don't forget to put your database back in recovery mode ;)
Place the standby back in managed recovery mode:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
Database altered.

This will take the standby directly from read only mode and place it in managed recovery mode.
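To double-check that the MRP is back and applying, a query like the following can be used (standard V$MANAGED_STANDBY view, no assumptions):

SQL> select process, status, sequence# from v$managed_standby;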