
Creating a 3-node standby database. Part I
10 March 2010

Posted by David Alejo Marcos in ASM, Enterprise Manager, RAC, Standby.

As promised, I will describe how I created a 3-node standby database.

The goal is to move from a single-instance database to a 3-node RAC. The steps we are following (the project is still in progress) are:

1.- Create a 3-node standby database in the UK. This will be our primary database.

2.- Create a 3-node standby database in the US. This will be our standby database.

3.- Keep current standby up to date.

4.- Test switchover from the current single-instance production database to the 3-node RAC in the UK, then to the 3-node RAC in the US, back to the 3-node RAC in the UK and, finally, back to the current primary.

5.- Do the switchover for real.

After the switchover, we will keep the current production and current DR up to date in case we need to roll back.

In this post I will cover the first point. The software installation and standby configuration in the US are quite similar to what we followed for the UK.

I did not have to install CRS, as our target servers already run a 3-node RAC database. We will be sharing those servers between two different databases.

The first step is to download the software from Oracle. Our target was 10.2.0.4.1 for Linux x86-64.

Steps to follow:

Installation of Software:

1.- Install Oracle 10.2.0.1 (p4679769_10201_Linux-x86-64.zip).

If you choose the Advanced Installation, the installer will identify the server as part of a CRS installation and will prompt you to install on all servers. My suggestion is to accept; it will save you time and headaches.

2.- Install Oracle 10.2.0.4 (p6079224_10204_Linux-x86-64.zip).

3.- Install OPatch 11, which is necessary for patches and PSUs.

4.- Install PSU 10.2.0.4.1 (p8576156_10204_Linux-x86-64.zip).

Please bear in mind that the PSU will be installed in rolling mode (one node at a time, across all nodes in the CRS); the sketch below shows roughly what that looks like.
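A minimal sketch, assuming the PSU was unzipped under /u01/stage and using a hypothetical ORACLE_HOME; the PSU README remains the authoritative reference:

$ export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1   # hypothetical path
$ cd /u01/stage/8576156                                    # assumed staging directory for the PSU
$ $ORACLE_HOME/OPatch/opatch apply                         # repeat on each node in turn
$ $ORACLE_HOME/OPatch/opatch lsinventory                   # verify the PSU appears in the inventory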

Once we have the required software on all three nodes, we need the storage:

ASM:

We use ASM and ASMLib as the standard for our Oracle databases. Once we have checked that the required LUNs are present, we need to add them to ASMLib:

Run ls -lrt /dev/mapper to check the names and sizes:

# ls -lrt /dev/mapper
.......
brw-rw---- 1 root disk 253, 16 Nov 28 11:34 bvm_fra
brw-rw---- 1 root disk 253, 15 Nov 28 11:34 bvm_data
brw-rw---- 1 root disk 253, 17 Nov 28 11:34 bvm_ctl_log

For ASMLib we need to connect as root. The first step is to create the disks:

# cd /etc/init.d

# ./oracleasm createdisk DATA_BVM_1 /dev/mapper/bvm_data_1
Marking disk "DATA_BVM_1" as an ASM disk:                  [  OK  ]

# ./oracleasm createdisk DATA_BVM_2 /dev/mapper/bvm_data_2
Marking disk "DATA_BVM_2" as an ASM disk:                  [  OK  ]

# ./oracleasm createdisk DATA_BVM_3 /dev/mapper/bvm_data_3
Marking disk "DATA_BVM_3" as an ASM disk:                  [  OK  ]

# ./oracleasm createdisk FRA_BVM /dev/mapper/bvm_fra
Marking disk "FRA_BVM" as an ASM disk:                     [  OK  ]

# ./oracleasm createdisk LOGCTL_BVM_1 /dev/mapper/bvm_ctl_log_1
Marking disk "LOGCTL_BVM_1" as an ASM disk:                [  OK  ]

Then we check that those disks are visible from ALL 3 nodes. I prefer to execute the following commands on all 3 nodes to avoid problems:

# ./oracleasm scandisks
# ./oracleasm listdisks

If you prefer, you can list the contents of /dev/oracleasm/disks.
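Listing that directory on each node should show (at least) the disks created above:

# ls /dev/oracleasm/disks
DATA_BVM_1  DATA_BVM_2  DATA_BVM_3  FRA_BVM  LOGCTL_BVM_1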

Now we need to create the diskgroups.

For this we need to connect as SYS to the ASM instance. We use external redundancy for our databases:

1.- Set the environment to ASM using oraenv and connect as SYS, as shown below.
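On the first node that looks something like this (assuming the local ASM instance is called +ASM1; adjust the SID for your environment):

$ . oraenv
ORACLE_SID = [oracle] ? +ASM1
$ sqlplus / as sysdba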

2.- Create diskgroups:

SQL> create diskgroup DATA_BVM external redundancy disk 'ORCL:DATA_BVM_1','ORCL:DATA_BVM_2','ORCL:DATA_BVM_3';

Diskgroup created.

SQL> create diskgroup FRA_BVM external redundancy disk 'ORCL:FRA_BVM';

Diskgroup created.

SQL> create diskgroup LOGCTL_BVM external redundancy disk 'ORCL:LOGCTL_BVM_1';

Diskgroup created.

set linesize 180
col path format a50
select name, header_status, state, path from v$asm_disk;

NAME         HEADER_STATU STATE    PATH
------------ ------------ -------- --------------------------------------------------
....
DATA_BVM     MEMBER       NORMAL   ORCL:DATA_BVM
FRA_BVM      MEMBER       NORMAL   ORCL:FRA_BVM
LOGCTL_BVM   MEMBER       NORMAL   ORCL:LOGCTL_BVM

17 rows selected.

3.- Mount the diskgroups on ALL ASM instances in the CRS.

SQL> alter diskgroup LOGCTL_BVM mount;

….
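To confirm that every instance sees the new diskgroups, a quick check on each node should return something like this (alongside any diskgroups already in use on these servers):

SQL> select name, state from v$asm_diskgroup;

NAME                           STATE
------------------------------ -----------
DATA_BVM                       MOUNTED
FRA_BVM                        MOUNTED
LOGCTL_BVM                     MOUNTED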

Now we can create the database.

There are different ways to create a database: DBCA, export/import, RMAN, etc.

The way I decided to create this standby database was with RMAN (the duplicate target command); it is very straightforward.

The syntax I used is:

rman target sys/xxxx@prod
connect auxiliary /
run {
set until time "to_date('06-MAR-2010 08:00:00','DD-MON-YYYY HH24:MI:SS')";
ALLOCATE AUXILIARY CHANNEL c1 DEVICE TYPE disk;
ALLOCATE AUXILIARY CHANNEL c2 DEVICE TYPE disk;
duplicate target database to preprod nofilenamecheck;
}

You will need an entry in the tnsnames.ora to connect to production; a sample entry is shown below. The full backup was copied from production to one of the nodes.
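A sample entry for the prod alias used above (host name and port are made up for illustration):

prod =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = prod-db1.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = prod)
    )
  )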

In Part II, I will cover the Data Guard configuration and the DR test.
