Tuesday, June 7, 2016

Building a sub $300/month Oracle RAC on AWS - Part III

In the previous two posts (Building a sub $300/month Oracle RAC on AWS - Part I, Building a sub $300/month Oracle RAC on AWS - Part II) we discussed how to set up the network infrastructure, storage infrastructure, and multicast for Oracle RAC.
In this post we will set up the Oracle RAC cluster nodes, install the Oracle Grid Infrastructure (Clusterware) and Oracle RAC database software, and create a working 2-node Oracle RAC database that costs less than $300/month.


We will first create a single-node Oracle RAC database and then add the second node.

Setup First Oracle RAC Node

We will be using a t2.large EC2 instance type, which has 2 vCPUs and 8GB of RAM.
Please refer to the previous post for details on creating the JSON file used to spawn a new instance.
Below is the JSON file I am using for this instance (node1.json).


{
   "DryRun": false,
   "ImageId": "ami-ebbc438b",
   "MinCount": 1,
   "MaxCount": 1,
   "KeyName": "OracleRACKeyPair",
   "SecurityGroupIds": [
       "sg-eca3a58b"
   ],
   "InstanceType": "t2.large",
   "Placement": {
       "AvailabilityZone": "us-west-2c"
   },
   "BlockDeviceMappings": [
       {
           "VirtualName": "root",
           "DeviceName"  : "/dev/sda1",
           "Ebs": {
               "VolumeSize": 200,
               "DeleteOnTermination": true,
               "VolumeType": "standard"
           }
       },
       {
           "VirtualName": "Shared-storage01",
           "DeviceName"  : "/dev/xvdb",
           "Ebs": {
               "VolumeSize": 20,
               "DeleteOnTermination": true,
               "VolumeType": "standard"
           }
       }
   ],
   "Monitoring": {
       "Enabled": false
   },
   "SubnetId": "subnet-dac3dd83",
   "DisableApiTermination": true,
   "InstanceInitiatedShutdownBehavior": "stop",
   "PrivateIpAddress": "10.0.0.11",
   "NetworkInterfaces": [
       {
           "DeviceIndex": 0,
           "AssociatePublicIpAddress": true
       }
   ]
}


Run the following command to spawn the first RAC instance.


$ aws ec2 run-instances --cli-input-json file://node1.json


This returns a long JSON response; search for InstanceId and note its value.
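If you would rather not fish through the JSON, you can also look the instance ID up by the private IP we assigned (the filter and --query path here are just a convenience):

$ aws ec2 describe-instances --filters Name=private-ip-address,Values=10.0.0.11 --query 'Reservations[0].Instances[0].InstanceId' --output text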
Let's add a Name tag to our instance with the value racnode01.


$ aws ec2 create-tags --resources i-b91ad764 --tags Key=Name,Value=racnode01


Add the Virtual IP and SCAN IPs to this newly created instance.


First, find the network interface ID for the instance:


$ aws ec2 describe-instances --instance-ids i-b91ad764|grep NetworkInterfaceId
                           "NetworkInterfaceId": "eni-8bb366d7",


Assign the IP addresses to the network interface:

$ aws ec2 assign-private-ip-addresses --network-interface-id eni-8bb366d7 --private-ip-addresses 10.0.0.21
$ aws ec2 assign-private-ip-addresses --network-interface-id eni-8bb366d7 --private-ip-addresses 10.0.0.31
$ aws ec2 assign-private-ip-addresses --network-interface-id eni-8bb366d7 --private-ip-addresses 10.0.0.32
$ aws ec2 assign-private-ip-addresses --network-interface-id eni-8bb366d7 --private-ip-addresses 10.0.0.33
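To confirm the secondary IPs were assigned, describe the interface (the --query path is a convenience; the plain describe output works too):

$ aws ec2 describe-network-interfaces --network-interface-ids eni-8bb366d7 --query 'NetworkInterfaces[0].PrivateIpAddresses[*].PrivateIpAddress'

This should list 10.0.0.11 along with the four addresses added above.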


SSH to the host to continue setup:


ssh -i "OracleRACKeyPair.pem" ec2-user@ec2-52-24-81-69.us-west-2.compute.amazonaws.com

Set up swap space on the instance:

[root@ip-10-0-0-11 dev]# /sbin/pvcreate -f /dev/xvdb
 Physical volume "/dev/xvdb" successfully created
[root@ip-10-0-0-11 dev]# /sbin/vgcreate swapvol /dev/xvdb
 Volume group "swapvol" successfully created
[root@ip-10-0-0-11 dev]# /sbin/lvcreate swapvol --size 20g --stripes 1 --name swapvol
 Logical volume "swapvol" created.
[root@ip-10-0-0-11 ec2-user]# mkswap /dev/swapvol/swapvol
Setting up swapspace version 1, size = 20971516 KiB
no label, UUID=6cca2252-7a25-4abc-9bb7-ad48a587ae5a
[root@ip-10-0-0-11 ec2-user]# swapon -va
swapon /dev/mapper/swapvol-swapvol
swapon: /dev/mapper/swapvol-swapvol: found swap signature: version 1, page-size 4, same byte order
swapon: /dev/mapper/swapvol-swapvol: pagesize=4096, swapsize=21474836480, devsize=21474836480


Add the following to /etc/fstab and reboot:


/dev/swapvol/swapvol            swap         swap            defaults        0 0
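After the reboot, a quick check should show the 20GB of swap active:

[root@ip-10-0-0-11 ~]# swapon -s
[root@ip-10-0-0-11 ~]# free -m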


Rename the host/instance:

  • Add HOSTNAME="racnode01.oracleraczone.net" to /etc/sysconfig/network
  • Add DHCP_HOSTNAME="racnode01.oracleraczone.net" to /etc/sysconfig/network-scripts/ifcfg-eth0
  • Add "host-name racnode01.oracleraczone.net" to /etc/dhcp/dhclient-eth0.conf
  • echo racnode01.oracleraczone.net > /etc/hostname
  • hostname racnode01.oracleraczone.net
  • Add to /etc/hosts
    127.0.0.1  racnode01.oracleraczone.net racnode01 localhost localhost.oracleraczone.net
    10.0.0.11 racnode01.oracleraczone.net racnode01
  • Add "preserve_hostname: true" to /etc/cloud/cloud.cfg


Reboot the host to make sure the changes take effect.

Connect the instance to the storage:

Let's attach the block storage we configured in the previous post to this instance.
As root:
  1. Discover the iSCSI storage device
    # iscsiadm --mode discoverydb --type sendtargets --portal 10.0.0.51 --discover
10.0.0.51:3260,1 iqn.2016-05.com.amazon:storage.datavol0
  2. Query the configs
    # iscsiadm --mode node --targetname iqn.2016-05.com.amazon:storage.datavol0 --portal 10.0.0.51:3260
  3. Login/Connect to the iSCSI device
    # iscsiadm --mode node --targetname iqn.2016-05.com.amazon:storage.datavol0 --portal 10.0.0.51:3260 --login
Logging in to [iface: default, target: iqn.2016-05.com.amazon:storage.datavol0, portal: 10.0.0.51,3260] (multiple)
Login to [iface: default, target: iqn.2016-05.com.amazon:storage.datavol0, portal: 10.0.0.51,3260] successful.
  4. Update /etc/iscsi/initiatorname.iscsi with the correct InitiatorName
    InitiatorName=iqn.2016-05.com.amazon:storage.datavol0
  5. The block device should be visible now
    [root@racnode01 dev]# ls /dev/sd?
/dev/sda
  6. Partition the disk
[root@racnode01 dev]# fdisk /dev/sda
Welcome to fdisk (util-linux 2.23.2).


Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x46bf3811.


Command (m for help): n
Partition type:
  p   primary (0 primary, 0 extended, 4 free)
  e   extended
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-312475647, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-312475647, default 312475647):
Using default value 312475647
Partition 1 of type Linux and of size 149 GiB is set


Command (m for help): w
The partition table has been altered!


Calling ioctl() to re-read partition table.
Syncing disks.
[root@racnode01 dev]# ls /dev/sda?
/dev/sda1
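One caveat: the iSCSI login above may not survive a reboot on its own. Marking the node record for automatic startup (and enabling the iscsi service) should make it persistent; this is a sketch, so verify against your distribution:

# iscsiadm --mode node --targetname iqn.2016-05.com.amazon:storage.datavol0 --portal 10.0.0.51:3260 --op update --name node.startup --value automatic
# chkconfig iscsi on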

Configure Oracle ASM:

  1. Install oracleasm rpm
    [root@racnode01 dev]# yum install kmod-oracleasm.x86_64 -y
  2. Update /etc/sysconfig/oracleasm
    # ORACLEASM_ENABLED: 'true' means to load the driver on boot.
ORACLEASM_ENABLED=true
# ORACLEASM_UID: Default UID owning the /dev/oracleasm mount point.
ORACLEASM_UID=500
# ORACLEASM_GID: Default GID owning the /dev/oracleasm mount point.
ORACLEASM_GID=500
# ORACLEASM_SCANBOOT: 'true' means fix disk perms on boot
ORACLEASM_SCANBOOT=true
  3. Restart ASM lib
    [root@racnode01 dev]# oracleasm exit
    [root@racnode01 dev]# oracleasm init
Loading module "oracleasm": oracleasm
Configuring "oracleasm" to use device physical block size
Mounting ASMlib driver filesystem: /dev/oracleasm
  4. Create DATA asmdisk
    [root@racnode01 dev]# oracleasm createdisk DATA /dev/sda1
Writing disk header: done
Instantiating disk: done

[root@racnode01 disks]# pwd
/dev/oracleasm/disks
[root@racnode01 disks]# ls -lrt
total 0
brw-rw---- 1 oracle dba 8, 1 May 18 13:45 DATA
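You can also verify the disk through ASMlib itself:

[root@racnode01 disks]# oracleasm listdisks
DATA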

Setup N2N-edge for Oracle interconnect:

  1. Create the following startup script /etc/init.d/n2n-edge.init:
    #! /bin/bash
    ### BEGIN INIT INFO
    # Provides:          n2n-edge
    # Required-Start:    $remote_fs $syslog
    # Required-Stop:     $remote_fs $syslog
    # Default-Start:     2 3 4 5
    # Default-Stop:      0 1 6
    # Short-Description: n2n-edge
    # Description:       Start n2n edge node daemon
    ### END INIT INFO

    N2N_IP=10.1.0.21
    N2N_COMMUNITY=OracleRAC
    N2N_SUPERNODE=10.0.0.51:1200
    prog=edge
    exec=/usr/sbin/edge

    case "$1" in
       start)
           [ -x $exec ] || exit 5
           echo -n "Starting $prog: "
           $exec -v -a $N2N_IP -c $N2N_COMMUNITY -l $N2N_SUPERNODE -E >> /var/log/edge 2>&1 || (echo "[FAILED]";exit 6)
           echo "[OK]"
           ;;
       stop)
           echo -n "Stopping $prog: "
           kill `ps -ef | grep $prog | grep $N2N_COMMUNITY | awk '{print $2}'` || (echo "[FAILED]";exit 0)
           echo "[OK]"
           ;;
       restart)
           echo -n "Restarting $prog: "
           kill `ps -ef | grep $prog | grep $N2N_COMMUNITY | awk '{print $2}'` || (echo "[FAILED]";exit 0)
           echo "STOPPED"
           $exec -v -a $N2N_IP -c $N2N_COMMUNITY -l $N2N_SUPERNODE -E >> /var/log/edge 2>&1 || (echo "[FAILED]";exit 6)
           echo "STARTED"
           echo "[OK]"
           ;;
       *)
           echo $"Usage: $0 {start|stop|status|restart|reload|force-reload}"
           exit 2
    esac
exit $?
  2. Make the script executable
    [root@racnode01 init.d]# chmod +x n2n-edge.init
  3. Add to system startup
    [root@racnode01 init.d]# chkconfig --add n2n-edge.init
[root@racnode01 init.d]# chkconfig n2n-edge.init on
  4. Start the service
    [root@racnode01 init.d]# service n2n-edge.init start
Starting edge: [OK]
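If the daemon came up cleanly, the n2n TAP interface (edge0 by default — an assumption; check ifconfig if yours is named differently) should be holding the interconnect IP:

[root@racnode01 init.d]# ip addr show edge0 | grep 'inet '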

Generate SSH keys for the oracle user

[oracle@racnode01 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Created directory '/home/oracle/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
1b:fa:44:d0:7b:7f:cd:f8:b4:ca:79:2b:06:92:8d:0e oracle@racnode01.oracleraczone.net
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|       .         |
|      . .        |
|       . .       |
|        S =      |
|       E B +   + |
|      . = . o o +|
|       o .  .+.+.|
|        .   .++oo|
+-----------------+


cat /home/oracle/.ssh/id_rsa.pub > /home/oracle/.ssh/authorized_keys
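SSH is strict about permissions; if passwordless login misbehaves later, make sure the key files are locked down:

[oracle@racnode01 ~]$ chmod 700 /home/oracle/.ssh
[oracle@racnode01 ~]$ chmod 600 /home/oracle/.ssh/authorized_keys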

Oracle RAC Installation

On the Oracle software download page, click "See All" for Linux x86-64.
Download both the DB and Grid installation files:
Oracle Database 12c Release 1 Grid Infrastructure (12.1.0.2.0) for Linux x86-64
Oracle Database 12c Release 1 (12.1.0.2.0) for Linux x86-64


Copy these files over to the EC2 instance under /home/oracle/software and unzip them (you may need to install unzip first: yum install unzip -y).

Install Oracle Clusterware Software

Run the grid installer:
[oracle@racnode01 ~]$ /home/oracle/software/grid/runInstaller -silent -ignoreprereq -responsefile /home/oracle/response/grid.rsp
Starting Oracle Universal Installer...


Checking Temp space: must be greater than 415 MB.   Actual 191784 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 20479 MB    Passed
………….
You can find the log of this install session at:
/home/oracle/oraInventory/logs/installActions2016-05-18_05-29-56PM.log


Tail the log file to see if there are any errors.


When the installer completes, you should see two messages:
As a root user, execute the following script(s):
1. /home/oracle/oraInventory/orainstRoot.sh
2. /home/oragrid/product/12.1/root.sh


[root@racnode01 oracle]# /home/oracle/oraInventory/orainstRoot.sh
Changing permissions of /home/oracle/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.


Changing groupname of /home/oracle/oraInventory to dba.
The execution of the script is complete.
[root@racnode01 oracle]# /home/oragrid/product/12.1/root.sh
Check /home/oragrid/product/12.1/install/root_racnode01.oracleraczone.net_2016-05-18_17-42-48.log for the output of root script
[root@racnode01 oracle]# tail -f /home/oragrid/product/12.1/install/root_racnode01.oracleraczone.net_2016-05-18_17-42-48.log


As the oracle user, run the following right after you get the prompt back:


/home/oragrid/product/12.1/crs/config/config.sh -silent -ignoreprereq -responsefile /home/oracle/response/gridconfig.rsp


[oracle@racnode01 .ssh]$ /home/oragrid/product/12.1/crs/config/config.sh -silent -ignoreprereq -responsefile /home/oracle/response/gridconfig.rsp


As a root user, execute the following script(s):
1. /home/oragrid/product/12.1/root.sh


Execute /home/oragrid/product/12.1/root.sh on the following nodes:
[racnode01]


Successfully Setup Software.
[WARNING] [INS-43080] Some of the configuration assistants failed, were cancelled or skipped.
  ACTION: Refer to the logs or contact Oracle Support Services.
As install user, execute the following script to complete the configuration.
1. /home/oragrid/product/12.1/cfgtoollogs/configToolAllCommands RESPONSE_FILE=


Note:
1. This script must be run on the same host from where installer was run.
2. This script needs a small password properties file for configuration assistants that require passwords (refer to install guide documentation).


[root@racnode01 oracle]# /home/oragrid/product/12.1/root.sh
Check /home/oragrid/product/12.1/install/root_racnode01.oracleraczone.net_2016-05-18_18-02-05.log for the output of root script


[oracle@racnode01 ~]$ /home/oragrid/product/12.1/cfgtoollogs/configToolAllCommands RESPONSE_FILE=/home/oracle/response/gridconfig.rsp
Setting the invPtrLoc to /home/oragrid/product/12.1/oraInst.loc


perform - mode is starting for action: configure

Add the Oracle VIPs and SCAN IPs

export ORACLE_HOME=/home/oragrid/product/12.1
export PATH=$PATH:$ORACLE_HOME/bin
export ORACLE_SID=+ASM1


VIP:
crsctl add resource racnode01-v.ec2 -type cluster_resource -attr "CHECK_INTERVAL=10,PLACEMENT=favored,HOSTING_MEMBERS=racnode01,AUTO_START=always,START_DEPENDENCIES=hard(intermediate:ora.racnode01.vip),RELOCATE_BY_DEPENDENCY=1,STOP_DEPENDENCIES=hard(intermediate:ora.racnode01.vip)"
SCAN:
crsctl add resource oraclerac-scan1.ec2 -type cluster_resource -attr "CHECK_INTERVAL=10,PLACEMENT=favored,HOSTING_MEMBERS=racnode01,AUTO_START=always,START_DEPENDENCIES=hard(intermediate:ora.scan1.vip),RELOCATE_BY_DEPENDENCY=1,STOP_DEPENDENCIES=hard(intermediate:ora.scan1.vip)"
crsctl add resource oraclerac-scan2.ec2 -type cluster_resource -attr "CHECK_INTERVAL=10,PLACEMENT=favored,HOSTING_MEMBERS=racnode01,AUTO_START=always,START_DEPENDENCIES=hard(intermediate:ora.scan2.vip),RELOCATE_BY_DEPENDENCY=1,STOP_DEPENDENCIES=hard(intermediate:ora.scan2.vip)"
crsctl add resource oraclerac-scan3.ec2 -type cluster_resource -attr "CHECK_INTERVAL=10,PLACEMENT=favored,HOSTING_MEMBERS=racnode01,AUTO_START=always,START_DEPENDENCIES=hard(intermediate:ora.scan3.vip),RELOCATE_BY_DEPENDENCY=1,STOP_DEPENDENCIES=hard(intermediate:ora.scan3.vip)"
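Registering the resources does not start them. Something along these lines should bring them online (same resource names as defined above):

crsctl start resource racnode01-v.ec2
crsctl start resource oraclerac-scan1.ec2
crsctl start resource oraclerac-scan2.ec2
crsctl start resource oraclerac-scan3.ec2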


ASM Configs:
[oracle@racnode01 ~]$ sqlplus / as sysasm


SQL*Plus: Release 12.1.0.2.0 Production on Wed May 18 18:31:04 2016


Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options


SQL> alter system set cluster_interconnects='10.1.0.21' scope=spfile sid='+ASM1';


System altered.


SQL> alter system set cluster_interconnects='10.1.0.22' scope=spfile sid='+ASM2';


System altered.


SQL> select name from v$asm_diskgroup;


NAME
------------------------------
DATA


Restart CRS (and then the host) to make sure everything comes back up cleanly.
As root:
/home/oragrid/product/12.1/bin/crsctl stop crs
/home/oragrid/product/12.1/bin/crsctl start crs
/home/oragrid/product/12.1/bin/crs_stat -t
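Two quick checks after the restart, using the same grid home:

/home/oragrid/product/12.1/bin/crsctl check crs
/home/oragrid/product/12.1/bin/crsctl stat res -t

crsctl check crs should report Cluster Ready Services, Cluster Synchronization Services, and the Event Manager as online.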

Install Oracle Database Software

[oracle@racnode01 ~]$ /home/oracle/software/database/runInstaller -silent -ignoreprereq -responsefile /home/oracle/response/db.rsp
Starting Oracle Universal Installer...


Checking Temp space: must be greater than 500 MB.   Actual 184146 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 20479 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2016-05-18_07-34-39PM. Please wait ...[oracle@racnode01 ~]$ [WARNING] [INS-32008] Oracle base location cant be same as the user home directory.
  CAUSE: The specified Oracle base is same as the user home directory.
  ACTION: Provide an Oracle base location other than the user home directory.
You can find the log of this install session at:
/home/oracle/oraInventory/logs/installActions2016-05-18_07-34-39PM.log


As a root user, execute the following script(s):
1. /home/oracle/product/12.1/root.sh


Execute /home/oracle/product/12.1/root.sh on the following nodes:
[racnode01]


Run /home/oracle/product/12.1/root.sh as root.


This completes the Oracle software installation.

Database Creation

As Oracle user:
[oracle@racnode01 ~]$ . oraenv
ORACLE_SID = [oracle] ? orcl
ORACLE_HOME = [/home/oracle] ? /home/oracle/product/12.1
The Oracle base has been set to /home/oracle


Run Database Configuration Assistant to create the database:
[oracle@racnode01 ~]$ dbca -silent -responsefile /home/oracle/response/dbca.rsp
Copying database files
1% complete
3% complete
9% complete
15% complete
21% complete
27% complete
30% complete
Creating and starting Oracle instance
32% complete
36% complete
40% complete
44% complete
48% complete
49% complete
52% complete
Creating cluster database views
54% complete
72% complete
Completing Database Creation
75% complete
78% complete
87% complete
90% complete
99% complete
100% complete
Look at the log file "/home/oracle/cfgtoollogs/dbca/orcl/orcl.log" for further details.


[oracle@racnode01 ~]$ ps -ef|grep pmon
oracle    3227     1  0 May18 ?        00:00:05 asm_pmon_+ASM1
oracle   26018     1  0 19:25 ?        00:00:00 ora_pmon_orcl1


Setup Second Oracle RAC Node

To set up the second RAC node, we follow the same steps we did for node1, except for the Oracle software installation.
node2.json (note the changed private IP address):
{
   "DryRun": false,
   "ImageId": "ami-ebbc438b",
   "MinCount": 1,
   "MaxCount": 1,
   "KeyName": "OracleRACKeyPair",
   "SecurityGroupIds": [
       "sg-eca3a58b"
   ],
   "InstanceType": "t2.large",
   "Placement": {
       "AvailabilityZone": "us-west-2c"
   },
   "BlockDeviceMappings": [
       {
           "VirtualName": "root",
           "DeviceName"  : "/dev/sda1",
           "Ebs": {
               "VolumeSize": 200,
               "DeleteOnTermination": true,
               "VolumeType": "standard"
           }
       },
       {
           "VirtualName": "Shared-storage01",
           "DeviceName"  : "/dev/xvdb",
           "Ebs": {
               "VolumeSize": 20,
               "DeleteOnTermination": true,
               "VolumeType": "standard"
           }
       }
   ],
   "Monitoring": {
       "Enabled": false
   },
   "SubnetId": "subnet-dac3dd83",
   "DisableApiTermination": true,
   "InstanceInitiatedShutdownBehavior": "stop",
   "PrivateIpAddress": "10.0.0.12",
   "NetworkInterfaces": [
       {
           "DeviceIndex": 0,
           "AssociatePublicIpAddress": true
       }
   ]
}


$ aws ec2 run-instances --cli-input-json file://node2.json
$ aws ec2 create-tags --resources i-60e22bbd --tags Key=Name,Value=racnode02
$ aws ec2 describe-instances --instance-ids i-60e22bbd|grep NetworkInterfaceId
                           "NetworkInterfaceId": "eni-0e15ca52",
$ aws ec2 assign-private-ip-addresses --network-interface-id eni-0e15ca52 --private-ip-addresses 10.0.0.22


SSH to the host to continue setup:
$ ssh -i "OracleRACKeyPair.pem" ec2-user@ec2-52-11-160-98.us-west-2.compute.amazonaws.com


Adding SWAP:
[ec2-user@ip-10-0-0-12 ~]$ sudo su
[root@ip-10-0-0-12 ec2-user]# /sbin/pvcreate -f /dev/xvdb
 Physical volume "/dev/xvdb" successfully created
[root@ip-10-0-0-12 ec2-user]# /sbin/vgcreate swapvol /dev/xvdb
 Volume group "swapvol" successfully created
[root@ip-10-0-0-12 ec2-user]# /sbin/lvcreate swapvol --size 20g --stripes 1 --name swapvol
 Logical volume "swapvol" created.
[root@ip-10-0-0-12 ec2-user]# mkswap /dev/swapvol/swapvol
Setting up swapspace version 1, size = 20971516 KiB
no label, UUID=d5bd617a-0337-4355-a043-e88300af5192
[root@ip-10-0-0-12 ec2-user]#
[root@ip-10-0-0-12 ec2-user]# swapon -va
swapon /dev/mapper/swapvol-swapvol
swapon: /dev/mapper/swapvol-swapvol: found swap signature: version 1, page-size 4, same byte order
swapon: /dev/mapper/swapvol-swapvol: pagesize=4096, swapsize=21474836480, devsize=21474836480


Rename the host/instance:

  • Add HOSTNAME="racnode02.oracleraczone.net" to /etc/sysconfig/network
  • Add DHCP_HOSTNAME="racnode02.oracleraczone.net" to /etc/sysconfig/network-scripts/ifcfg-eth0
  • Add "host-name racnode02.oracleraczone.net" to /etc/dhcp/dhclient-eth0.conf
  • echo racnode02.oracleraczone.net > /etc/hostname
  • hostname racnode02.oracleraczone.net
  • Add to /etc/hosts
    127.0.0.1  racnode02.oracleraczone.net racnode02 localhost localhost.oracleraczone.net
    10.0.0.12 racnode02.oracleraczone.net racnode02
  • Add "preserve_hostname: true" to /etc/cloud/cloud.cfg
Reboot the host to make sure the changes take effect.
[ec2-user@racnode02 ~]$ hostname
racnode02.oracleraczone.net

Connect the instance to the storage:

Let's attach the block storage we configured in the previous post to this instance.
As root:
  1. Discover the iSCSI storage device
    # iscsiadm --mode discoverydb --type sendtargets --portal 10.0.0.51 --discover
10.0.0.51:3260,1 iqn.2016-05.com.amazon:storage.datavol0
  2. Login/Connect to the iSCSI device
    # iscsiadm --mode node --targetname iqn.2016-05.com.amazon:storage.datavol0 --portal 10.0.0.51:3260 --login
Logging in to [iface: default, target: iqn.2016-05.com.amazon:storage.datavol0, portal: 10.0.0.51,3260] (multiple)
Login to [iface: default, target: iqn.2016-05.com.amazon:storage.datavol0, portal: 10.0.0.51,3260] successful.
  3. Update /etc/iscsi/initiatorname.iscsi with the correct InitiatorName
    InitiatorName=iqn.2016-05.com.amazon:storage.datavol0
  4. We can see the devices on node2 now
    [root@racnode02 dev]# ls /dev/sda*
/dev/sda  /dev/sda1

Configure Oracle ASM:

  1. Install oracleasm rpm
    [root@racnode02 dev]# yum install kmod-oracleasm.x86_64 -y
  2. Update /etc/sysconfig/oracleasm
    # ORACLEASM_ENABLED: 'true' means to load the driver on boot.
ORACLEASM_ENABLED=true
# ORACLEASM_UID: Default UID owning the /dev/oracleasm mount point.
ORACLEASM_UID=500
# ORACLEASM_GID: Default GID owning the /dev/oracleasm mount point.
ORACLEASM_GID=500
# ORACLEASM_SCANBOOT: 'true' means fix disk perms on boot
ORACLEASM_SCANBOOT=true
  3. Restart ASM lib
    [root@racnode02 dev]# oracleasm exit
    [root@racnode02 dev]# oracleasm init
Loading module "oracleasm": oracleasm
Configuring "oracleasm" to use device physical block size
Mounting ASMlib driver filesystem: /dev/oracleasm
  4. Scan for ASM disks
    [root@racnode02 ec2-user]# ls -lrt /dev/oracleasm/disks/
total 0
[root@racnode02 ec2-user]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks…
Scanning system for ASM disks…
Instantiating disk "DATA"
[root@racnode02 ec2-user]# ls -lrt /dev/oracleasm/disks/
total 0
brw-rw---- 1 oracle dba 8, 1 May 19 21:04 DATA

Setup N2N-edge for Oracle interconnect:

Please refer to the instructions used for node1, but update N2N_IP=10.1.0.22.

Setup Passwordless SSH

SCP the /home/oracle/.ssh/authorized_keys file from both nodes to your local filesystem.
You will need to copy the file to /tmp and give ec2-user read access to it first.


[oracle@racnode01 .ssh]$ cp authorized_keys /tmp/authorized_keys_node1
[oracle@racnode02 .ssh]$ cp authorized_keys /tmp/authorized_keys_node2


Copy to local machine:
$ scp -i "OracleRACKeyPair.pem" ec2-user@ec2-52-39-119-153.us-west-2.compute.amazonaws.com:/tmp/authorized_keys_node1 .
authorized_keys_node1                                                                                                                                                   100%  416     0.4KB/s   00:00
$ scp -i "OracleRACKeyPair.pem" ec2-user@ec2-52-11-160-98.us-west-2.compute.amazonaws.com:/tmp/authorized_keys_node2 .
authorized_keys_node2                                                                                                                                                   100%  416     0.4KB/s   00:00


Merge the files into a single authorized_keys file and SCP it back to the hosts:
$ cat authorized_keys_node1 > authorized_keys
$ cat authorized_keys_node2 >> authorized_keys
$ scp -i "OracleRACKeyPair.pem" authorized_keys ec2-user@ec2-52-39-119-153.us-west-2.compute.amazonaws.com:/tmp/
authorized_keys                                                                                                                                                         100%  832     0.8KB/s   00:00
$ scp -i "OracleRACKeyPair.pem" authorized_keys ec2-user@ec2-52-11-160-98.us-west-2.compute.amazonaws.com:/tmp/
authorized_keys                                                                                                                                                         100%  832     0.8KB/s   00:00


Copy them back to the /home/oracle/.ssh directory on both nodes.


Try to ssh between the nodes; it should not ask for a password.


Update /etc/hosts on both nodes with:
10.0.0.11 racnode01.oracleraczone.net racnode01
10.1.0.21 racnode01-i.oracleraczone.net racnode01-i
10.0.0.21 racnode01-v.oracleraczone.net racnode01-v
10.0.0.12 racnode02.oracleraczone.net racnode02
10.1.0.22 racnode02-i.oracleraczone.net racnode02-i
10.0.0.22 racnode02-v.oracleraczone.net racnode02-v
10.0.0.31 oraclerac-scan1.oracleraczone.net
10.0.0.32 oraclerac-scan2.oracleraczone.net
10.0.0.33 oraclerac-scan3.oracleraczone.net
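A quick loop to sanity-check name resolution and reachability of the node addresses (run it on both nodes; the VIP and SCAN names will only answer once their resources are online):

for h in racnode01 racnode01-i racnode02 racnode02-i; do ping -c1 -W2 $h >/dev/null && echo "$h OK" || echo "$h FAILED"; done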

Add Node 2 to the RAC Cluster

As oracle user on Node1:
[oracle@racnode01 ~]$ . oraenv
ORACLE_SID = [oracle] ? +ASM1
The Oracle base has been set to /home/oracle
[oracle@racnode01 ~]$ echo $ORACLE_HOME
/home/oragrid/product/12.1


Run addnode.sh to add node2 to the cluster:
[oracle@racnode01 ~]$ /home/oragrid/product/12.1/addnode/addnode.sh -silent -ignoreprereq "CLUSTER_NEW_NODES={racnode02}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={racnode02-v}"
Starting Oracle Universal Installer...


Checking Temp space: must be greater than 120 MB.   Actual 178417 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 20276 MB    Passed


Prepare Configuration in progress.


Prepare Configuration successful.
..................................................   8% Done.
You can find the log of this install session at:
/home/oracle/oraInventory/logs/addNodeActions2016-05-20_01-55-49PM.log


Instantiate files in progress.


Instantiate files successful.
..................................................   14% Done.


Copying files to node in progress.


Copying files to node successful.
..................................................   73% Done.


Saving cluster inventory in progress.
..................................................   80% Done.


Saving cluster inventory successful.
The Cluster Node Addition of /home/oragrid/product/12.1 was successful.
Please check '/tmp/silentInstall.log' for more details.


Setup Oracle Base in progress.


Setup Oracle Base successful.
..................................................   88% Done.


As a root user, execute the following script(s):
1. /home/oracle/oraInventory/orainstRoot.sh
2. /home/oragrid/product/12.1/root.sh


Execute /home/oracle/oraInventory/orainstRoot.sh on the following nodes:
[racnode02]
Execute /home/oragrid/product/12.1/root.sh on the following nodes:
[racnode02]


The scripts can be executed in parallel on all the nodes.


..........
Update Inventory in progress.
..................................................   100% Done.


Update Inventory successful.
Successfully Setup Software.


Execute the following scripts as root on node2 and make sure they are successful.
1. /home/oracle/oraInventory/orainstRoot.sh
Changing permissions of /home/oracle/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.


Changing groupname of /home/oracle/oraInventory to dba.
The execution of the script is complete.


2. /home/oragrid/product/12.1/root.sh
Check /home/oragrid/product/12.1/install/root_racnode02.oracleraczone.net_2016-05-20_14-06-41.log for the output of root script


Update the VIPs and SCAN resources to incorporate the second Oracle RAC host.
Log in to node1 and run the following commands:
[oracle@racnode01 ~]$ . oraenv
ORACLE_SID = [oracle] ? +ASM1
The Oracle base has been set to /home/oracle
Add VIP for node2:
crsctl add resource racnode02-v.ec2 -type cluster_resource -attr "CHECK_INTERVAL=10,PLACEMENT=favored,HOSTING_MEMBERS=racnode01 racnode02,AUTO_START=always,START_DEPENDENCIES=hard(intermediate:ora.racnode02.vip),RELOCATE_BY_DEPENDENCY=1,STOP_DEPENDENCIES=hard(intermediate:ora.racnode02.vip)"
Modify VIP for node1 to add node2:
crsctl modify resource racnode01-v.ec2 -attr "CHECK_INTERVAL=10,PLACEMENT=favored,HOSTING_MEMBERS=racnode01 racnode02,AUTO_START=always,START_DEPENDENCIES=hard(intermediate:ora.racnode01.vip),RELOCATE_BY_DEPENDENCY=1,STOP_DEPENDENCIES=hard(intermediate:ora.racnode01.vip)"
Update SCAN to incorporate node2:
crsctl modify resource oraclerac-scan1.ec2 -attr "CHECK_INTERVAL=10,PLACEMENT=favored,HOSTING_MEMBERS=racnode01 racnode02,AUTO_START=always,START_DEPENDENCIES=hard(intermediate:ora.scan1.vip),RELOCATE_BY_DEPENDENCY=1,STOP_DEPENDENCIES=hard(intermediate:ora.scan1.vip)"


crsctl modify resource oraclerac-scan2.ec2 -attr "CHECK_INTERVAL=10,PLACEMENT=favored,HOSTING_MEMBERS=racnode01 racnode02,AUTO_START=always,START_DEPENDENCIES=hard(intermediate:ora.scan2.vip),RELOCATE_BY_DEPENDENCY=1,STOP_DEPENDENCIES=hard(intermediate:ora.scan2.vip)"


crsctl modify resource oraclerac-scan3.ec2 -attr "CHECK_INTERVAL=10,PLACEMENT=favored,HOSTING_MEMBERS=racnode01 racnode02,AUTO_START=always,START_DEPENDENCIES=hard(intermediate:ora.scan3.vip),RELOCATE_BY_DEPENDENCY=1,STOP_DEPENDENCIES=hard(intermediate:ora.scan3.vip)"
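To verify the resources picked up both hosting members:

crsctl status resource racnode02-v.ec2
crsctl status resource oraclerac-scan1.ec2 -f | grep HOSTING_MEMBERS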


Add the Oracle database software to Node2:


As Oracle user on Node1:
[oracle@racnode01 ~]$ /home/oracle/product/12.1/addnode/addnode.sh -silent -ignoreprereq "CLUSTER_NEW_NODES={racnode02}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={racnode02-v}"
Starting Oracle Universal Installer...


Checking Temp space: must be greater than 120 MB.   Actual 178061 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 20266 MB    Passed


Prepare Configuration in progress.


Prepare Configuration successful.
..................................................   8% Done.
You can find the log of this install session at:
/home/oracle/oraInventory/logs/addNodeActions2016-05-20_06-25-23PM.log


Instantiate files in progress.


Instantiate files successful.
..................................................   14% Done.


Copying files to node in progress.


Copying files to node successful.
..................................................   73% Done.


Saving cluster inventory in progress.
..................................................   80% Done.


Saving cluster inventory successful.
The Cluster Node Addition of /home/oracle/product/12.1 was successful.
Please check '/tmp/silentInstall.log' for more details.


Setup Oracle Base in progress.


Setup Oracle Base successful.
..................................................   88% Done.


As a root user, execute the following script(s):
1. /home/oracle/product/12.1/root.sh


Execute /home/oracle/product/12.1/root.sh on the following nodes:
[racnode02]


..........
Update Inventory in progress.
..................................................   100% Done.


Update Inventory successful.
Successfully Setup Software.


From Node2 execute:
[root@racnode02 ec2-user]# /home/oracle/product/12.1/root.sh
Check /home/oracle/product/12.1/install/root_racnode02.oracleraczone.net_2016-05-20_19-06-45.log for the output of root script


Adding Database Instance on Node2


[oracle@racnode01 ~]$ . oraenv
ORACLE_SID = [orcl1] ? orcl1
ORACLE_HOME = [/home/oracle] ? /home/oracle/product/12.1
The Oracle base remains unchanged with value /home/oracle
[oracle@racnode01 ~]$ sqlplus / as sysdba


SQL*Plus: Release 12.1.0.2.0 Production on Fri May 20 19:14:22 2016


Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options


SQL> alter database add logfile thread 2 group 4 ('+DATA') size 50M, group 5 ('+DATA') size 50M, group 6 ('+DATA') size 50M;


Database altered.


SQL> alter database enable public thread 2;


Database altered.


SQL> create undo tablespace undotbs2 datafile '+DATA' size 65M autoextend on maxsize 30G;


Tablespace created.


SQL> alter system set undo_tablespace=undotbs2 scope=spfile sid='orcl2';


System altered.


SQL> alter system set instance_number=2 scope=spfile sid='orcl2';


System altered.


SQL> alter system set cluster_database_instances=2 scope=spfile sid='*';


System altered.


SQL> ALTER SYSTEM SET CLUSTER_INTERCONNECTS = '10.1.0.21' scope=spfile sid='orcl1';


System altered.


SQL> ALTER SYSTEM SET CLUSTER_INTERCONNECTS = '10.1.0.22' scope=spfile sid='orcl2';


System altered.


SQL> alter system reset CLUSTER_INTERCONNECTS scope=spfile sid='*';


System altered.


On Node2, create the init file for Oracle:


[oracle@racnode02 dbs]$ cp $ORACLE_HOME/dbs/initorcl1.ora $ORACLE_HOME/dbs/initorcl2.ora


Adding Instance to CRS:


On node1:
[oracle@racnode01 ~]$ srvctl add instance -d orcl -i orcl2 -n racnode02
[oracle@racnode01 ~]$ srvctl stop database -d orcl
[oracle@racnode01 dbs]$ srvctl start database -d orcl
[oracle@racnode01 dbs]$ srvctl status database -d orcl -v
Instance orcl1 is running on node racnode01. Instance status: Open.
Instance orcl2 is running on node racnode02. Instance status: Open.


Well, we are finally done. It takes a while, but everything falls into place eventually.


Enable traffic to port 1521 so that we can connect from outside of AWS:
$ aws ec2 authorize-security-group-ingress --group-id sg-eca3a58b --protocol tcp --port 1521 --cidr 0.0.0.0/0
This rule would let anyone connect to these hosts, so make sure you restrict the CIDR to the IP address range of your own network.
You may run into problems connecting to the public IP/public DNS from outside of AWS.
The workaround I have seen work is (see the sketch after this list):
  1. Stop listener
  2. Remove/rename listener.ora
  3. Let CRS restart the listener automatically
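Roughly, the workaround looks like this (a sketch; the listener.ora path assumes the grid home layout used in this post, and CRS can restart the listener for you instead of the explicit start):

[oracle@racnode01 ~]$ srvctl stop listener
[oracle@racnode01 ~]$ mv /home/oragrid/product/12.1/network/admin/listener.ora /home/oragrid/product/12.1/network/admin/listener.ora.bak
[oracle@racnode01 ~]$ srvctl start listener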
This should enable you to connect to the database on AWS via sqlplus from your laptop:


$ sqlplus system/Ch3ap0RAC@52.39.119.153:1521/orcl
SQL*Plus: Release 11.2.0.4.0 Production on Mon May 23 12:18:41 2016
Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options


SQL> set lines 300
SQL> col HOST_NAME format a30
SQL> select inst_id,INSTANCE_NAME,HOST_NAME,VERSION,STARTUP_TIME,STATUS from gv$instance;
   INST_ID INSTANCE_NAME    HOST_NAME                      VERSION           STARTUP_T STATUS
---------- ---------------- ------------------------------ ----------------- --------- ------------
         1 orcl1            racnode01.oracleraczone.net    12.1.0.2.0        20-MAY-16 OPEN
         2 orcl2            racnode02.oracleraczone.net    12.1.0.2.0        20-MAY-16 OPEN
