In the previous post (Building a sub $300/month Oracle RAC on AWS - Part I) we discussed the network setup required for Oracle RAC. In this post we will explore setting up shared storage required for Oracle RAC.
Oracle RAC is a shared-storage architecture: every node in the cluster reads and writes to the same storage. In enterprise setups this space is dominated by storage companies like EMC and NetApp. EMC provides block storage for Oracle, while NetApp provides Network Attached Storage for running shared storage for Oracle RAC.
We will explore using block storage in AWS to set up RAC. AWS is working on providing NFS in its cloud via Elastic File System (EFS), which is currently in beta and not generally available. Once I have access to EFS I will update this document with how to use it for RAC.
Let’s get familiar with some important terms that will help us understand the iSCSI storage setup:
- iSCSI storage - iSCSI stands for “Internet Small Computer System Interface”. It is essentially block storage delivered over TCP/IP.
- iSCSI Initiator - The client that connects to external iSCSI-based storage over a standard Ethernet network adapter using TCP/IP.
- iSCSI Target - The host that acts as a storage device, providing shared block storage in the form of virtual hard disks (VHDs) to clients across a TCP/IP network.
In our case the Oracle RAC instances will be the iSCSI initiators and the storage instance will be the iSCSI target.
For more details: http://searchstorage.techtarget.com/definition/iSCSI
Setup iSCSI Target
Setup Storage Instance
Let’s spin up our first AWS EC2 instance, which will act as our iSCSI target, or storage server.
In our effort to keep costs low, let’s spin up a “t2.micro” instance running Ubuntu.
What is EC2: Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. Details can be found at: https://aws.amazon.com/ec2
EC2 instance types can be found at: https://aws.amazon.com/ec2/instance-types/
A t2.micro instance has 1 vCPU and 1GB of memory.
AWS command-line provides an easy way to generate a skeleton JSON file which can be used to define all instance attributes required to start a new EC2 instance.
$ aws ec2 run-instances --generate-cli-skeleton
It’s best to redirect the output of this command to a file, which we can then edit as per our requirements.
$ aws ec2 run-instances --generate-cli-skeleton > storage01.json
I have updated the JSON file as per our requirements to build a t2.micro instance.
{
    "DryRun": false,
    "ImageId": "ami-9abea4fb",
    "MinCount": 1,
    "MaxCount": 1,
    "KeyName": "OracleRACKeyPair",
    "InstanceType": "t2.micro",
    "Placement": {
        "AvailabilityZone": "us-west-2c"
    },
    "BlockDeviceMappings": [
        {
            "DeviceName": "/dev/sda1",
            "Ebs": {
                "VolumeSize": 8,
                "DeleteOnTermination": true,
                "VolumeType": "standard"
            }
        },
        {
            "DeviceName": "/dev/xvdb",
            "Ebs": {
                "VolumeSize": 150,
                "DeleteOnTermination": true,
                "VolumeType": "standard"
            }
        }
    ],
    "Monitoring": {
        "Enabled": false
    },
    "DisableApiTermination": true,
    "InstanceInitiatedShutdownBehavior": "stop",
    "NetworkInterfaces": [
        {
            "DeviceIndex": 0,
            "SubnetId": "subnet-dac3dd83",
            "PrivateIpAddress": "10.0.0.51",
            "Groups": [
                "sg-eca3a58b"
            ],
            "AssociatePublicIpAddress": true
        }
    ]
}
Note: the values for the key pair, security group, and subnet are taken from the resources we created in Part 1. Because we ask for a public IP, the subnet, private IP, and security group are specified on the network interface itself (the EC2 API does not allow instance-level SubnetId together with a NetworkInterfaces entry). The PrivateIpAddress is taken from the DNS JSON we created, under the property name storage01.oracleraczone.net.
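A stray comma while hand-editing the skeleton is the usual failure mode, so it is worth checking that the file is still valid JSON before calling run-instances. A small sketch using python3 (present on stock Ubuntu), shown here against an inline snippet; in practice you would run `python3 -m json.tool storage01.json`:

```shell
# Check that the edited file is still valid JSON before handing it to
# the AWS CLI. The here-doc stands in for storage01.json.
if python3 -m json.tool > /dev/null <<'EOF'
{ "InstanceType": "t2.micro", "MinCount": 1, "MaxCount": 1 }
EOF
then
    echo "JSON parses cleanly"
fi
```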
After the JSON file is updated, running the following command will spawn our storage instance.
$ aws ec2 run-instances --cli-input-json file://storage01.json
This command prints a long JSON response; search for InstanceId and note the value associated with it.
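If you prefer not to scroll through the response, the InstanceId can be pulled out with standard tools. A sketch against a trimmed stand-in for the real output (the actual response contains many more fields):

```shell
# Pull "InstanceId" out of a run-instances style response.
# The variable below stands in for the (much longer) real output.
response='{
    "Instances": [
        { "InstanceId": "i-f2e31b2f", "State": { "Name": "pending" } }
    ]
}'
echo "$response" | grep -o '"InstanceId": "[^"]*"' | cut -d'"' -f4
# prints: i-f2e31b2f
```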
Let’s add a Name tag to our instance with the value iSCSI-storage01:
$ aws ec2 create-tags --resources i-f2e31b2f --tags Key=Name,Value=iSCSI-storage01
Now that we have an instance running, let’s SSH to the newly created Ubuntu instance and install the software required to make it an iSCSI storage server. Use the instance id from the JSON output of the instance creation command to look up its public DNS name.
$ aws ec2 describe-instances --instance-ids i-f2e31b2f |grep PublicDnsName
"PublicDnsName": "ec2-52-39-23-189.us-west-2.compute.amazonaws.com",
To allow SSH (port 22) access to the instance, add an inbound rule to your security group. (Note that 0.0.0.0/0 opens SSH to the whole internet; that is convenient for a short-lived lab, but consider restricting the CIDR to your own IP.)
$ aws ec2 authorize-security-group-ingress --group-id sg-eca3a58b --protocol tcp --port 22 --cidr 0.0.0.0/0
SSH to the instance:
$ ssh -i "OracleRACKeyPair.pem" ubuntu@ec2-52-39-23-189.us-west-2.compute.amazonaws.com
Install the iSCSI target software:
$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get install iscsitarget
Update iSCSI default properties:
$ sudo vi /etc/default/iscsitarget
ISCSITARGET_ENABLE=true
ISCSITARGET_MAX_SLEEP=3
Restart the service to enable the setting:
$ sudo service iscsitarget restart
Install the logical volume manager for Ubuntu:
$ sudo apt-get install lvm2
Let’s create a new volume using the 150G disk we added while creating the instance.
$ sudo pvcreate /dev/xvdb
Physical volume "/dev/xvdb" successfully created
$ sudo vgcreate oraclerac-data /dev/xvdb
Volume group "oraclerac-data" successfully created
$ sudo lvcreate oraclerac-data --size 149g --stripes 1 --name datalvol
Logical volume "datalvol" created
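A note on the size: the volume group on a 150G disk has slightly less than 150G usable because LVM keeps a small amount of metadata, which is why the logical volume is created at 149g (`lvcreate --extents 100%FREE` is an alternative that simply takes whatever is free). For reference, with LVM's default 4 MiB physical extent size, a full 150 GiB works out to:

```shell
# LVM allocates in fixed-size physical extents (4 MiB by default), and a
# little space goes to metadata, so a full 150g lvcreate would fail with
# "insufficient free space" while 149g fits.
extent_mib=4
echo "$((150 * 1024 / extent_mib)) extents of ${extent_mib} MiB"
# prints: 38400 extents of 4 MiB
```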
Update the iSCSI config to scan this newly created volume and present it as a network block device.
Add the following to /etc/iet/ietd.conf:
Target iqn.2016-05.com.amazon:storage.datavol0
        Lun 0 Path=/dev/oraclerac-data/datalvol,Type=fileio,ScsiId=lun0,ScsiSN=lun0
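The target name follows the iSCSI Qualified Name (IQN) convention: iqn.&lt;year-month&gt;.&lt;reversed domain&gt;[:identifier]. A rough shape check of the name used above (the regex is a simplification, not a full RFC 3720 validator):

```shell
# Rough IQN shape check: iqn.YYYY-MM.reversed.domain[:identifier]
iqn="iqn.2016-05.com.amazon:storage.datavol0"
if echo "$iqn" | grep -Eq '^iqn\.[0-9]{4}-[0-9]{2}\.[a-z0-9.-]+(:[A-Za-z0-9._-]+)?$'; then
    echo "$iqn looks like a well-formed IQN"
fi
```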
$ sudo service iscsitarget restart
* Removing iSCSI enterprise target devices:[ OK ]
* Stopping iSCSI enterprise target service:[ OK ]
* Removing iSCSI enterprise target modules:[ OK ]
* Starting iSCSI enterprise target service:[ OK ]
Setup Multicast IP for Oracle RAC
Multicasting, in networking terms, is a group communication technique: one-to-many communication over a network. Multicast uses network infrastructure efficiently by requiring the source to send a packet only once, even if it needs to be delivered to a large number of receivers. The nodes in the network take care of replicating the packet to reach multiple receivers only when necessary. (https://en.wikipedia.org/wiki/Multicast).
Oracle Grid Infrastructure 11.2.0.2 introduces a new feature called "Redundant Interconnect Usage", which provides an Oracle internal mechanism to make use of physically redundant network interfaces for the Oracle (private) interconnect. As part of this new feature, multicast based communication on the private interconnect is utilized to establish communication with peers in the cluster on each startup of the stack on a node. Once the connection with the peers in the cluster has been established, the communication is switched back to unicast. Per default, the 230.0.1.0 address (port 42424) on the private interconnect network is used for multicasting. (https://community.oracle.com/thread/2398409?tstart=0).
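As a quick sanity check, 230.0.1.0 does fall in the IPv4 multicast (class D) range, whose first octet runs from 224 to 239:

```shell
# IPv4 multicast (class D) addresses have a first octet of 224-239.
addr="230.0.1.0"
first=${addr%%.*}
if [ "$first" -ge 224 ] && [ "$first" -le 239 ]; then
    echo "$addr is in the multicast range"
fi
# prints: 230.0.1.0 is in the multicast range
```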
EC2 doesn’t provide an option to set up multicast out of the box. To build a network for the interconnect, we create a point-to-point VPN between the RAC nodes using N2N. N2N enables instances to become members of a community that supports multicast IP among its members. Each member of the community runs the edge component of N2N and gets information about the other instances from the master, or supernode.
We can use our storage server as the supernode for this N2N setup and overcome the lack of multicast in EC2. (https://www.buckhill.co.uk/blog/how-to-enable-broadcast-and-multicast-on-amazon-aws-ec2/2#.VzGI4pMrJ0s)
Install subversion:
$ sudo apt-get install subversion
Download and install N2N:
$ sudo svn co https://svn.ntop.org/svn/ntop/trunk/n2n
$ cd n2n/n2n_v2
Disable encryption:
$ sudo vi Makefile
Search for N2N_OPTION_AES and update it to N2N_OPTION_AES=no
Disable Compression in n2n.h:
Search for “#define N2N_COMPRESSION_ENABLED 1” and change it to “#define N2N_COMPRESSION_ENABLED 0”
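The two edits above can also be scripted rather than done by hand in vi. A sketch of the substitutions using sed, shown against inline samples so the effect is visible (in the n2n source tree you would run `sed -i` against Makefile and n2n.h):

```shell
# Show the two build-flag substitutions on inline samples.
printf 'N2N_OPTION_AES=yes\n' \
    | sed 's/^N2N_OPTION_AES=.*/N2N_OPTION_AES=no/'
printf '#define N2N_COMPRESSION_ENABLED 1\n' \
    | sed 's/COMPRESSION_ENABLED 1/COMPRESSION_ENABLED 0/'
# prints: N2N_OPTION_AES=no
# prints: #define N2N_COMPRESSION_ENABLED 0
```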
$ sudo make
$ sudo make install
Create an autostart script for supernode:
$ sudo vi /etc/init.d/supernode-start
#!/bin/sh
/usr/sbin/supernode -l 1200
$ sudo chmod +x /etc/init.d/supernode-start
$ cd /etc/rc0.d
$ sudo update-rc.d supernode-start defaults
$ sudo /etc/init.d/supernode-start
$ ps -ef|grep supernode
root 7972 1 0 07:22 ? 00:00:00 /usr/sbin/supernode -l 1200
Most of the above can also be achieved using the GUI. Adding screenshots for reference:
To create the EC2 instance, click on EC2 from the AWS dashboard.
Click on “Launch Instance”.
Select the Ubuntu image from the list.
Select t2.micro from the instance types.
Click “Launch”.
Select the correct key pair and launch the instance.
Review the status screen and click “View Instances”.
After a while we can see the instance running.