Wednesday, September 23, 2009

Configuring and starting the ONS daemon in a non-RAC DB

This is mostly used with Data Guard.

1.) Change directory to $ORACLE_HOME/opmn/conf


[oracle@TEST conf]$ pwd
/opt/app/oracle/10.2/opmn/conf
[oracle@TEST conf]$ ls -lrt
total 12
-rw-r--r-- 1 oracle oinstall 71 Feb 21 2006 ons.config.tmp
-rw------- 1 oracle oinstall 44 Oct 8 2008 ons.config.backup.10203
-rw------- 1 oracle oinstall 44 Oct 8 2008 ons.config

2.) Update the ons configuration file. (ons.config)

[oracle@TEST conf]$ more ons.config
localport=6100
remoteport=4200
loglevel=3
nodes=10.224.57.68:6200,10.224.56.38:6200 --> Mention the primary and standby servers with their ons ports
walletfile=$ORACLE_HOME/opmn/conf/ssl.wlt/default --> To enable SSL access between the ons services.

---The nodes parameter lists the primary and standby hosts, each followed by the remote port of the ONS daemon running on that host.
---The walletfile parameter points to the wallet file location. A wallet file is used by Oracle Secure Sockets Layer (SSL) to store SSL certificates. If a wallet file is specified to ONS, it will use SSL when communicating with other ONS instances and will require SSL certificate authentication from all ONS instances that try to connect to it. This means that if you want to turn on SSL for one ONS instance, you must turn it on for all instances that are connected. Oracle recommends using SSL for all ONS communications.
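If you do not have a wallet yet, one can be created with the orapki utility shipped in the Oracle home. This is just a minimal sketch using the default wallet location from the config above (orapki will prompt for a wallet password):

[oracle@TEST conf]$ orapki wallet create -wallet $ORACLE_HOME/opmn/conf/ssl.wlt/default -auto_login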


3.) Start ONS

[oracle@TEST conf]$ onsctl start
onsctl: ons started

Configure and start ONS on both primary and standby.
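For reference, the ons.config on the standby side looks essentially the same; the nodes line is identical on both hosts, since it names both servers. The sketch below assumes a standby host named STDBY using the same ports. Once both daemons are up, onsctl ping can be run on each host to verify the local daemon:

[oracle@STDBY conf]$ more ons.config
localport=6100
remoteport=6200
loglevel=3
nodes=10.224.57.68:6200,10.224.56.38:6200

[oracle@STDBY conf]$ onsctl ping
ons is running ...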

Saturday, September 19, 2009

How To: Configure Data Guard Fast-Start Failover Step By Step

Note: The physical standby should already be working fine, and Data Guard Broker should already be configured and working.

How to Setup Data Guard
How To: Configure Data Guard Broker

1.) Set Local_Listener (Primary):

For the database to register with the Data Guard listener, we need to set the local_listener parameter.
SQL> show parameter local_listener
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
local_listener string

SQL> alter system set local_listener='(ADDRESS = (PROTOCOL = TCP)(HOST = 10.224.56.38)(PORT = 1521))';
System altered.

SQL> alter system register;
System altered.

SQL> show parameter local_listener
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
local_listener string (ADDRESS = (PROTOCOL = TCP)(HOST = 10.224.56.38)(PORT = 1521))

2.) Verify force logging:

SQL> select force_logging from v$database;
FOR
---
YES

Should be YES.
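If it shows NO, force logging can be enabled online, with the same command used in the Data Guard setup post below:

SQL> alter database force logging;

Database altered.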

3.) Should use spfile:
SQL> show parameter spfile;
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
spfile string /opt/app/oracle/10.2/dbs/spfileorcl.ora

4.) Should use a password file:

SQL> select * from v$pwfile_users;
USERNAME SYSDB SYSOP
----------------------------------------
SYS TRUE TRUE
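If no password file exists yet, it can be created with orapwd; the file must be named orapw<SID> and live in $ORACLE_HOME/dbs (the password below is just an example):

[oracle@TEST dbs]$ cd $ORACLE_HOME/dbs
[oracle@TEST dbs]$ orapwd file=orapworcl password=oracle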

5.) Remote login should be enabled:

SQL> show parameter remote_login_passwordfile
NAME TYPE VALUE
-------------------------------------------------
remote_login_passwordfile string EXCLUSIVE

6.) DB_UNIQUE_NAME should be set:

SQL> show parameter db_unique_name
NAME TYPE VALUE
-------------------------------------------------
db_unique_name string orcl
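If it needs changing, remember that db_unique_name is a static parameter: it can only be set in the spfile and takes effect after a restart, e.g.:

SQL> alter system set db_unique_name='orcl' scope=spfile;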

7.) Flash Recovery Area should be set:

SQL> show parameter db_recovery_file
NAME TYPE VALUE
-----------------------------------------------------------------------------
db_recovery_file_dest string /opt/app/oracle/flash_recovery_area
db_recovery_file_dest_size big integer 2G

To set it, use the following commands:

alter system set db_recovery_file_dest_size = 2g scope=both;
alter system set db_recovery_file_dest = '' scope=both;

8.) Standby file management should be AUTO:

SQL> show parameter standby_file_management
NAME TYPE VALUE
--------------------------------------------------
standby_file_management string MANUAL

SQL> alter system set standby_file_management=auto;
System altered.

SQL> show parameter standby_file_management
NAME TYPE VALUE
-------------------------------------------------------
standby_file_management string AUTO

9.) log_archive_config should be set:

SQL> show parameter log_archive_config
NAME TYPE VALUE
-----------------------------------------------------------------------------
log_archive_config string DG_CONFIG=(orcl,orcl1)
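If it is not set, it can be set online using the DB_UNIQUE_NAMEs of the primary and standby from this setup:

SQL> alter system set log_archive_config='DG_CONFIG=(orcl,orcl1)' scope=both;

System altered.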

10.) Standby Redo logs should be created:

SQL> select group#, type, member from v$logfile where type = 'STANDBY';
GROUP# TYPE MEMBER
-------------------------------------------------------------------------------------
3 STANDBY /opt/app/oracle/flash_recovery_area/orcl/onlinelog/o1_mf_3_5bvzkzgs_.log
4 STANDBY /opt/app/oracle/flash_recovery_area/orcl/onlinelog/o1_mf_4_5bvzl8hf_.log
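If they do not exist yet, standby redo logs can be added the same way as is done on the standby later in this post, sized the same as the online redo logs:

SQL> alter database add standby logfile group 3 size 50M;
SQL> alter database add standby logfile group 4 size 50M;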

11.) Enable Flashback Database:

alter database flashback on
*
ERROR at line 1:
ORA-38759: Database must be mounted by only one instance and not open.

We'll need to mount the database to run this command.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area 2684354560 bytes
Fixed Size 2086352 bytes
Variable Size 570427952 bytes
Database Buffers 2097152000 bytes
Redo Buffers 14688256 bytes
Database mounted.
SQL> alter database flashback on;
Database altered.

SQL> alter system set db_flashback_retention_target = 60 scope=both;
System altered.

SQL> select flashback_on from v$database;
FLASHBACK_ON
------------------
YES

SQL> show parameter db_flashback_retention_target
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_flashback_retention_target integer 60

SQL> alter database set standby database to maximize availability;
Database altered.

SQL> select protection_mode from v$database;
PROTECTION_MODE
--------------------
MAXIMUM AVAILABILITY

Enable Flashback DB on Standby also.
SQL> alter database recover managed standby database cancel;
Database altered.

SQL> alter database flashback on;
Database altered.

SQL> alter system set db_flashback_retention_target = 60 scope=both;
System altered.

12.) Multiplex the Data Guard broker configuration files on two different disks, for protection.

SQL> show parameter dg_broker_config_file
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
dg_broker_config_file1 string /opt/app/oracle/10.2/dbs/dr1orcl.dat
dg_broker_config_file2 string /opt/app/oracle/10.2/dbs/dr2orcl.dat

SQL> alter system set dg_broker_config_file2='/opt/app/oracle/oradata/orcl/dr2orcl.dat';
alter system set dg_broker_config_file2='/opt/app/oracle/oradata/orcl/dr2orcl.dat'
*
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-16573: attempt to change configuration file for an enabled broker configuration

Well, it looks like the above problem is not easy to resolve: we need to disable the configuration and also stop the DMON process with alter system set dg_broker_start='false'; before we can change the location of these files.

I initially hesitated, as I saw a bug filed on Metalink reporting errors restarting DMON after a change in the file location (BUG:5955795), but went ahead with the change of location anyway.
SQL> ALTER SYSTEM SET DG_BROKER_START=FALSE;

System altered.
[oracle@APP3 dbs]$ mv dr2orcl.dat old.dr2orcl.dat
[oracle@APP3 dbs]$ mv dr1orcl.dat old.dr1orcl.dat

SQL> alter system set dg_broker_config_file2='/opt/app/oracle/oradata/orcl/dr2orcl.dat';
System altered.

SQL> ALTER SYSTEM SET DG_BROKER_START=TRUE;
System altered.

SQL> show parameter dg_broker_config_file
NAME TYPE VALUE
-----------------------------------------------------------------------------
dg_broker_config_file1 string /opt/app/oracle/10.2/dbs/dr1regdb.dat
dg_broker_config_file2 string /opt/app/oracle/oradata/orcl/dr2orcl.dat


DGMGRL> create configuration 'DBTEST' as primary database is 'orcl' connect identifier is 'orcl';
Configuration "DBTEST" created with primary database "orcl"
DGMGRL> show configuration

Configuration
Name: DBTEST
Enabled: NO
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLED
Databases:
orcl - Primary database

Current status for "DBTEST":
DISABLED

DGMGRL> ALTER CONFIGURATION SET PROTECTION MODE AS MAXAVAILABILITY ;
Succeeded.
DGMGRL> show configuration

Configuration
Name: DBTEST
Enabled: NO
Protection Mode: MaxAvailability
Fast-Start Failover: DISABLED
Databases:
orcl - Primary database

Current status for "DBTEST":
DISABLED

FYI very useful metalink 808783.1 (Step By Step How to Recreate Dataguard Broker Configuration)

13.) Edit database settings in Data Guard, for example:

DGMGRL> edit database orcl set property LogXptMode='SYNC';
Property "logxptmode" updated
DGMGRL> edit database orcl set property NetTimeout=10;
Property "nettimeout" updated
DGMGRL> edit database orcl1 set property NetTimeout=10;
Property "nettimeout" updated
DGMGRL> enable configuration
Error: ORA-16627: operation disallowed since no standby databases would remain to support protection mode

Set LogXptMode to SYNC for the standby also.
DGMGRL> edit database orcl1 set property LogXptMode='SYNC';
Property "logxptmode" updated

DGMGRL> enable configuration
Operation requires shutdown of instance "orcl" on database "orcl"
Shutting down instance "orcl"...
Database closed.
Database dismounted.
ORACLE instance shut down.
Operation requires startup of instance "orcl" on database "orcl"

Startup the DB manually.

Well, I had some more errors:
Warning: ORA-16608: one or more databases have warnings
Error: ORA-16795: database resource guard detects that database re-creation is required

I did:
DGMGRL> enable resource orcl1
Enabled.

That did not help. The problem was that the standby DB also needed standby redo log files:
SQL> alter database recover managed standby database cancel;
Database altered.

SQL> alter database add standby logfile group 3 size 50M;
Database altered.

SQL> alter database add standby logfile group 4 size 50M;
Database altered.

SQL> select * from v$logfile;

GROUP# STATUS TYPE MEMBER IS_
--------------------------------------------------------------------------------- ---
1 ONLINE /home/oracle/oradata/orcl/redo01.log NO
2 ONLINE /home/oracle/oradata/orcl/redo02.log NO
3 STANDBY /home/oracle/oradata/flash_recovery_area/orcl/orcl1/onlinelog/o1_mf_3_5c3kdj01_.log YES
4 STANDBY /home/oracle/oradata/flash_recovery_area/orcl/orcl1/onlinelog/o1_mf_4_5c3kdw63_.log YES

I decided to start redo apply:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE disconnect;

Database altered.

Some more errors appeared in the Data Guard log:
RSM0: HEALTH CHECK WARNING: ORA-16826: apply service state is inconsistent with the DelayMins property
DGMGRL> show site verbose orcl
Site
Name: 'orcl'
Hostname: 'APP3'
Instance name: 'orcl'
Service Name: 'orcl'
Standby Type: 'physical'
Enabled: 'yes'
Required: 'yes'
Default state: 'PRIMARY'
Intended state: 'PRIMARY'
PFILE: ''
Number of resources: 1
Resources:
Name: orcl (default) (verbose name='orcl')
Current status for "orcl":
Warning: ORA-16570: operation requires restart of database "orcl"

Need to restart the primary DB.
Whenever you see any error in the "show configuration" output, check "show site verbose <site_name>" to find which site has which error.
After the restart of the primary database:
DGMGRL> show configuration

Configuration
Name: DBTEST
Enabled: YES
Protection Mode: MaxAvailability
Fast-Start Failover: ENABLED
Databases:
orcl - Primary database
orcl1 - Physical standby database
- Fast-Start Failover target

Current status for "DBTEST":
SUCCESS

14.) Enable Fast-Start Failover:

DGMGRL> enable fast_start failover;
Enabled.

15.) Start Observer:
Ideally, the observer should be started on another host, so that if the primary host goes down the observer can still detect the outage and take the specified actions.
DGMGRL> start observer;
Observer started
The session will seem to hang; you can press Ctrl-C, or create a script and run it with nohup, e.g.:
#!/bin/sh

dgmgrl sys/oracle <<!
start observer;
!
Another Way:
CentOS4.4-64:wrpws:oracle > nohup dgmgrl sys/oracle@orcl "start observer file='$ORACLE_HOME/dbs/fsfo.dat'" -logfile $HOME/observer.log &

And another way to start:
[oracle@APP3 scripts]$ dgmgrl -logfile $HOME/observer.log sys/oracle@orcl "start observer" &

16.) Let's test the configuration:
SQL> select FS_FAILOVER_STATUS,FS_FAILOVER_CURRENT_TARGET,FS_FAILOVER_THRESHOLD,FS_FAILOVER_OBSERVER_PRESENT,FS_FAILOVER_OBSERVER_HOST from v$database;

FS_FAILOVER_STATUS FS_FAILOVER_CURRENT_TARGET FS_FAILOVER_THRESHOLD FS_FAIL FS_FAILOVER_OBSERVER_HOST
--------------------- ------------------------------ --------------------- ------- ------------------------------
SYNCHRONIZED orcl1 30 YES wrpws


Switchover in both directions:
DGMGRL for Linux: Version 10.2.0.4.0 - 64bit Production

Copyright (c) 2000, 2005, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected.
DGMGRL> show configuration

Configuration
Name: DBTEST
Enabled: YES
Protection Mode: MaxAvailability
Fast-Start Failover: ENABLED
Databases:
orcl - Primary database
orcl1 - Physical standby database
- Fast-Start Failover target

Current status for "DBTEST":
SUCCESS

DGMGRL> switchover to orcl1;
Performing switchover NOW, please wait...
Operation requires shutdown of instance "orcl" on database "orcl"
Shutting down instance "orcl"...
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
Operation requires shutdown of instance "orcl" on database "orcl1"
Shutting down instance "orcl"...
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
Operation requires startup of instance "orcl" on database "orcl"
Starting instance "orcl"...
Unable to connect to database
ORA-12514: TNS:listener does not currently know of service requested in connect descriptor

Failed.
You are no longer connected to ORACLE
Please connect again.
Unable to start instance "orcl"
You must start instance "orcl" manually
Operation requires startup of instance "orcl" on database "orcl1"
You must start instance "orcl" manually
Switchover succeeded, new primary is "orcl1"

I had to manually start up the databases to check that they switched over properly; they did reverse roles on both sides.
I checked Metalink note 308943.1 to resolve the above problem.
The listener.ora entries:
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = orcl)
(ORACLE_HOME = /home/oracle/10.2)
(SID_NAME = orcl)
(SERVICE_NAME = orcl1)
)
(SID_DESC =
(GLOBAL_DBNAME = orcl1_DGMGRL) ##--->Should be db_unique_name_DGMGRL.db_domain
(ORACLE_HOME = /home/oracle/10.2)
(SID_NAME = orcl)
(SERVICE_NAME = orcl1)
)
)

After fixing the listener issue, the error was rectified and the switchover worked seamlessly.
DGMGRL> switchover to orcl1;
Performing switchover NOW, please wait...
Operation requires shutdown of instance "orcl" on database "orcl"
Shutting down instance "orcl"...
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
Operation requires shutdown of instance "orcl" on database "orcl1"
Shutting down instance "orcl"...
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
Operation requires startup of instance "orcl" on database "orcl"
Starting instance "orcl"...
ORACLE instance started.
Database mounted.
Operation requires startup of instance "orcl" on database "orcl1"
Starting instance "orcl"...
ORACLE instance started.
Database mounted.
Switchover succeeded, new primary is "orcl1"
DGMGRL> show configuration

Configuration
Name: DBTEST
Enabled: YES
Protection Mode: MaxAvailability
Fast-Start Failover: ENABLED
Databases:
orcl - Physical standby database
- Fast-Start Failover target
orcl1 - Primary database

Current status for "DBTEST":
SUCCESS

Switchover Back to the Primary:

DGMGRL> switchover to orcl;
Performing switchover NOW, please wait...
Operation requires shutdown of instance "orcl" on database "orcl1"
Shutting down instance "orcl"...
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
Operation requires shutdown of instance "orcl" on database "orcl"
Shutting down instance "orcl"...
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
Operation requires startup of instance "orcl" on database "orcl1"
Starting instance "orcl"...
ORACLE instance started.
Database mounted.
Operation requires startup of instance "orcl" on database "orcl"
Starting instance "orcl"...
ORACLE instance started.
Database mounted.
Switchover succeeded, new primary is "orcl"
DGMGRL>

This completes the testing for switchover.
It takes about 2.5 minutes to switch over.

Now the testing for Failover:

Check that there is enough flashback data available in the primary database, at least 30-40 minutes.

SQL> select (sysdate - oldest_flashback_time)*24*60 as history from v$flashback_database_log;

HISTORY
----------
486.133333

Let's create a table and insert some data to see whether there is any data loss during the failover.

SQL> create table x as select * from all_objects;

Table created.

SQL> select count(*) from x;

COUNT(*)
----------
9915

SQL> shutdown abort;
ORACLE instance shut down.
SQL> exit

Observer Log:

[W000 09/17 09:53:56.79] Failed to start the Observer.
Error: ORA-16647: could not start more than one observer

Failed.

[W000 09/17 10:02:22.38] Failed to start the Observer.
Observer started
[W000 09/17 10:02:53.82] Observer started.
Observer stopped
Observer started
[W000 09/17 10:15:42.88] Observer started.

15:14:04.26 Friday, September 18, 2009
Initiating fast-start failover to database "orcl1"...
Performing failover NOW, please wait...
Failover succeeded, new primary is "orcl1"
15:14:13.79 Friday, September 18, 2009

Status from dgmgrl on the new primary:
CentOS4.4-64:wrpws:oracle > dgmgrl sys/oracle
DGMGRL for Linux: Version 10.2.0.4.0 - 64bit Production

Copyright (c) 2000, 2005, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected.
DGMGRL> show configuration

Configuration
Name: DBTEST
Enabled: YES
Protection Mode: MaxAvailability
Fast-Start Failover: ENABLED
Databases:
orcl - Physical standby database (disabled)
- Fast-Start Failover target
orcl1 - Primary database

Current status for "DBTEST":
Warning: ORA-16608: one or more databases have warnings

Record count from the new primary:
CentOS4.4-64:wrpws:oracle > sqlplus sys as sysdba

SQL*Plus: Release 10.2.0.4.0 - Production on Fri Sep 18 15:17:32 2009

Copyright (c) 1982, 2007, Oracle. All Rights Reserved.

Enter password:

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select count(*) from x;

COUNT(*)
----------
9915

Now to reinstate the old primary as a standby:
I started the old primary in mount mode; after about 30 seconds I saw activity in observer.log:
15:25:27.74 Friday, September 18, 2009
Initiating reinstatement for database "orcl"...
Reinstating database "orcl", please wait...
Operation requires shutdown of instance "orcl" on database "orcl"
Shutting down instance "orcl"...
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
Operation requires startup of instance "orcl" on database "orcl"
Starting instance "orcl"...
ORACLE instance started.
Database mounted.
Continuing to reinstate database "orcl" ...
Reinstatement of database "orcl" succeeded
15:26:27.32 Friday, September 18, 2009

When you do show configuration in dgmgrl you will get the following error:
Warning: ORA-16817: unsynchronized Fast-Start Failover configuration

Just switch a couple of logfiles (alter system switch logfile;) in SQL*Plus and then check the output of dgmgrl; it should be fine:
CentOS4.4-64:wrpws:oracle > dgmgrl sys/oracle
DGMGRL for Linux: Version 10.2.0.4.0 - 64bit Production

Copyright (c) 2000, 2005, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected.
DGMGRL> show configuration

Configuration
Name: DBTEST
Enabled: YES
Protection Mode: MaxAvailability
Fast-Start Failover: ENABLED
Databases:
orcl - Physical standby database
- Fast-Start Failover target
orcl1 - Primary database

Current status for "DBTEST":
SUCCESS

DGMGRL>

Well this completes my testing and implementation of Fast-Start Failover!!!
I hope this helps.

How To: Configure Data Guard Broker

Configuring Data Guard Broker:
NOTE: You should have your physical standby already setup.
How to Setup Data Guard

1.) Check parameter DG_BROKER_START on primary:
SQL> show parameter DG_BROKER_START

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
dg_broker_start boolean FALSE

2.) Set DG_BROKER_START to true on primary:
SQL> alter system set DG_BROKER_START=true scope=both;

System altered.

3.)Check DG_BROKER_START on standby side:
SQL> show parameter DG_BROKER_START

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
dg_broker_start boolean FALSE

4.) Set DG_BROKER_START to true on standby:
SQL> alter system set DG_BROKER_START=true scope=both;

System altered.

5.) Define a service for the listener that includes the db_unique_name_DGMGRL.db_domain values for the GLOBAL_DBNAME:

Update the listener.ora file on primary and standby
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = orcl.abc.com)
(ORACLE_HOME = /opt/app/oracle/10.2)
(SID_NAME = orcl)
)
(SID_DESC =
(GLOBAL_DBNAME = orcl_DGMGRL.abc.com) --> db_unique_name_DGMGRL.db_domain
(ORACLE_HOME = /opt/app/oracle/10.2)
(SID_NAME = orcl)
)
)

6.) Now to create the data guard broker configurations:

[oracle@APP3 admin]$ dgmgrl
DGMGRL for Linux: Version 10.2.0.4.0 - 64bit Production

Copyright (c) 2000, 2005, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys/oracle
Connected.
DGMGRL> create configuration 'DBTEST'
> as primary database is 'orcl'
> connect identifier is orcl;
Configuration "DBTEST" created with primary database "orcl"
DGMGRL>

DGMGRL> show configuration

Configuration
Name: DBTEST
Enabled: NO
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLED
Databases:
orcl - Primary database

Current status for "DBTEST":
DISABLED

DGMGRL>

7.) Now add standby DB to the data broker configuration:

DGMGRL> add database 'orcl1' as
> connect identifier is orcl1
> maintained as physical;
Database "orcl1" added
DGMGRL>
DGMGRL> show configuration

Configuration
Name: DBTEST
Enabled: NO
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLED
Databases:
orcl - Primary database
orcl1 - Physical standby database

Current status for "DBTEST":
DISABLED

8.) Now enable the configurations:
DGMGRL> enable configuration
Enabled.
DGMGRL> show configuration

Configuration
Name: DBTEST
Enabled: YES
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLED
Databases:
orcl - Primary database
orcl1 - Physical standby database

Current status for "DBTEST":
Warning: ORA-16607: one or more databases have failed


DGMGRL> show database verbose orcl1

Database
Name: orcl1
Role: PHYSICAL STANDBY
Enabled: YES
Intended State: ONLINE
Instance(s):
orcl

Properties:
InitialConnectIdentifier = 'orcl1'
ObserverConnectIdentifier = ''
LogXptMode = 'ARCH'
Dependency = ''
DelayMins = '0'
Binding = 'OPTIONAL'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '180'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'MANUAL'
ArchiveLagTarget = '0'
LogArchiveMaxProcesses = '3'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = '/opt/app/oracle/oradata/orcl, /home/oracle/oradata/orcl'
LogFileNameConvert = '/opt/app/oracle/oradata/orcl, /home/oracle/oradata/orcl, /opt/app/oracle/flash_recovery_area/orcl/onlinelog, /home/oracle/oradata/flash_recovery_area/orcl'
FastStartFailoverTarget = ''
StatusReport = '(monitor)'
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
HostName = 'wrpws'
SidName = 'orcl'
LocalListenerAddress = '(ADDRESS=(PROTOCOL=tcp)(HOST=wrpws)(PORT=1521))'
StandbyArchiveLocation = '/home/oracle/oradata/orcl/archive'
AlternateLocation = ''
LogArchiveTrace = '0'
LogArchiveFormat = '%t_%s_%r.arc'
LatestLog = '(monitor)'
TopWaitEvents = '(monitor)'

Current status for "orcl1":
Error: ORA-12545: Connect failed because target host or object does not exist


DGMGRL>

The problem in my case was that the hostname was not getting resolved. I added the hostname to the hosts file and it started working.

After this, I got another error:
Current status for "orcl1":
Error: ORA-16664: unable to receive the result from a remote database

Same problem: communication between the two databases. Make sure the /etc/hosts files have entries for all the other servers, with name and IP address.
Always review the log_archive_dest_2 parameter to confirm it has the right entries and that the service is accessible.
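A quick sanity check on each side, assuming the TNS aliases used in these posts (orcl and orcl1):

SQL> show parameter log_archive_dest_2

[oracle@APP3 ~]$ tnsping orcl1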

DGMGRL> enable configuration
Enabled.
DGMGRL> show configuration

Configuration
Name: DBTEST
Enabled: YES
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLED
Databases:
orcl - Primary database
orcl1 - Physical standby database

Current status for "DBTEST":
SUCCESS

DGMGRL>

This completes the Data Guard Broker setup.

COMMANDS Available in DGMGRL:

DGMGRL> help

The following commands are available:

add Add a standby database to the broker configuration
connect Connect to an Oracle instance
create Create a broker configuration
disable Disable a configuration, a database, or Fast-Start Failover
edit Edit a configuration, database, or instance
enable Enable a configuration, a database, or Fast-Start Failover
exit Exit the program
failover Change a standby database to be the primary database
help Display description and syntax for a command
quit Exit the program
reinstate Change a disabled database into a viable standby database
rem Comment to be ignored by DGMGRL
remove Remove a configuration, database, or instance
show Display information about a configuration, database, or instance
shutdown Shutdown a currently running Oracle instance
start Start Fast-Start Failover observer
startup Start an Oracle database instance
stop Stop Fast-Start Failover observer
switchover Switch roles between the primary database and a standby database

Use "help " to see syntax for individual commands

Tuesday, September 15, 2009

Step by Step Data Guard Setup for Oracle 10g

Steps:
1.) Make sure archive log mode is enabled on your database:
SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /opt/app/oracle/oradata/orcl/archive
Oldest online log sequence 108
Next log sequence to archive 109
Current log sequence 109

SQL> select name, log_mode from v$database;

NAME LOG_MODE
--------- ------------
ORCL ARCHIVELOG

If archive log mode is not enabled, please enable it using the following link:
How to enable archivelog mode in Oracle 11g database
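For completeness, a minimal sketch of enabling archivelog mode (the same procedure as in the linked post; the database must be mounted but not open):

SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;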

2.) Enable force logging on the database, so that there are no problems with nologging operations in the future.

SQL> alter database force logging;

Database altered.

3.) Create password file, if you do not have one already.


[oracle@APP3 dbs]$ cd $ORACLE_HOME/dbs
[oracle@APP3 dbs]$ orapwd file=orapworcl password=oracle force=y
[oracle@APP3 dbs]$ ls -lrt orapworcl
-rw-r----- 1 oracle oinstall 1536 Sep 14 08:21 orapworcl

SQL> select * from v$pwfile_users;

USERNAME SYSDB SYSOP
------------------------------ ----- -----
SYS TRUE TRUE


4.) Create Standby Redo Logfiles on primary DB.

Current logfile:
SQL> col member format a40
SQL> select a.group#,a.status,a.member,b.bytes/1024/1024 from v$logfile a,v$log b
2 where a.group#=b.group#;

GROUP# STATUS MEMBER B.BYTES/1024/1024
---------- ------- ---------------------------------------- -----------------
1 /opt/app/oracle/oradata/orcl/redo01.log 50
2 /opt/app/oracle/oradata/orcl/redo02.log 50

Add standby redo log groups:

SQL> alter database add standby logfile group 3 size 50M;

Database altered.

SQL> alter database add standby logfile group 4 size 50M;

Database altered.

SQL> select * from v$logfile;

GROUP# STATUS TYPE MEMBER IS_
---------- ------- ------- --------------------------------------------------------------------------- ---
1 ONLINE /opt/app/oracle/oradata/ORCL/redo01.log NO
2 ONLINE /opt/app/oracle/oradata/ORCL/redo02.log NO
3 STANDBY /opt/app/oracle/flash_recovery_area/ORCL/onlinelog/o1_mf_3_5bvzkzgs_.log YES
4 STANDBY /opt/app/oracle/flash_recovery_area/ORCL/onlinelog/o1_mf_4_5bvzl8hf_.log YES

SQL> select * from v$standby_log;

GROUP# DBID THREAD# SEQUENCE# BYTES USED ARC STATUS FIRST_CHANGE# FIRST_TIM LAST_CHANGE# LAST_TIME
----------------------------------------------------------------------------------------------------------------------
3 UNASSIGNED 0 0 52428800 512 YES UNASSIGNED 0 0
4 UNASSIGNED 0 0 52428800 512 YES UNASSIGNED 0 0

5.) Check parameter db_unique_name
SQL> show parameters unique

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_unique_name string orcl

6.) Add standby-related entries to the primary database:
SQL> create pfile='/home/oracle/initprim.ora' from spfile;

Sample init.ora from Primary:
orcl.__db_cache_size=2097152000
orcl.__java_pool_size=16777216
orcl.__large_pool_size=16777216
orcl.__shared_pool_size=536870912
orcl.__streams_pool_size=0
*.audit_file_dest='/opt/app/oracle/admin/orcl/adump'
*.background_dump_dest='/opt/app/oracle/admin/orcl/bdump'
*.compatible='10.2.0.3.0'
*.control_files='/opt/app/oracle/oradata/orcl/control01.ctl','/opt/app/oracle/oradata/orcl/control02.ctl','/opt/app/oracle/oradata/orcl/control03.ctl'
*.core_dump_dest='/opt/app/oracle/admin/orcl/cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='orcl'
*.db_recovery_file_dest='/opt/app/oracle/flash_recovery_area'
*.db_recovery_file_dest_size=2147483648
*.db_unique_name='orcl'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=orclXDB)'
*.job_queue_processes=10
*.log_archive_dest_1='location=/opt/app/oracle/oradata/orcl/archive VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=orcl'
*.open_cursors=300
*.pga_aggregate_target=823132160
*.processes=500
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=2684354560
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/opt/app/oracle/admin/orcl/udump'
db_unique_name=orcl
LOG_ARCHIVE_CONFIG='DG_CONFIG=(orcl,orcl1)'
*.LOG_ARCHIVE_DEST_2='SERVICE=ORCL1 LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=orcl1'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=3
DB_FILE_NAME_CONVERT='/u01/oradata/orcl','/opt/app/oracle/oradata/orcl'
LOG_FILE_NAME_CONVERT='/u01/oradata/orcl/archive','/opt/app/oracle/oradata/orcl/archive','/u01/oradata/flash_recovery_area/orcl','/opt/app/oracle/flash_recovery_area/orcl/onlinelog'
FAL_SERVER=orcl1
FAL_CLIENT=orcl

Copy the init.ora and make the changes necessary for use on the standby side: locations of the various files, FAL_SERVER, FAL_CLIENT, etc.

Sample init.ora in Standby DB:
orcl.__db_cache_size=2097152000
orcl.__java_pool_size=16777216
orcl.__large_pool_size=16777216
orcl.__shared_pool_size=536870912
orcl.__streams_pool_size=0
*.audit_file_dest='/u01/oradata/orcl/adump'
*.background_dump_dest='/u01/oradata/orcl/bdump'
*.compatible='10.2.0.3.0'
*.control_files='/u01/oradata/orcl/control01.ctl','/u01/oradata/orcl/control02.ctl','/u01/oradata/orcl/control03.ctl'
*.core_dump_dest='/u01/oradata/orcl/cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='orcl'
*.db_recovery_file_dest='/u01/oradata/flash_recovery_area/orcl'
*.db_recovery_file_dest_size=2147483648
*.db_unique_name='orcl1'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=orclXDB)'
*.job_queue_processes=10
*.log_archive_dest_1='location=/u01/oradata/orcl/archive VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=orcl'
*.open_cursors=300
*.pga_aggregate_target=823132160
*.processes=500
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=2684354560
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/u01/oradata/orcl/udump'
LOG_ARCHIVE_CONFIG='DG_CONFIG=(orcl,orcl1)'
*.LOG_ARCHIVE_DEST_2='SERVICE=ORCL LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=orcl'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=3
DB_FILE_NAME_CONVERT='/opt/app/oracle/oradata/orcl','/u01/oradata/orcl'
LOG_FILE_NAME_CONVERT='/opt/app/oracle/oradata/orcl/archive','/u01/oradata/orcl/archive','/opt/app/oracle/flash_recovery_area/orcl/onlinelog','/u01/oradata/flash_recovery_area/orcl'
FAL_SERVER=orcl
FAL_CLIENT=orcl1

7.) Shut down the primary database. Use the newly created pfile to start the database in nomount, then create an spfile for the database. Mount the database and create a standby controlfile.

Shut down the database and take a cold backup of the database, all files including the redo log files. (You can also create a standby DB from a hot backup.)

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup nomount pfile='/home/oracle/pfileorcl.ora'
ORACLE instance started.

Total System Global Area 2684354560 bytes
Fixed Size 2086352 bytes
Variable Size 570427952 bytes
Database Buffers 2097152000 bytes
Redo Buffers 14688256 bytes
SQL> create spfile from pfile='/home/oracle/pfileorcl.ora';

File created.
Meanwhile I also received the error:
create spfile from pfile='/home/oracle/pfileregdb.ora'
*
ERROR at line 1:
ORA-00600: internal error code, arguments: [kspsetpao1], [1753], [1700], [*],
[user_dump_dest], [33], [], []

Note: This error usually occurs when the syntax of the pfile is wrong somewhere; fix the pfile and try again. That worked for me.

SQL> shutdown immediate;
ORA-01507: database not mounted


ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 2684354560 bytes
Fixed Size 2086352 bytes
Variable Size 570427952 bytes
Database Buffers 2097152000 bytes
Redo Buffers 14688256 bytes
Database mounted.
Database opened.

8.) Shutdown the database again and take a cold backup of all files.

9.) Create standby control file.
SQL> startup mount
ORACLE instance started.
Total System Global Area 2684354560 bytes
Fixed Size 2086352 bytes
Variable Size 570427952 bytes
Database Buffers 2097152000 bytes
Redo Buffers 14688256 bytes
Database mounted.

--Then mount and create a standby controlfile.
SQL> alter database create standby controlfile as 'standby.ctl';
Database altered.

-- Open the primary read write.
SQL> alter database open;
Database altered.

10.) Transfer all the files from the cold backup from the primary to the standby server. Also copy the password file from primary to standby.

Also copy the standby controlfile created in step 9 to the standby server, giving it the right name and location.
I use SFTP for transferring the files.

11.) Add TNS entries for the primary DB and the standby DB on both primary and standby servers, i.e. each server should have TNS entries for both its own database (orcl) and the other one (orcl1).
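A minimal sketch of the tnsnames.ora entries on both servers, assuming the host names seen elsewhere in these posts (APP3 for the primary, dbtest for the standby); adjust hosts and ports to your environment:

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = APP3)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl)
    )
  )

ORCL1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbtest)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl1)
    )
  )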

12.) Copy the pfile from step 6 for standby DB.
Now try to nomount the standby database with the new pfile.

[oracle@dbtest dbs]$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Tue Sep 15 04:57:32 2009

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup nomount pfile='/home/oracle/oracle/product/10.2.0/db_1/dbs/pfilestbregdb.ora';
ORACLE instance started.

Total System Global Area 1694498816 bytes
Fixed Size 1219784 bytes
Variable Size 402654008 bytes
Database Buffers 1275068416 bytes
Redo Buffers 15556608 bytes

13.) Create spfile from pfile.
SQL> create spfile from pfile='/home/oracle/oracle/product/10.2.0/db_1/dbs/pfilestbregdb.ora';

File created.

14.) Shutdown the DB and do a startup mount.
SQL>startup mount;

15.) Start REDO apply process:
SQL> alter database recover managed standby database disconnect from session;
OR
SQL> alter database recover managed standby database nodelay disconnect parallel 8;

16.) Verification.
SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;

To check archive gap:
SELECT thread#, low_sequence#, high_sequence# from V$archive_gap;
for RAC
SELECT thread#, low_sequence#, high_sequence# from gv$archive_gap;

To stop redo apply:
alter database recover managed standby database cancel;

17.) Check alert log files and verify that you did not receive any error.

18.) Switch some logfiles on the Primary and check if the same are getting applied to the standby.
on Primary:
SQL> alter system switch logfile;

on standby:
SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;

That should be it; your physical standby DB should be working fine.

Wednesday, August 12, 2009

Country ISO Code with Calling Code Data

Recently I was required to populate country name, country ISO code and country calling code data in a table in the database.
I compiled data from various sources and finally arrived at the data below. Please use it for reference; it is correct to the best of my knowledge.
In case you find any problem, please let me know.
In case you need insert statements for the database, please ask and I will send them to you.




I hope this is useful!!

Tuesday, August 4, 2009

Insert/Update XML data stored in CLOB Column

Recently I had a requirement to insert/update XML data stored directly in a CLOB column of a table by the application.
There was a requirement to update the XML node values directly, bypassing the application. I had no experience with the XML functions provided by Oracle, and the column is a normal CLOB column, one of the many columns the table has. An example of such a table:

Column Name     Data Type
--------------------------------------------------------------------------------
ID              NUMBER
FIRST_NAME      VARCHAR2(128)
LAST_NAME       VARCHAR2(128)
DETAILS         CLOB ---> This column stores the XML information.

The process that we need to follow:

Convert the CLOB column to XMLTYPE using XMLTYPE.createXML(column_name), e.g. XMLTYPE.createXML(details).
The important thing is to know the XML structure stored in the column, e.g.:


<details>
  <profile>
    <websiteUrl><![CDATA[www.yahoo.com]]></websiteUrl>
    <companyName><![CDATA[Google]]></companyName>
    <jobTitle><![CDATA[DBA]]></jobTitle>
    <addresses>
      <address find="addressType">
        <addressType>Address1</addressType>
        <streetLine1><![CDATA[123 XYZ 1111]]></streetLine1>
      </address>
      <address find="addressType">
        <addressType>Address2</addressType>
        <streetLine1><![CDATA[890 abc 2222]]></streetLine1>
      </address>
    </addresses>
    <phoneNumbers>
      <phoneNumber find="phoneType">
        <phoneType>Phone1</phoneType>
        <number><![CDATA[+1-111-111-1111]]></number>
      </phoneNumber>
      <phoneNumber find="phoneType">
        <phoneType>Fax1</phoneType>
        <number><![CDATA[222-222-2222]]></number>
      </phoneNumber>
      <phoneNumber find="phoneType">
        <phoneType>Mobile1</phoneType>
        <number><![CDATA[+1-333-333-3333]]></number>
      </phoneNumber>
    </phoneNumbers>
    <emails>
      <email find="emailType">
        <emailType>Email1</emailType>
        <email><![CDATA[hero@abc.com]]></email>
      </email>
      <email find="emailType">
        <emailType>Email2</emailType>
        <email><![CDATA[hero1@abc.com]]></email>
      </email>
    </emails>
  </profile>
</details>

Now queries to SELECT data from the XML column:

SQL> SELECT EXTRACTVALUE(XMLTYPE.createXML(details),'/details/profile/companyName/text()') XMLSrc FROM usera WHERE id='1';
XMLSRC
--------------------------------------------------------------------------------
Google
The above query gives you the exact value of the company name.

SQL> SELECT EXTRACT(XMLTYPE.createXML(details),'/details/profile/companyName/text()') XMLSrc FROM usera WHERE id='1';
XMLSRC
--------------------------------------------------------------------------------

The above query returns the complete XML fragment of the node.

SQL> SELECT EXTRACTVALUE(XMLTYPE.createXML(details),'/details/profile/addresses/address[addressType=''Address1'']/streetLine1/text()') XMLSrc FROM usera WHERE id='1';
XMLSRC
--------------------------------------------------------------------------------
123 XYZ 1111

The above example shows how to query data inside a nested node. Querying any node directly was easy, but I had to really dig in to find out how to query a node under a parent node of a particular type.

Now to Update some data:
SQL> UPDATE usera set details = UPDATEXML(XMLTYPE.createXML(details),'/details/profile/addresses/address[addressType=''Address1'']/streetLine1/text()','The World is Great').getClobVal() WHERE id=1;

1 row updated.

SQL> SELECT EXTRACTVALUE(XMLTYPE.createXML(details),'/details/profile/addresses/address[addressType=''Address1'']/streetLine1/text()') XMLSrc FROM usera WHERE id='1';
XMLSRC
--------------------------------------------------------------------------------
The World is Great

If you look at the above update, the details column was first converted to XMLTYPE, then the node was updated, and the result was converted back to CLOB using the getClobVal() function, since the column is of type CLOB, not XMLTYPE. If you do not use the getClobVal() function, you get the following error:

SQL> UPDATE usera set details = UPDATEXML(XMLTYPE.createXML(details),'/details/profile/addresses/address[addressType=''Address1'']/streetLine1/text()','The World is Great') WHERE id=1;
UPDATE usera set details = UPDATEXML(XMLTYPE.createXML(details),'/details/profile/addresses/address[
*
ERROR at line 1:
ORA-00932: inconsistent datatypes: expected CLOB got -

Now to add data into the XML:

SQL> UPDATE usera set details = APPENDCHILDXML(XMLTYPE.createXML(details),'/details/profile/addresses/address[addressType=''Address1'']',XMLTYPE('<streetLine2><![CDATA[xyz garbage]]></streetLine2>')).getClobVal() WHERE id=1;

1 row updated.

SQL> SELECT EXTRACTVALUE(XMLTYPE.createXML(details),'/details/profile/addresses/address[addressType=''Address1'']/streetLine2/text()') XMLSrc FROM usera WHERE id='1';
XMLSRC
--------------------------------------------------------------------------------
xyz garbage

In the append, we need to provide the path of the parent node and then the XML fragment to add.

A small test procedure to check whether the value exists, and append it only if it does not:
declare
  var1 clob;
begin
  SELECT EXTRACTVALUE(XMLTYPE.createXML(details),'/details/profile/addresses/address[addressType=''Address1'']/streetLine2/text()') into var1 FROM usera WHERE id=1;
  IF var1 is NULL then
    UPDATE usera set details = APPENDCHILDXML(XMLTYPE.createXML(details),'/details/profile/addresses/address[addressType=''Address1'']',XMLTYPE('<streetLine2><![CDATA[xyz garbage]]></streetLine2>')).getClobVal() WHERE id=1;
  END IF;
end;
/

For more XML functions, please refer to the Oracle XML Functions page.

Monday, July 20, 2009

How To: Set up Oracle Streams Replication

Note: I have started using TOAD now, so you won't see the statement completion output.

Step 1: Create stream administration user in both databases.

create user streamadmin identified by streamadmin default tablespace users;

Step 2: Required grants to the user streamadmin.

grant dba,select_catalog_role to streamadmin;
exec dbms_streams_auth.grant_admin_privilege('streamadmin',true);

Step 3: We will use the default SCOTT schema for setting up this replication.

Step 4: Check the database parameters required for setting up Streams replication.

For our example:
DB 1:
Name: TEST1
global_names should be TRUE
SQL> show parameter global_name
NAME TYPE VALUE
------------------------------------ ----------- --------------------------
global_names boolean TRUE

SQL> select * from global_name;
GLOBAL_NAME
-----------------------------------------------
TEST1.REGRESS.RDBMS.DEV.US.ORACLE.COM

SQL> show parameter job_queue

NAME TYPE VALUE
------------------------------------ ----------- ------
job_queue_processes integer 10

SQL> show parameter db_recovery_file_dest_size
NAME TYPE VALUE
------------------------------------ ----------- -----------
db_recovery_file_dest_size big integer 2G

SQL> show parameter db_recovery_file_dest

NAME TYPE VALUE
------------------------------------ ----------- -------------------------------------
db_recovery_file_dest string D:\oracle\product/flash_recovery_area

DB 2:
Name: TEST
global_names should be TRUE.
SQL> show parameter global_name
NAME TYPE VALUE
------------------------------------ ----------- -------
global_names boolean TRUE

SQL> select * from global_name;
GLOBAL_NAME
--------------------------------------
TEST.REGRESS.RDBMS.DEV.US.ORACLE.COM

SQL> show parameter job_queue
NAME TYPE VALUE
------------------------------------ ----------- ------
job_queue_processes integer 10

SQL> show parameter db_recovery_file_dest_size

NAME TYPE VALUE
------------------------------------ ----------- --------
db_recovery_file_dest_size big integer 2G

SQL> show parameter db_recovery_file_dest

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest string D:\oracle\product\10.2.0/flash_recovery_area

Step 5: Enable supplemental logging on the tables of the scott user in both the databases:

Oracle has two types of supplemental logging options:
1.) Unconditional supplemental logging: logs the before images of the specified columns any time the table is updated, regardless of whether the update affected any of the specified columns. This is sometimes referred to as an ALWAYS log group.
2.) Conditional supplemental log groups: log the before images of all specified columns only if at least one of the columns in the log group is updated.
(From Oracle Documentation)
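For contrast, an unconditional (ALWAYS) log group on EMP would be created like this; the group name here is just an example:

SQL> alter table emp add supplemental log group supp_log_emp_always (empno) always;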

Login to the SCOTT schema to start conditional supplemental logging:
SQL> alter table emp add supplemental log group supp_log_emp (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO);
Table altered.
SQL> alter table dept add supplemental log group supp_log_dept (deptno,dname,loc);
Table altered.
SQL> alter table bonus add supplemental log group supp_log_bonus (ename,job,sal,comm);
Table altered.
SQL> alter table salgrade add supplemental log group supp_log_sal (grade,losal,hisal);
Table altered.

The same needs to be done on the other database also.

SQL> select log_group_name, table_name from dba_log_groups where owner='SCOTT';
LOG_GROUP_NAME TABLE_NAME
------------------------------ ------------------------------
SUPP_LOG_EMP EMP
SUPP_LOG_DEPT DEPT
SUPP_LOG_BONUS BONUS
SUPP_LOG_SAL SALGRADE

Step 6: Create Database Links between the stream administrator users in the 2 databases.
SQL> CREATE DATABASE LINK TEST.REGRESS.RDBMS.DEV.US.ORACLE.COM CONNECT TO STREAMADMIN IDENTIFIED BY streamadmin USING 'TEST_REP2';
Database link created.

SQL> CREATE DATABASE LINK TEST1.REGRESS.RDBMS.DEV.US.ORACLE.COM CONNECT TO STREAMADMIN IDENTIFIED BY streamadmin USING 'TEST_REP1';
Database link created.

Step 7: Create Stream Queues under the streamadmin user to apply and capture the database changes to be replicated.
This also needs to run on both the databases as streamadmin.

SQL> show user
USER is "STREAMADMIN"
SQL> exec dbms_streams_adm.set_up_queue(queue_table => 'apply_q', queue_name => 'apply_q', queue_user => 'streamadmin');
PL/SQL procedure successfully completed.

SQL> exec dbms_streams_adm.set_up_queue(queue_table => 'capture_q',queue_name => 'capture_q',queue_user => 'streamadmin');
PL/SQL procedure successfully completed.

Step 8: Setup data capture on both the databases:
The dbms_streams_adm package is the administration package for setting up the capture process when using Streams, much like dbms_repcat when using normal replication.
See http://www.psoug.org/reference/dbms_streams_adm.html for a detailed reference on this package.

Related internal views: sys.streams$_process_params, sys.streams$_capture_process.

SQL> BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES (
SCHEMA_NAME => 'SCOTT',
STREAMS_TYPE => 'CAPTURE',
STREAMS_NAME => 'CAPTURE_STREAM',
QUEUE_NAME => 'CAPTURE_Q',
INCLUDE_DML => TRUE,
INCLUDE_DDL => TRUE,
INCLUSION_RULE => TRUE);
END;
/
PL/SQL procedure successfully completed.

The above procedure needs to run on both databases.

Step 9: Setup data apply on both the databases:
SQL> BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES (
SCHEMA_NAME => 'SCOTT',
STREAMS_TYPE => 'APPLY',
STREAMS_NAME => 'APPLY_STREAM',
QUEUE_NAME => 'APPLY_Q',
INCLUDE_DML => TRUE,
INCLUDE_DDL => TRUE,
SOURCE_DATABASE => 'TEST.REGRESS.RDBMS.DEV.US.ORACLE.COM');
END;
/
PL/SQL procedure successfully completed.

SQL> BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES (
SCHEMA_NAME => 'SCOTT',
STREAMS_TYPE => 'APPLY',
STREAMS_NAME => 'APPLY_STREAM',
QUEUE_NAME => 'APPLY_Q',
INCLUDE_DML => TRUE,
INCLUDE_DDL => TRUE,
SOURCE_DATABASE => 'TEST1.REGRESS.RDBMS.DEV.US.ORACLE.COM');
END;
/
PL/SQL procedure successfully completed.

Step 10: Set up the propagation process on both databases:
This basically links the capture process on one database to the apply process on the other.
These need to run as the streamadmin user.
DB 1:

SQL> BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES (
SCHEMA_NAME => 'SCOTT',
STREAMS_NAME => 'TEST1_TO_TEST',
SOURCE_QUEUE_NAME => 'CAPTURE_Q',
DESTINATION_QUEUE_NAME => 'APPLY_Q@TEST.REGRESS.RDBMS.DEV.US.ORACLE.COM',
INCLUDE_DML => TRUE,
INCLUDE_DDL => TRUE,
SOURCE_DATABASE => 'TEST1.REGRESS.RDBMS.DEV.US.ORACLE.COM');
END;
/
PL/SQL procedure successfully completed.

DB 2:
SQL> BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES (
SCHEMA_NAME => 'SCOTT',
STREAMS_NAME => 'TEST_TO_TEST1',
SOURCE_QUEUE_NAME => 'CAPTURE_Q',
DESTINATION_QUEUE_NAME => 'APPLY_Q@TEST1.REGRESS.RDBMS.DEV.US.ORACLE.COM',
INCLUDE_DML => TRUE,
INCLUDE_DDL => TRUE,
SOURCE_DATABASE => 'TEST.REGRESS.RDBMS.DEV.US.ORACLE.COM');
END;
/
PL/SQL procedure successfully completed.

Step 11: Setup schema instantiation SCN on DB 2 (TEST) & DB 1 (TEST1):

SQL> DECLARE
ISCN NUMBER;
BEGIN
ISCN := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN@TEST.REGRESS.RDBMS.DEV.US.ORACLE.COM (
SOURCE_SCHEMA_NAME => 'SCOTT',
SOURCE_DATABASE_NAME => 'TEST1.REGRESS.RDBMS.DEV.US.ORACLE.COM',
INSTANTIATION_SCN => ISCN,
RECURSIVE => TRUE);
END;
/
PL/SQL procedure successfully completed.

DECLARE
ISCN NUMBER;
BEGIN
ISCN := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN@TEST1.REGRESS.RDBMS.DEV.US.ORACLE.COM (
SOURCE_SCHEMA_NAME => 'SCOTT',
SOURCE_DATABASE_NAME => 'TEST.REGRESS.RDBMS.DEV.US.ORACLE.COM',
INSTANTIATION_SCN => ISCN,
RECURSIVE => TRUE);
END;
/
PL/SQL procedure successfully completed.

Step 12: Start capture and apply process:
Setting the disable_on_error parameter to ‘N’ allows the apply process to continue applying row LCRs even when it encounters errors. The default value is ‘Y’ which disables the apply process automatically on the first error encountered.
DB 1 (TEST1):

SQL> EXEC DBMS_APPLY_ADM.SET_PARAMETER (APPLY_NAME => 'APPLY_STREAM', PARAMETER => 'DISABLE_ON_ERROR', VALUE => 'N');
PL/SQL procedure successfully completed.

SQL> EXEC DBMS_APPLY_ADM.START_APPLY (APPLY_NAME => 'APPLY_STREAM');
PL/SQL procedure successfully completed.

SQL> EXEC DBMS_CAPTURE_ADM.START_CAPTURE (CAPTURE_NAME => 'CAPTURE_STREAM');
PL/SQL procedure successfully completed.

Same steps for the other DB also.

Step 13: Test the setup. (I just realized that testing falls on step 13; not good, not good.)
Actually, I faced a few problems with the test:
1.) The AQ_TM_PROCESSES parameter in my case was 0, and the data was not getting transferred.
2.) I had not set the instantiation SCN for the second DB.

I found the above errors from the below query:
SELECT APPLY_NAME,
SOURCE_DATABASE,
LOCAL_TRANSACTION_ID,
ERROR_NUMBER,
ERROR_MESSAGE,
MESSAGE_COUNT
FROM DBA_APPLY_ERROR;

After rectifying the error, apply the failed transaction (you get the local_transaction_id from the above query):
BEGIN
DBMS_APPLY_ADM.EXECUTE_ERROR(
local_transaction_id => '4.30.434',
execute_as_user => false);
END;
/

Anyway lets test the setup once again:
Insert test:
------------
SQL> connect scott/tiger@test_rep1
Connected.
SQL> insert into dept values (50,'RESEARCH','MEMPHIS');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from dept@TEST.REGRESS.RDBMS.DEV.US.ORACLE.COM;

DEPTNO DNAME LOC
---------- -------------- -------------
50 RESEARCH MEMPHIS
10 ACCOUNTING NEW YORK
20 RESEARCH DALLAS
30 SALES CHICAGO
40 OPERATIONS BOSTON

Update Test:
------------
SQL> connect scott/tiger@test_rep2
Connected.
SQL> update dept set loc='DELHI' where deptno=50;

1 row updated.

SQL> commit;

Commit complete.

SQL> select * from dept@TEST1.REGRESS.RDBMS.DEV.US.ORACLE.COM;

DEPTNO DNAME LOC
---------- -------------- -------------
50 RESEARCH DELHI
10 ACCOUNTING NEW YORK
20 RESEARCH DALLAS
30 SALES CHICAGO
40 OPERATIONS BOSTON

Finally, you can see that the basic replication setup is working.

How To: Check for Character Set Conversion Problems & Fix Them

Character set conversion was done without using csscan to verify whether it was possible, resulting in these errors:
ORA-06552: PL/SQL: Compilation unit analysis terminated
ORA-06553: PLS-553: character set name is not recognized

Character set was converted from WE8ISO8859P1 to AL32UTF8.
Helpful metalink note: 286964.1

The query below is used to check whether there is a character set mismatch in the database.
SQL> select distinct(nls_charset_name(charsetid)) CHARACTERSET,
decode(type#, 1, decode(charsetform, 1, 'VARCHAR2', 2, 'NVARCHAR2','UNKOWN'),
9, decode(charsetform, 1, 'VARCHAR', 2, 'NCHAR VARYING', 'UNKOWN'),
96, decode(charsetform, 1, 'CHAR', 2, 'NCHAR', 'UNKOWN'),
112, decode(charsetform, 1, 'CLOB', 2, 'NCLOB', 'UNKOWN')) TYPES_USED_IN
from sys.col$ where charsetform in (1,2) and type# in (1, 9, 96, 112) order by CHARACTERSET;

CHARACTERSET TYPES_USED_IN
---------------------------------------- -------------
AL16UTF16 NCHAR
AL16UTF16 NCLOB
AL16UTF16 NVARCHAR2
AL32UTF8 CLOB
AL32UTF8 VARCHAR2
WE8ISO8859P1 CHAR
WE8ISO8859P1 CLOB
WE8ISO8859P1 VARCHAR2

So we can see that the above query returns the old character set along with the new one; ideally it should not show any record of the old character set.
Here you can see that two different character sets are returned for VARCHAR2, CLOB & CHAR data, which indicates a mix-up in the database character set setup and is the cause of the PLS-553 error.

I had to work on this problem because someone accidentally changed the character set in a new development database, and then we started getting all sorts of issues. Now let's find out which tables & columns have problems.
For CHAR, VARCHAR2 and CLOB columns: I have used the old (problem) character set in the query below; replace it with the one you are having problems with.
SQL> set lines 199
SQL> col owner format a10
SQL> col table_name format a20
SQL> col column_name format a20
SQL> select OWNER, TABLE_NAME, COLUMN_NAME from DBA_TAB_COLS where COLUMN_ID in (select unique COL#
from sys.col$ where CHARSETID=NLS_CHARSET_ID('WE8ISO8859P1') and charsetform ='1')
order by OWNER, TABLE_NAME, COLUMN_NAME; 2 3

OWNER TABLE_NAME COLUMN_NAME
---------- -------------------- --------------------
TEST TEST_TABLE1 CODE
TEST TEST_TABLE1 CREATETIME
TEST TEST_TABLE1 LASTMODIFIEDTIME
TEST TEST_TABLE1 TYPEID
TEST TEST_TABLE2 ACTIONTYPE
.
.
.
49692 rows selected.

In my case almost all schemas in the database were affected, including SYS and SYSTEM.

For Nchar, Nvarchar2 and Nclob:
SQL> set lines 199
SQL> col owner format a10
SQL> col table_name format a20
SQL> col column_name format a20
SQL> select OWNER, TABLE_NAME, COLUMN_NAME from DBA_TAB_COLS where COLUMN_ID in (select unique COL#
from sys.col$ where CHARSETID=NLS_CHARSET_ID('WE8ISO8859P1') and charsetform ='2')
order by OWNER, TABLE_NAME, COLUMN_NAME;

no rows selected

The Oracle note suggests that if the character set conversion happened from a 7/8-bit character set such as WE8ISO8859P1 or US7ASCII to a multibyte character set such as UTF8 or AL32UTF8, there will be data loss for the CLOB columns that still show the old character set. So it is best to take a full backup of the database first, preferably using the traditional export utility.
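A minimal sketch of such a full export with the traditional exp utility (file names are just examples):

$ exp system/<password> full=y file=full_db_backup.dmp log=full_db_backup.log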

Check the following things:
SQL> sho parameter CLUSTER_DATABASE
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cluster_database boolean FALSE
cluster_database_instances integer 1
SQL> sho parameter PARALLEL_SERVER
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
parallel_server boolean FALSE
parallel_server_instances integer 1

The parameters CLUSTER_DATABASE & parallel_server should be false.

Save the current values of the following parameters (to restore later):
SQL> show parameter JOB_QUEUE_PROCESSES
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
job_queue_processes integer 10
SQL> show parameter AQ_TM_PROCESSES
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
aq_tm_processes integer 2

To start the fix:
Shut down the listener and make sure there are no connections to the database.
SQL> shutdown immediate;
SQL> STARTUP MOUNT;
ORACLE instance started.

Total System Global Area 926941184 bytes
Fixed Size 1222648 bytes
Variable Size 276826120 bytes
Database Buffers 641728512 bytes
Redo Buffers 7163904 bytes
Database mounted.
SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;
System altered.
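
In the standard Oracle procedure for a character set change, the job queue and AQ time manager processes are also set to 0 at this point (which is why we saved their values earlier), so that no background jobs run while the dictionary is updated; the saved values are restored at the end. A sketch:

SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
SQL> ALTER SYSTEM SET AQ_TM_PROCESSES=0;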

SQL> ALTER DATABASE OPEN;
Database altered.
SQL> COL VALUE NEW_VALUE CHARSET
SQL> SELECT VALUE FROM NLS_DATABASE_PARAMETERS WHERE PARAMETER='NLS_CHARACTERSET';
VALUE
--------------------------------------------------------------------------------
AL32UTF8

SQL> COL VALUE NEW_VALUE NCHARSET
SQL> SELECT VALUE FROM NLS_DATABASE_PARAMETERS WHERE PARAMETER='NLS_NCHAR_CHARACTERSET';
VALUE
--------------------------------------------------------------------------------
AL16UTF16

SQL> ALTER DATABASE CHARACTER SET INTERNAL_USE &CHARSET;
old 1: ALTER DATABASE CHARACTER SET INTERNAL_USE &CHARSET
new 1: ALTER DATABASE CHARACTER SET INTERNAL_USE AL32UTF8
Database altered.
SQL> ALTER DATABASE NATIONAL CHARACTER SET INTERNAL_USE &NCHARSET;
old 1: ALTER DATABASE NATIONAL CHARACTER SET INTERNAL_USE &NCHARSET
new 1: ALTER DATABASE NATIONAL CHARACTER SET INTERNAL_USE AL16UTF16
Database altered.
SQL> SHUTDOWN IMMEDIATE;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> STARTUP;
ORACLE instance started.
Total System Global Area 926941184 bytes
Fixed Size 1222648 bytes
Variable Size 276826120 bytes
Database Buffers 641728512 bytes
Redo Buffers 7163904 bytes
Database mounted.
Database opened.
SQL> SHUTDOWN IMMEDIATE;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area 926941184 bytes
Fixed Size 1222648 bytes
Variable Size 276826120 bytes
Database Buffers 641728512 bytes
Redo Buffers 7163904 bytes
Database mounted.
Database opened.

Note that the database needs to be restarted twice.

Check the database to see if there are any problems:
SQL> select distinct(nls_charset_name(charsetid)) CHARACTERSET,
decode(type#, 1, decode(charsetform, 1, 'VARCHAR2', 2, 'NVARCHAR2', 'UNKNOWN'),
9, decode(charsetform, 1, 'VARCHAR', 2, 'NCHAR VARYING', 'UNKNOWN'),
96, decode(charsetform, 1, 'CHAR', 2, 'NCHAR', 'UNKNOWN'),
112, decode(charsetform, 1, 'CLOB', 2, 'NCLOB', 'UNKNOWN')) TYPES_USED_IN
from sys.col$ where charsetform in (1,2) and type# in (1, 9, 96, 112);

CHARACTERSET TYPES_USED_IN
---------------------------------------- -------------
AL32UTF8 VARCHAR2
AL16UTF16 NCLOB
AL16UTF16 NVARCHAR2
AL32UTF8 CHAR
AL32UTF8 CLOB
AL16UTF16 NCHAR

6 rows selected.

SQL> select OWNER, TABLE_NAME, COLUMN_NAME from DBA_TAB_COLS where COLUMN_ID in (select unique COL#
from sys.col$ where CHARSETID=NLS_CHARSET_ID('WE8ISO8859P1') and charsetform ='1')
order by OWNER, TABLE_NAME, COLUMN_NAME;

no rows selected

SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=10;
System altered.

SQL> ALTER SYSTEM SET AQ_TM_PROCESSES=2;
System altered.

This procedure does not fix any data; it cannot recover data lost because of the character set change. Such data has to be restored from backup.
All it does is fix the references the tables have to the old character set. In other words, it repairs the data dictionary, not the data or any loss that has already happened.
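
Also, since PLS-553 errors leave PL/SQL objects invalid, it usually helps to recompile invalid objects once the dictionary is fixed, for example with the standard utlrp.sql script:

SQL> @?/rdbms/admin/utlrp.sql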

Thursday, May 14, 2009

Oracle Partitioning VIII - Partition Indexes

Indexing on Partitioned Tables:
Local Indexes:
A local index on a partitioned table has the same partitioning scheme as the table: each partition of a local index contains entries only for the corresponding table partition.
e.g.:
Creating a table first:
SQL> CREATE TABLE DATA_LOAD
( DATA_id NUMBER(6),
source_id NUMBER,
load_date DATE,
data_size NUMBER,
state VARCHAR2(20))
PARTITION BY RANGE (load_date)
SUBPARTITION BY LIST (state)
SUBPARTITION TEMPLATE (
SUBPARTITION NORTH_01 VALUES ('DELHI', 'JAMMU KASHMIR', 'PUNJAB', 'HARYANA') TABLESPACE DATA_01,
SUBPARTITION WEST_01 VALUES ('RAJASTHAN', 'GUJRAT', 'MAHARASTRA') TABLESPACE DATA_02,
SUBPARTITION SOUTH_01 VALUES ('TAMIL NADU', 'KERELA', 'KARNATAKA') TABLESPACE DATA_03,
SUBPARTITION EAST_01 VALUES ('ASSAM', 'WEST BENGAL', 'NAGALAND') TABLESPACE DATA_04,
SUBPARTITION NO_REGION VALUES (NULL) TABLESPACE USERS,
SUBPARTITION OTHER VALUES (DEFAULT) TABLESPACE USERS
)
( PARTITION data_2006 VALUES LESS THAN (TO_DATE('01-JAN-2007','DD-MON-YYYY')) ,
PARTITION data_2007 VALUES LESS THAN (TO_DATE('01-JAN-2008','DD-MON-YYYY')) ,
PARTITION data_2008 VALUES LESS THAN (TO_DATE('01-JAN-2009','DD-MON-YYYY')) ,
PARTITION data_2009 VALUES LESS THAN (TO_DATE('01-JAN-2010','DD-MON-YYYY'))
);
Table created.

SQL> create index DATA_LOAD_LOCAL1 on DATA_LOAD (load_date) LOCAL;
Index created.

SQL> insert into DATA_LOAD values (1,1,sysdate-1200,10,'DELHI');
1 row created.
SQL> insert into DATA_LOAD values (2,2,sysdate-1100,20,'RAJASTHAN');
1 row created.
SQL> insert into DATA_LOAD values (3,3,sysdate-1000,30,'KERELA');
1 row created.
SQL> insert into DATA_LOAD values (4,4,sysdate-1300,40,'ASSAM');
1 row created.
SQL> insert into DATA_LOAD values (5,5,sysdate-900,50,'NAGALAND');
1 row created.
SQL> insert into DATA_LOAD values (6,6,sysdate-300,60,'PUNJAB');
1 row created.
SQL> insert into DATA_LOAD values (7,7,sysdate,70,'HARYANA');
1 row created.
SQL> commit;
Commit complete.
SQL> analyze table DATA_LOAD compute statistics;
Table analyzed.

SQL> analyze index DATA_LOAD_LOCAL1 compute statistics ;
Index analyzed.

SQL> select INDEX_NAME,COMPOSITE,PARTITION_NAME,SUBPARTITION_COUNT,NUM_ROWS from user_ind_partitions where index_name='DATA_LOAD_LOCAL1';

INDEX_NAME COM PARTITION_NAME SUBPARTITION_COUNT NUM_ROWS
------------------------------ --- --------------- ------------------ ----------
DATA_LOAD_LOCAL1 YES DATA_2006 6 5
DATA_LOAD_LOCAL1 YES DATA_2007 6 0
DATA_LOAD_LOCAL1 YES DATA_2008 6 1
DATA_LOAD_LOCAL1 YES DATA_2009 6 1

So we can see that the index partitions were created the same way as the table partitions, with the same subpartitions as well.

Prefixed LOCAL Index:
A prefixed local index is one in which the leftmost column of the index definition is the same as the table partitioning key. The index above is a prefixed local index.
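
A quick way to see the benefit is to look at the execution plan of a query on the partitioning key; with the prefixed local index the optimizer can prune to a single index partition (visible in the Pstart/Pstop columns of the plan). A sketch:

SQL> EXPLAIN PLAN FOR SELECT * FROM DATA_LOAD WHERE load_date = TO_DATE('01-JAN-2009','DD-MON-YYYY');
SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);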

LOCAL NON-Prefixed Index:
As the name suggests, it is the opposite of a prefixed local index: the leftmost index column is not the partitioning key column.

SQL> create index DATA_LOAD_LOCAL2 on DATA_LOAD (source_id) LOCAL;
Index created.

SQL> analyze index DATA_LOAD_LOCAL2 compute statistics;
Index analyzed.

SQL> select INDEX_NAME,COMPOSITE,PARTITION_NAME,SUBPARTITION_COUNT,NUM_ROWS from user_ind_partitions where index_name='DATA_LOAD_LOCAL2';
INDEX_NAME COM PARTITION_NAME SUBPARTITION_COUNT NUM_ROWS
------------------------------ --- --------------- ------------------ ----------
DATA_LOAD_LOCAL2 YES DATA_2006 6 5
DATA_LOAD_LOCAL2 YES DATA_2007 6 0
DATA_LOAD_LOCAL2 YES DATA_2008 6 1
DATA_LOAD_LOCAL2 YES DATA_2009 6 1

We can see that even the non-prefixed index has the same partitioning scheme as the partitioned table.
Each partition of a non-prefixed local index can contain the whole range of index key values, so queries on the index key alone may have to probe every index partition, which can be inefficient at times unless we use the parallel option while selecting. The index partitions are still created on the table's partitioning key, while the index key is a different column.
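
By contrast, a query on source_id alone cannot be pruned by load_date, so the plan will typically show the index being scanned across all its partitions. A sketch:

SQL> EXPLAIN PLAN FOR SELECT * FROM DATA_LOAD WHERE source_id = 5;
SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);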

GLOBAL INDEXES:
Global indexes are indexes for which we define the partitioning scheme ourselves: we choose how many partitions the index has and which column it is partitioned on. A global index can be created with the table partitioning key as the leftmost column or with any other column, and in the CREATE statement we define the partition boundaries for that key.

SQL> CREATE INDEX DATA_LOAD_GLOBAL ON DATA_LOAD (load_date,data_size)
GLOBAL PARTITION BY RANGE (load_date)
(PARTITION p1 VALUES LESS THAN (TO_DATE('01-JAN-2008','DD-MON-YYYY')),
PARTITION p2 VALUES LESS THAN (MAXVALUE)
);
Index created.

SQL> select INDEX_NAME,COMPOSITE,PARTITION_NAME,SUBPARTITION_COUNT,NUM_ROWS from user_ind_partitions where index_name='DATA_LOAD_GLOBAL';
INDEX_NAME COM PARTITION_NAME SUBPARTITION_COUNT NUM_ROWS
------------------------------ --- --------------- ------------------ ----------
DATA_LOAD_GLOBAL NO P1 0 5
DATA_LOAD_GLOBAL NO P2 0 2

We can see that the partition definition of the index is different from the table.

Global indexes are most useful when indexing a column other than the table's partitioning key. In our case the table is partitioned on load_date; what if we query on data_size more often than on load_date? If we create a local index on data_size (a non-prefixed local index), its partitioning does not help such queries, because the partitions are based on the table's partitioning key.
In our case:
Every partition based on load_date can contain all values of data_size. Say the data_size column has values from 1 to 100:
then DATA_2006 can have values 1 to 100, DATA_2007 can have values 1 to 100, and so on. So the partitioning does not help queries on this index much.

Instead we can create a global index on this column and define the correct partition scheme for the index using data_size as the partition key for the index.
SQL> CREATE INDEX DATA_LOAD_GLOBAL2 on DATA_LOAD (data_size)
GLOBAL PARTITION BY RANGE (data_size)
(PARTITION p1 VALUES LESS THAN (30),
PARTITION p2 VALUES LESS THAN (50),
PARTITION p3 VALUES LESS THAN (80),
PARTITION p4 VALUES LESS THAN (MAXVALUE)
);
Index created.

SQL> select INDEX_NAME,COMPOSITE,PARTITION_NAME,SUBPARTITION_COUNT,NUM_ROWS from user_ind_partitions where index_name='DATA_LOAD_GLOBAL2';
INDEX_NAME COM PARTITION_NAME SUBPARTITION_COUNT NUM_ROWS
------------------------------ --- --------------- ------------------ ----------
DATA_LOAD_GLOBAL2 NO P2 0 2
DATA_LOAD_GLOBAL2 NO P3 0 3
DATA_LOAD_GLOBAL2 NO P4 0 0
DATA_LOAD_GLOBAL2 NO P1 0 2

Note: A global partitioned index must be prefixed: the leftmost index column and the index partitioning key must be the same, they cannot be different.
Otherwise we receive the error below. In this respect global indexes differ from local indexes, which may be non-prefixed.

SQL> CREATE INDEX DATA_LOAD_GLOBAL2 on DATA_LOAD (data_size)
GLOBAL PARTITION BY RANGE (source_id)
(PARTITION p1 VALUES LESS THAN (30),
PARTITION p2 VALUES LESS THAN (50),
PARTITION p3 VALUES LESS THAN (80),
PARTITION p4 VALUES LESS THAN (MAXVALUE)
);
GLOBAL PARTITION BY RANGE (source_id)
*
ERROR at line 2:
ORA-14038: GLOBAL partitioned index must be prefixed

Similarly we can create hash-partitioned global indexes (global indexes can be range or hash partitioned):

SQL> CREATE INDEX DATA_LOAD_GLOBAL3 on DATA_LOAD (data_size) GLOBAL PARTITION BY HASH (data_size) PARTITIONS 4;
CREATE INDEX DATA_LOAD_GLOBAL3 on DATA_LOAD (data_size) GLOBAL PARTITION BY HASH (data_size) PARTITIONS 4
*
ERROR at line 1:
ORA-01408: such column list already indexed
We received the above error because we had already created an index on that column list.

SQL> CREATE INDEX DATA_LOAD_GLOBAL3 on DATA_LOAD (data_size,source_id) GLOBAL PARTITION BY HASH (data_size) PARTITIONS 4;
Index created.

And so on....

Details about Oracle Partition Methods/Types

Oracle Partitioning - Exchange Partition

The need and use of exchange partitions:

I have used exchange partition to archive old data from existing partitioned tables. We have many very large partitioned tables that store data for past years; we exchange the older partitions, which no longer need to be part of the table, into a new table, export that table, and save the export on tape as archive data. This way we keep the data out of the database and can import it back whenever we need it. Exchange partition is also widely used in data warehousing environments to keep tables at a manageable size and maintain adequate performance, apart from data archiving, because an exchanged partition can be restored into the original table whenever needed.
We can also use exchange partition to convert a non-partitioned table into a partitioned table.
Syntax and example:
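
The general form (simplified) is:

ALTER TABLE <main_table>
  EXCHANGE PARTITION <partition_name> WITH TABLE <work_table>
  [INCLUDING | EXCLUDING INDEXES] [WITH | WITHOUT VALIDATION] [UPDATE GLOBAL INDEXES];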

SQL> CREATE TABLE DATA_LOAD
( DATA_id NUMBER(6),
source_id NUMBER,
load_date DATE,
data_size NUMBER,
state VARCHAR2(20))
PARTITION BY RANGE (load_date)
( PARTITION data_2006 VALUES LESS THAN (TO_DATE('01-Jan-2007','DD-MON-YYYY')) TABLESPACE DATA_01,
PARTITION data_2007 VALUES LESS THAN (TO_DATE('01-Jan-2008','DD-MON-YYYY')) TABLESPACE DATA_02,
PARTITION data_2008 VALUES LESS THAN (TO_DATE('01-Jan-2009','DD-MON-YYYY')) TABLESPACE DATA_03,
PARTITION data_2009 VALUES LESS THAN (TO_DATE('01-Jan-2010','DD-MON-YYYY')) TABLESPACE DATA_04
);
Table created.

SQL> insert into DATA_LOAD values (1,1,sysdate-1200,1,'CA');
1 row created.
SQL> insert into DATA_LOAD values (2,2,sysdate-1100,1,'LA');
1 row created.
SQL> insert into DATA_LOAD values (3,3,sysdate-1000,1,'FA');
1 row created.
SQL> insert into DATA_LOAD values (4,4,sysdate-1300,1,'GA');
1 row created.
SQL> insert into DATA_LOAD values (5,5,sysdate-900,1,'BA');
1 row created.
SQL> insert into DATA_LOAD values (6,6,sysdate-300,1,'RA');
1 row created.
SQL> insert into DATA_LOAD values (7,7,sysdate,1,'TA');
1 row created.
SQL> commit;
Commit complete.

SQL> set lines 300
SQL> select * from DATA_LOAD;
DATA_ID SOURCE_ID LOAD_DATE DATA_SIZE STATE
---------- ---------- --------- ---------- --------------------
1 1 30-JAN-06 1 CA
2 2 10-MAY-06 1 LA
3 3 18-AUG-06 1 FA
4 4 22-OCT-05 1 GA
5 5 26-NOV-06 1 BA
6 6 18-JUL-08 1 RA
7 7 14-MAY-09 1 TA
7 rows selected.

SQL> analyze table DATA_LOAD compute statistics;
Table analyzed.

SQL> select TABLE_NAME,PARTITION_NAME,num_rows from user_tab_partitions where TABLE_NAME='DATA_LOAD';
TABLE_NAME PARTITION_NAME NUM_ROWS
--------------- --------------- ----------
DATA_LOAD DATA_2006 5
DATA_LOAD DATA_2007 0
DATA_LOAD DATA_2008 1
DATA_LOAD DATA_2009 1

Now let's move partition DATA_2006 to a new table. First, create the table to exchange the partition with:
SQL> create table DATA_2006 as select * from DATA_LOAD where 1=2;
Table created.

SQL> select TABLE_NAME,PARTITION_NAME,num_rows from user_tab_partitions where TABLE_NAME='DATA_2006';
no rows selected

SQL> alter table DATA_LOAD EXCHANGE PARTITION DATA_2006 with table DATA_2006 without validation update global indexes;

Table altered.

SQL> select * from DATA_2006;
DATA_ID SOURCE_ID LOAD_DATE DATA_SIZE STATE
---------- ---------- --------- ---------- --------------------
1 1 30-JAN-06 1 CA
2 2 10-MAY-06 1 LA
3 3 18-AUG-06 1 FA
4 4 22-OCT-05 1 GA
5 5 26-NOV-06 1 BA

SQL> select * from DATA_LOAD;
DATA_ID SOURCE_ID LOAD_DATE DATA_SIZE STATE
---------- ---------- --------- ---------- --------------------
6 6 18-JUL-08 1 RA
7 7 14-MAY-09 1 TA

This was an example with single-level partitioning; now let's try a table with composite partitions:

SQL> drop table DATA_LOAD;
Table dropped.
SQL> drop table DATA_2006;
Table dropped.

SQL> CREATE TABLE DATA_LOAD
( DATA_id NUMBER(6),
source_id NUMBER,
load_date DATE,
data_size NUMBER,
state VARCHAR2(20))
PARTITION BY RANGE (load_date)
SUBPARTITION BY LIST (state)
SUBPARTITION TEMPLATE (
SUBPARTITION NORTH_01 VALUES ('DELHI', 'JAMMU KASHMIR', 'PUNJAB', 'HARYANA') TABLESPACE DATA_01,
SUBPARTITION WEST_01 VALUES ('RAJASTHAN', 'GUJRAT', 'MAHARASTRA') TABLESPACE DATA_02,
SUBPARTITION SOUTH_01 VALUES ('TAMIL NADU', 'KERELA', 'KARNATAKA') TABLESPACE DATA_03,
SUBPARTITION EAST_01 VALUES ('ASSAM', 'WEST BENGAL', 'NAGALAND') TABLESPACE DATA_04,
SUBPARTITION NO_REGION VALUES (NULL) TABLESPACE USERS,
SUBPARTITION OTHER VALUES (DEFAULT) TABLESPACE USERS
)
( PARTITION data_2006 VALUES LESS THAN (TO_DATE('01-JAN-2007','DD-MON-YYYY')) ,
PARTITION data_2007 VALUES LESS THAN (TO_DATE('01-JAN-2008','DD-MON-YYYY')) ,
PARTITION data_2008 VALUES LESS THAN (TO_DATE('01-JAN-2009','DD-MON-YYYY')) ,
PARTITION data_2009 VALUES LESS THAN (TO_DATE('01-JAN-2010','DD-MON-YYYY'))
);
Table created.

SQL> insert into DATA_LOAD values (1,1,sysdate-1200,1,'DELHI');
1 row created.
SQL> insert into DATA_LOAD values (2,2,sysdate-1100,1,'RAJASTHAN');
1 row created.
SQL> insert into DATA_LOAD values (3,3,sysdate-1000,1,'KERELA');
1 row created.
SQL> insert into DATA_LOAD values (4,4,sysdate-1300,1,'ASSAM');
1 row created.
SQL> insert into DATA_LOAD values (5,5,sysdate-900,1,'NAGALAND');
1 row created.
SQL> insert into DATA_LOAD values (6,6,sysdate-300,1,'PUNJAB');
1 row created.
SQL> insert into DATA_LOAD values (7,7,sysdate,1,'HARYANA');
1 row created.
SQL> commit;
Commit complete.
SQL> analyze table DATA_LOAD compute statistics;
Table analyzed.

SQL> select TABLE_NAME,PARTITION_NAME,num_rows,SUBPARTITION_COUNT from user_tab_partitions where TABLE_NAME='DATA_LOAD';
TABLE_NAME PARTITION_NAME NUM_ROWS SUBPARTITION_COUNT
--------------- --------------- ---------- ------------------
DATA_LOAD DATA_2006 5 6
DATA_LOAD DATA_2007 0 6
DATA_LOAD DATA_2008 1 6
DATA_LOAD DATA_2009 1 6

SQL> select TABLE_NAME,PARTITION_NAME,SUBPARTITION_NAME,num_rows from user_tab_subpartitions where TABLE_NAME='DATA_LOAD';
TABLE_NAME PARTITION_NAME SUBPARTITION_NAME NUM_ROWS
--------------- --------------- ------------------------------ ----------
DATA_LOAD DATA_2006 DATA_2006_OTHER 0
DATA_LOAD DATA_2006 DATA_2006_NO_REGION 0
DATA_LOAD DATA_2006 DATA_2006_EAST_01 2
DATA_LOAD DATA_2006 DATA_2006_SOUTH_01 1
DATA_LOAD DATA_2006 DATA_2006_WEST_01 1
DATA_LOAD DATA_2006 DATA_2006_NORTH_01 1
DATA_LOAD DATA_2007 DATA_2007_OTHER 0
DATA_LOAD DATA_2007 DATA_2007_NO_REGION 0
DATA_LOAD DATA_2007 DATA_2007_EAST_01 0
DATA_LOAD DATA_2007 DATA_2007_SOUTH_01 0
DATA_LOAD DATA_2007 DATA_2007_WEST_01 0
DATA_LOAD DATA_2007 DATA_2007_NORTH_01 0
DATA_LOAD DATA_2008 DATA_2008_OTHER 0
DATA_LOAD DATA_2008 DATA_2008_NO_REGION 0
DATA_LOAD DATA_2008 DATA_2008_EAST_01 0
DATA_LOAD DATA_2008 DATA_2008_SOUTH_01 0
DATA_LOAD DATA_2008 DATA_2008_WEST_01 0
DATA_LOAD DATA_2008 DATA_2008_NORTH_01 1
DATA_LOAD DATA_2009 DATA_2009_OTHER 0
DATA_LOAD DATA_2009 DATA_2009_NO_REGION 0
DATA_LOAD DATA_2009 DATA_2009_EAST_01 0
DATA_LOAD DATA_2009 DATA_2009_SOUTH_01 0
DATA_LOAD DATA_2009 DATA_2009_WEST_01 0
DATA_LOAD DATA_2009 DATA_2009_NORTH_01 1
24 rows selected.

Now lets create the table to exchange the partition with:

SQL> create table DATA_2006 as select * from DATA_LOAD where 1=2;
Table created.
Now try to exchange the partition DATA_2006 with the table DATA_2006:

SQL> alter table DATA_LOAD EXCHANGE PARTITION DATA_2006 with table DATA_2006 without validation update global indexes;
alter table DATA_LOAD EXCHANGE PARTITION DATA_2006 with table DATA_2006 without validation update global indexes
*
ERROR at line 1:
ORA-14291: cannot EXCHANGE a composite partition with a non-partitioned table

SQL> !oerr ora 14291
14291, 00000, "cannot EXCHANGE a composite partition with a non-partitioned table"
// *Cause: A composite partition can only be exchanged with a partitioned
// table.
// *Action: Ensure that the table being exchanged is partitioned or that
//          the partition being exchanged is non-composite.


This shows that we cannot exchange a composite partition with a non-partitioned table as we did last time. Now let's create the correct table:

SQL> drop table DATA_2006;
Table dropped.

To exchange a composite partition, we need to create a table whose partitioning scheme matches the subpartitioning scheme of the main table:

SQL> CREATE TABLE DATA_2006
( DATA_id NUMBER(6),
source_id NUMBER,
load_date DATE,
data_size NUMBER,
state VARCHAR2(20))
PARTITION BY LIST (state)
(PARTITION NORTH_01 VALUES ('DELHI', 'JAMMU KASHMIR', 'PUNJAB', 'HARYANA') TABLESPACE DATA_01,
PARTITION WEST_01 VALUES ('RAJASTHAN', 'GUJRAT', 'MAHARASTRA') TABLESPACE DATA_02,
PARTITION SOUTH_01 VALUES ('TAMIL NADU', 'KERELA', 'KARNATAKA') TABLESPACE DATA_03,
PARTITION EAST_01 VALUES ('ASSAM', 'WEST BENGAL', 'NAGALAND') TABLESPACE DATA_04,
PARTITION NO_REGION VALUES (NULL) TABLESPACE USERS,
PARTITION OTHER VALUES (DEFAULT) TABLESPACE USERS
);
Table created.

SQL> alter table DATA_LOAD EXCHANGE PARTITION DATA_2006 with table DATA_2006 without validation update global indexes;
Table altered.

Let's check how the partitions are laid out on the new table. You can see that the subpartition data has moved into the corresponding partitions of the exchanged table.

SQL> select * from DATA_2006;
DATA_ID SOURCE_ID LOAD_DATE DATA_SIZE STATE
---------- ---------- --------- ---------- --------------------
1 1 30-JAN-06 1 DELHI
2 2 10-MAY-06 1 RAJASTHAN
3 3 18-AUG-06 1 KERELA
4 4 22-OCT-05 1 ASSAM
5 5 26-NOV-06 1 NAGALAND

SQL> select * from DATA_LOAD;
DATA_ID SOURCE_ID LOAD_DATE DATA_SIZE STATE
---------- ---------- --------- ---------- --------------------
6 6 18-JUL-08 1 PUNJAB
7 7 14-MAY-09 1 HARYANA

SQL> select TABLE_NAME,PARTITION_NAME,num_rows,SUBPARTITION_COUNT from user_tab_partitions where TABLE_NAME='DATA_2006';
TABLE_NAME PARTITION_NAME NUM_ROWS SUBPARTITION_COUNT
--------------- --------------- ---------- ------------------
DATA_2006 NORTH_01 1 0
DATA_2006 WEST_01 1 0
DATA_2006 SOUTH_01 1 0
DATA_2006 EAST_01 2 0
DATA_2006 NO_REGION 0 0
DATA_2006 OTHER 0 0
6 rows selected.

The main table still has the partition definition, but with no rows once we re-analyze it. We can now either leave this empty partition in place or drop it (see the sketch below).
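
If we decided to drop the empty partition (we keep it here, since we restore the data below), the command would be:

SQL> ALTER TABLE DATA_LOAD DROP PARTITION DATA_2006 UPDATE GLOBAL INDEXES;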

SQL> select TABLE_NAME,PARTITION_NAME,num_rows,SUBPARTITION_COUNT from user_tab_partitions where TABLE_NAME='DATA_LOAD';
TABLE_NAME PARTITION_NAME NUM_ROWS SUBPARTITION_COUNT
--------------- --------------- ---------- ------------------
DATA_LOAD DATA_2006 0 6
DATA_LOAD DATA_2007 0 6
DATA_LOAD DATA_2008 1 6
DATA_LOAD DATA_2009 1 6

We can also restore the data back into the main table: we simply exchange the newly created table back with the now-empty partition whenever we need the data in the main table again. Hence, if we archive data by exchange partition, we can always restore it. Note that if a restore might be needed, we should not drop the empty partition, since restoring is then a single exchange; otherwise we would have to split a partition to recreate it first (see the sketch after the example below).

SQL> alter table DATA_LOAD EXCHANGE PARTITION DATA_2006 with table DATA_2006 without validation update global indexes;
Table altered.

SQL> select * from DATA_2006;
no rows selected

SQL> select * from DATA_LOAD;
DATA_ID SOURCE_ID LOAD_DATE DATA_SIZE STATE
---------- ---------- --------- ---------- --------------------
1 1 30-JAN-06 1 DELHI
2 2 10-MAY-06 1 RAJASTHAN
3 3 18-AUG-06 1 KERELA
4 4 22-OCT-05 1 ASSAM
5 5 26-NOV-06 1 NAGALAND
6 6 18-JUL-08 1 PUNJAB
7 7 14-MAY-09 1 HARYANA
7 rows selected.

SQL> select TABLE_NAME,PARTITION_NAME,num_rows,SUBPARTITION_COUNT from user_tab_partitions where TABLE_NAME='DATA_LOAD';
TABLE_NAME PARTITION_NAME NUM_ROWS SUBPARTITION_COUNT
--------------- --------------- ---------- ------------------
DATA_LOAD DATA_2006 5 6
DATA_LOAD DATA_2007 0 6
DATA_LOAD DATA_2008 1 6
DATA_LOAD DATA_2009 1 6
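
Had the empty partition been dropped, restoring the data would first require recreating the partition by splitting the next partition at the old boundary; a sketch:

SQL> ALTER TABLE DATA_LOAD SPLIT PARTITION DATA_2007 AT (TO_DATE('01-JAN-2007','DD-MON-YYYY'))
     INTO (PARTITION DATA_2006, PARTITION DATA_2007);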

Useful metalink notes:
Exchange Partitions - Common Problems, Doc ID: 198120.1

SCRIPT: To Automate the Composite-Partitioned Table Exchange, Doc ID: 100701.1

Exchange Partition Having Global Index with Non - Partitioned Table, Doc ID: 419505.1

Note: When we exchange a partition, the corresponding local index partitions become unusable, so we need to rebuild them (a sketch follows below). If we specify UPDATE GLOBAL INDEXES, the global indexes do not become invalid and there is no need to rebuild them.
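
A sketch of locating and rebuilding the unusable local index partitions (for a composite-partitioned table, rebuild the subpartitions instead):

SQL> SELECT index_name, partition_name, status FROM user_ind_partitions WHERE status = 'UNUSABLE';
SQL> ALTER INDEX DATA_LOAD_LOCAL1 REBUILD PARTITION DATA_2006;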