I wanted to allow a database user/schema called TEST (one without SYSDBA privileges) to run dbms_metadata.get_ddl for any object in the database, basically for objects in other schemas. Even though I granted execute on the package, and even the DBA privilege, the user still got:
16:37:39 SQL> select dbms_metadata.get_ddl('PACKAGE','PACK1','SCOTT') from dual;
ERROR:
ORA-31603: object "PACK1" of type PACKAGE not found in schema "SCOTT"
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 105
ORA-06512: at "SYS.DBMS_METADATA", line 3912
ORA-06512: at "SYS.DBMS_METADATA", line 5678
ORA-06512: at line 1
The privileges required to allow a user/schema (TEST) to run the package are:
grant SELECT_CATALOG_ROLE to TEST;
The most important step is:
ALTER USER TEST DEFAULT ROLE SELECT_CATALOG_ROLE;
Just granting SELECT_CATALOG_ROLE to the user/schema is not enough; you would still get ORA-31603. You must also alter the user so that SELECT_CATALOG_ROLE is one of its default roles.
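As a quick sanity check, once the role is made default, TEST can verify that it is active by reconnecting and querying session_roles (this check is my addition, not from the original session):

```sql
-- run as TEST in a fresh session
select role from session_roles where role = 'SELECT_CATALOG_ROLE';
```

If the role shows up here, dbms_metadata.get_ddl should work for objects in other schemas.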
Thursday, December 1, 2011
dbms_metadata.get_ddl & ORA-31603
Posted by Apun Hiran at 6:24 AM 1 Comments
Labels: dbms_metadata, get_ddl, ORA-31603, oracle
Thursday, September 1, 2011
How to Move OCR and Voting Disk to ASM Diskgroup in 11gR2
Case: Moving to Red Hat Linux 5 / Oracle Enterprise Linux 5, which does not support the raw filesystem.
Steps (these can be performed on 11gR2):
1.) Check the current location of the OCR file:
[root@host1 bin]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version                  :          3
Total space (kbytes)     :     262120
Used space (kbytes)      :       3004
Available space (kbytes) :     259116
ID                       :  798462606
Device/File Name         : /mnt/cludata/ocrfile
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check succeeded

2.) Add a mirror OCR location, else you might encounter this error:
PROT-28: Cannot delete or replace the only configured Oracle Cluster Registry location
i.) Create the file (as root)
cp /dev/null /mnt/cludata/ocrmirror
chown root:dba /mnt/cludata/ocrmirror
chmod 640 /mnt/cludata/ocrmirror

[root@host1 cludata]# pwd
/rmanbkp1/cludata
[root@host1 cludata]# ls -lrt
total 23784
-rw-r----- 1 root   dba 272756736 Aug 31 12:37 ocrfile
-rw-r----- 1 root   dba         0 Aug 31 13:12 ocrmirror
-rw-r----- 1 oracle dba  21004288 Aug 31 13:12 cssfile
ii.) Add the mirror OCR file
[root@host1 cludata]# ocrconfig -add /rmanbkp1/cludata/ocrmirror
[root@host1 cludata]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version                  :          3
Total space (kbytes)     :     262120
Used space (kbytes)      :       3004
Available space (kbytes) :     259116
ID                       :  798462606
Device/File Name         : /mnt/cludata/ocrfile
Device/File integrity check succeeded
Device/File Name         : /mnt/cludata/ocrmirror
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check succeeded

3.) Check ASM diskgroup compatibility (log in with sqlplus / as sysasm):
col name format a10
col COMPATIBILITY format a20
col DATABASE_COMPATIBILITY format a20
select name,COMPATIBILITY,DATABASE_COMPATIBILITY from v$asm_diskgroup;

NAME       COMPATIBILITY        DATABASE_COMPATIBILI
---------- -------------------- --------------------
DG_ARCH01  10.1.0.0.0           10.1.0.0.0
DG_SYS01   10.1.0.0.0           10.1.0.0.0
DG_TEMP01  10.1.0.0.0           10.1.0.0.0
DG_DATA01  10.1.0.0.0           10.1.0.0.0

4.) Upgrade the compatibility to 11.2. First compatible.asm:
SQL> alter diskgroup DG_ARCH01 set attribute 'compatible.asm'='11.2';
Diskgroup altered.
SQL> alter diskgroup DG_SYS01 set attribute 'compatible.asm'='11.2';
Diskgroup altered.
SQL> alter diskgroup DG_TEMP01 set attribute 'compatible.asm'='11.2';
Diskgroup altered.
SQL> alter diskgroup DG_DATA01 set attribute 'compatible.asm'='11.2';
Diskgroup altered.

SQL> select name,COMPATIBILITY,DATABASE_COMPATIBILITY from v$asm_diskgroup;

NAME       COMPATIBILITY        DATABASE_COMPATIBILI
---------- -------------------- --------------------
DG_ARCH01  11.2.0.0.0           10.1.0.0.0
DG_SYS01   11.2.0.0.0           10.1.0.0.0
DG_TEMP01  11.2.0.0.0           10.1.0.0.0
DG_DATA01  11.2.0.0.0           10.1.0.0.0

Then update compatible.rdbms:
SQL> alter diskgroup DG_ARCH01 set attribute 'compatible.rdbms'='11.2';
Diskgroup altered.
SQL> alter diskgroup DG_SYS01 set attribute 'compatible.rdbms'='11.2';
Diskgroup altered.
SQL> alter diskgroup DG_TEMP01 set attribute 'compatible.rdbms'='11.2';
Diskgroup altered.
SQL> alter diskgroup DG_DATA01 set attribute 'compatible.rdbms'='11.2';
Diskgroup altered.

SQL> select name,COMPATIBILITY,DATABASE_COMPATIBILITY from v$asm_diskgroup;

NAME       COMPATIBILITY        DATABASE_COMPATIBILI
---------- -------------------- --------------------
DG_ARCH01  11.2.0.0.0           11.2.0.0.0
DG_SYS01   11.2.0.0.0           11.2.0.0.0
DG_TEMP01  11.2.0.0.0           11.2.0.0.0
DG_DATA01  11.2.0.0.0           11.2.0.0.0

5.) Move the OCR files to ASM:
ocrconfig -replace /rmanbkp1/cludata/ocrfile -replacement +DG_ARCH01
ocrconfig -replace /rmanbkp1/cludata/ocrmirror -replacement +DG_SYS01

6.) Check OCR:
[root@host1 cludata]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version                  :          3
Total space (kbytes)     :     262120
Used space (kbytes)      :       3004
Available space (kbytes) :     259116
ID                       :  798462606
Device/File Name         : +DG_ARCH01
Device/File integrity check succeeded
Device/File Name         : +DG_SYS01
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check succeeded

7.) Restart CRS to check that it comes back fine.
NOTE: You will not be able to stop just ASM now, as the OCR file is in ASM
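For reference, a typical restart sequence would be run as root from the Grid Infrastructure home (the path below is taken from the cluvfy invocation later in this post; adjust for your environment):

```
/home/oragrid/product/11.2/bin/crsctl stop crs
/home/oragrid/product/11.2/bin/crsctl start crs
/home/oragrid/product/11.2/bin/crsctl check crs
```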
8.) Check OCR:( as oracle user )
[oracle@host1]~% /home/oragrid/product/11.2/bin/cluvfy comp ocr -n all -verbose

Verifying OCR integrity
Checking OCR integrity...
Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations
ASM Running check passed. ASM is running on all cluster nodes
Checking OCR config file "/etc/oracle/ocr.loc"...
OCR config file "/etc/oracle/ocr.loc" check successful
Disk group for ocr location "+DG_SYS01" available on all the nodes
Disk group for ocr location "+DG_ARCH01" available on all the nodes
Checking size of the OCR location "+DG_SYS01" ...
Size check for OCR location "+DG_SYS01" successful...
Size check for OCR location "+DG_SYS01" successful...
Size check for OCR location "+DG_SYS01" successful...
Size check for OCR location "+DG_SYS01" successful...
Checking size of the OCR location "+DG_ARCH01" ...
Size check for OCR location "+DG_ARCH01" successful...
Size check for OCR location "+DG_ARCH01" successful...
Size check for OCR location "+DG_ARCH01" successful...
Size check for OCR location "+DG_ARCH01" successful...
WARNING: This check does not verify the integrity of the OCR contents. Execute 'ocrcheck' as a privileged user to verify the contents of OCR.
OCR integrity check passed
Verification of OCR integrity was successful.

9.) Move the voting disk:
[root@host1 cludata]# crsctl replace votedisk +DG_ARCH01
CRS-4256: Updating the profile
Successful addition of voting disk aa5d2a1024384faebf3f2c108e6b203a.
Successful deletion of voting disk 253472991d324fb9bfab1f1301385697.
Successfully replaced voting disk group with +DG_ARCH01.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced

10.) Check the voting disk:
[root@host1 cludata]# crsctl query css votedisk
##  STATE    File Universal Id                 File Name     Disk group
--  -----    -----------------                 ---------     ----------
 1. ONLINE   aa5d2a1024384faebf3f2c108e6b203a  (ORCL:DISK2)  [DG_ARCH01]
Located 1 voting disk(s).

[root@host1 cludata]# crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

11.) Check the contents of the ocr.loc file:
[root@host1 cludata]# more /etc/oracle/ocr.loc
#Device/file /mnt/cludata/ocrmirror getting replaced by device +DG_SYS01
ocrconfig_loc=+DG_ARCH01
ocrmirrorconfig_loc=+DG_SYS01
local_only=false

ALL LOOKS GOOD NOW !!!
Posted by Apun Hiran at 2:49 AM 2 Comments
Thursday, July 21, 2011
Oracle: Rename single/RAC Database
Method 1: Recreate the control file with the new DB name, the old tried-and-tested method.
Method 2: Use the new NID (DBNEWID) utility.
Steps to rename Databases using the new NID utility:
1.) Stop database
[oracle@testing1]~% srvctl status database -d test
Instance test1 is running on node testing1
Instance test2 is running on node testing2
srvctl stop database -d test
2.) startup mount the database:
[oracle@testing1]~% sqlplus / as sysdba
SQL*Plus: Release 11.2.0.2.0 Production on Thu Jul 21 00:50:44 2011
Copyright (c) 1982, 2010, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup mount
ORACLE instance started.
Total System Global Area 1.2527E+10 bytes
Fixed Size 2238104 bytes
Variable Size 8120174952 bytes
Database Buffers 4294967296 bytes
Redo Buffers 109346816 bytes
Database mounted.
3.) Exit and run the "nid" utility
nid sys/oracle@test DBNAME=new
[oracle@testing1]~% nid sys/m4gent4 DBNAME=new
DBNEWID: Release 11.2.0.2.0 - Production on Thu Jul 21 00:55:12 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
NID-00002: Parse error: LRM-00108: invalid positional parameter value 'sys/m4gent4'
Change of database ID failed during validation - database is intact.
DBNEWID - Completed with validation errors.
Correct the syntax:
nid target=sys/oracle@test DBNAME=new
[oracle@testing1]~% nid target=sys/m4gent4 DBNAME=new
DBNEWID: Release 11.2.0.2.0 - Production on Thu Jul 21 00:56:04 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to database test (DBID=3952648861)
NID-00120: Database should be mounted exclusively
Change of database name failed during validation - database is intact.
DBNEWID - Completed with validation errors.
In case of RAC, we need to set cluster_database=FALSE to change the DB name, then shut down and mount the database again.
SQL> alter system set cluster_database=FALSE scope=spfile;
System altered.
SQL> shutdown immediate;
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.
Total System Global Area 1.2527E+10 bytes
Fixed Size 2238104 bytes
Variable Size 8120174952 bytes
Database Buffers 4294967296 bytes
Redo Buffers 109346816 bytes
Database mounted.
nid target=sys/oracle@test DBNAME=new
[oracle@testing1]~% nid target=sys/m4gent4 DBNAME=new
DBNEWID: Release 11.2.0.2.0 - Production on Thu Jul 21 00:59:54 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to database test (DBID=3952648861)
Connected to server version 11.2.0
Control Files in database:
+DG_SYS01/test/control01.ctl
+DG_DATA01/test/control02.ctl
Change database ID and database name test to NEW? (Y/[N]) => Y
Proceeding with operation
Changing database ID from 3952648861 to 388598347
Changing database name from test to NEW
Control File +DG_SYS01/test/control01.ctl - modified
Control File +DG_DATA01/test/control02.ctl - modified
Datafile +DG_SYS01/test/datafile/system.261.75517911 - dbid changed, wrote new name
Datafile +DG_SYS01/test/datafile/sysaux.262.75517911 - dbid changed, wrote new name
Datafile +DG_DATA01/test/datafile/undotbs1.258.75518469 - dbid changed, wrote new name
Datafile +DG_DATA01/test/datafile/undotbs2.260.75518577 - dbid changed, wrote new name
Datafile +DG_DATA01/test/datafile/users.257.75518565 - dbid changed, wrote new name
Control File +DG_SYS01/test/control01.ctl - dbid changed, wrote new name
Control File +DG_DATA01/test/control02.ctl - dbid changed, wrote new name
Instance shut down
Database name changed to NEW.
Modify parameter file and generate a new password file before restarting.
Database ID for database NEW changed to 388598347.
All previous backups and archived redo logs for this database are unusable.
Database has been shutdown, open database with RESETLOGS option.
Succesfully changed database name and ID.
DBNEWID - Completed succesfully.
4.) Copy the old inittest1.ora as initnew1.ora and update the db_name and instance_name parameters to reflect the new name.
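As a sketch, the relevant pfile edits and the password-file regeneration would look something like this (file names follow the convention in this post; the sys password is a placeholder, per the NID output's reminder to generate a new password file):

```
# initnew1.ora -- parameters to change (illustrative)
db_name='NEW'
instance_name='NEW1'

# regenerate the password file for the renamed DB
orapwd file=$ORACLE_HOME/dbs/orapwNEW1 password=<sys_password>
```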
Then startup mount
SQL> startup mount pfile='/home/oracle/software/dbcreate/test/inittest1.ora'
ORACLE instance started.
Total System Global Area 1.2527E+10 bytes
Fixed Size 2238104 bytes
Variable Size 8120174952 bytes
Database Buffers 4294967296 bytes
Redo Buffers 109346816 bytes
Database mounted.
5.) Open the database with RESETLOGS.
SQL> ALTER DATABASE OPEN RESETLOGS;
Database altered.
SQL> select open_mode from v$database;
OPEN_MODE
--------------------
READ WRITE
6.) Create spfile
SQL> create spfile='+DG_SYS01/NEW/spfileNEW.ora' from pfile='/home/oracle/software/dbcreate/test/inittest1.ora';
File created.
NOTE: For a single-node DB, the steps are over; step 7 is for RAC only.
7.) Add DB to the cluster
[oracle@testing1]~/software/dbcreate/test% srvctl add database -d NEW -o /home/oracle/product/11.2 -r primary -s OPEN -p +DG_SYS01/NEW/spfileNEW.ora
[oracle@testing1]~/software/dbcreate/test% srvctl add instance -d NEW -i NEW1 -n testing1
[oracle@testing1]~/software/dbcreate/test% srvctl add instance -d NEW -i NEW2 -n testing2
[oracle@testing1]~/software/dbcreate/test% srvctl config database -d NEW -a
Database unique name: NEW
Database name:
Oracle home: /home/oracle/product/11.2
Oracle user: oracle
Spfile: +DG_SYS01/NEW/spfileNEW.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: NEW
Database instances: NEW1,NEW2
Disk Groups:
Mount point paths:
Services:
Type: RAC
Database is enabled
Database is administrator managed
[oracle@testing1]~/software/dbcreate/test% vi inittest1.ora
[oracle@testing1]~/software/dbcreate/test% srvctl start database -d NEW
[oracle@testing1]~/software/dbcreate/test% srvctl status database -d NEW
Instance NEW1 is running on node testing1
Instance NEW2 is running on node testing2
INST_ID INSTANCE_NAM HOST_NAME STATUS STARTED VERSION
-------- ------------------------- ----------------------------------- ----------
1 new1 testing1 OPEN 20 Jul 18:25:20 11.2.0.2.0
2 new2 testing2 OPEN 20 Jul 18:25:22 11.2.0.2.0
Posted by Apun Hiran at 7:07 AM 6 Comments
Labels: change, database, dbnewid, nid, oracle, RAC, rename, rename database
Thursday, May 19, 2011
IMPDP Stuck with wait event - "wait for unread message on broadcast channel"
I was trying to run a data import using impdp, and the job was stuck on the wait event "wait for unread message on broadcast channel".
As per Oracle this is an idle wait event, which means Oracle is waiting for something to happen before the session can proceed.
I googled and checked Metalink but couldn't really find a solution, and my job stayed stuck indefinitely.
From one of the online forums I figured out that an old defunct job in the DB can cause new impdp sessions to get stuck.
So I queried dba_datapump_jobs and found that I had an old job in the "NOT RUNNING" state:
SQL> select * from dba_datapump_jobs;
OWNER_NAME JOB_NAME OPERATION JOB_MODE STATE DEGREE ATTACHED_SESSIONS DATAPUMP_SESSIONS
------------ ------------------------------ ------------ ------------ ------------------- ------------ ----------------- -----------------
SYSTEM NEW1 IMPORT TABLE EXECUTING 4 1 3
SYSTEM SYS_IMPORT_FULL_01 IMPORT FULL NOT RUNNING 0 0 0
Then I checked for the underlying master table:
SQL> SELECT a.object_id, a.object_type, a.owner, a.object_name, a.status
FROM dba_objects a, dba_datapump_jobs j
WHERE a.owner=j.owner_name AND a.object_name=j.job_name
and j.job_name='SYS_IMPORT_FULL_01';
OBJECT_ID OBJECT_TYPE OWNER OBJECT_NAME STATUS
------------ ------------------- -------------------- ---------------------- -------
984119 TABLE SYSTEM SYS_IMPORT_FULL_01 VALID
15:02:47 SQL> select owner,object_name,subobject_name, object_type,last_ddl_time from dba_objects where object_id=984119
/
OWNER OBJECT_NAME SUBOBJECT_NAME OBJECT_TYPE LAST_DDL_TIME
-------------------- ---------------------------------------- ------------------------------ ------------------- -------------------
SYSTEM SYS_IMPORT_FULL_01 TABLE 13-08-2010 19:59:50
Dropped the datapump job table:
15:02:59 SQL> drop table SYSTEM.SYS_IMPORT_FULL_01;
Table dropped.
And then my impdp job "NEW1" started progressing:
15:04:03 SQL> select * from dba_datapump_jobs;
OWNER_NAME JOB_NAME OPERATION JOB_MODE STATE DEGREE ATTACHED_SESSIONS DATAPUMP_SESSIONS
------------ ------------------------------ ------------ ------------ ------------------- ------------ ----------------- -----------------
SYSTEM NEW1 IMPORT TABLE EXECUTING 4 1 6
NOTE: Make sure the old job really is NOT RUNNING before dropping its master table.
Alternatively, if it's a known job, you can attach to it:
impdp username/password attach=JOB_NAME
and then issue KILL_JOB at the impdp prompt. This should also clear the job's status from dba_datapump_jobs.
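The interactive session would look roughly like this (job name taken from the example above; the prompt output is illustrative, not captured from my session):

```
impdp system/password attach=SYS_IMPORT_FULL_01

Import> KILL_JOB
Are you sure you wish to stop this job ([yes]/no): yes
```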
Posted by Apun Hiran at 5:13 AM 2 Comments
Thursday, January 27, 2011
Drop Materialized View takes a long time
Recently I had to drop a couple of large materialized views.
Dropping them was taking a long time, as the drop touches data in both the source and destination DBs: at the source it purges the mview log, and at the destination it drops the mview itself.
To speed up the process, I first truncated the mview table at the destination and the mview log table at the source.
At destination (mview site):
truncate table mview_to_drop;
At source (mview log site):
select log_owner,master,log_table from dba_mview_logs where master='MVIEW_TO_DROP';
LOG_OWNER MASTER LOG_TABLE
------------ ------------------------------ ------------------------------
SCOTT MVIEW_TO_DROP MLOG$_MVIEW_TO_DROP
truncate table SCOTT.MLOG$_MVIEW_TO_DROP;
Now back at destination site:
drop materialized view SCOTT.MVIEW_TO_DROP;
Materialized view dropped.
This is the fastest way I could find; please let me know if anyone has other ideas.
Posted by Apun Hiran at 12:47 AM 4 Comments
Labels: drop mview, materialized view, oracle
Friday, January 14, 2011
How to Setup Resource Manager to Control DB Resources
Step By Step Setup:
Step 1: Create a work area for creation/modification of resource manager objects.
SQL> exec dbms_resource_manager.create_pending_area();
Step 2: Create Resource Manager PLAN
SQL> exec dbms_resource_manager.create_plan(plan => 'PLAN_NAME', comment => 'Comment about the plan');
e.g.
SQL> exec dbms_resource_manager.create_plan( plan => 'ETL_PLAN', comment => 'PLAN for ETL processes');
Step 3: Create a consumer group to which the plan directives will be assigned
SQL> exec dbms_resource_manager.create_consumer_group ( consumer_group => 'CONSUMER_GROUP_NAME' , comment => 'Comment about the consumer group');
e.g.
SQL> exec dbms_resource_manager.create_consumer_group ( consumer_group => 'ETL_GROUP' , comment =>'Check Max Degree of parallelism for ETL process');
Step 4: Assign the consumer group to the resource plan and define the resource limits
SQL> exec dbms_resource_manager.create_plan_directive( plan => 'ETL_PLAN',
group_or_subplan => 'ETL_GROUP',
comment => '',
cpu_p1 => NULL,
cpu_p2 => NULL,
cpu_p3 => NULL,
cpu_p4 => NULL,
cpu_p5 => NULL,
cpu_p6 => NULL,
cpu_p7 => NULL,
cpu_p8 => NULL,
parallel_degree_limit_p1 => 1,
active_sess_pool_p1 => NULL,
queueing_p1 => NULL,
switch_group => NULL,
switch_time => NULL,
switch_estimate => false,
max_est_exec_time => NULL,
undo_pool => NULL,
max_idle_time => NULL,
max_idle_blocker_time => NULL,
switch_time_in_call => NULL);
Examples:
1.) Limit max parallelism
SQL> exec dbms_resource_manager.create_plan_directive ( plan => 'ETL_PLAN',
group_or_subplan =>'ETL_GROUP',
comment => 'Limit CPU resource',
parallel_degree_limit_p1 => 4 );
2.) Disconnect Idle Users after a Predetermined time
SQL> exec dbms_resource_manager.create_plan_directive(plan => 'ETL_PLAN',
group_or_subplan => 'ETL_GROUP',
comment => 'Limit idle time',
max_idle_time => 300);
3.) Control CPU Resources
SQL> exec dbms_resource_manager.create_plan_directive ( plan => 'ETL_PLAN',
group_or_subplan =>'ETL_GROUP',
comment => 'Limit CPU resource',
cpu_p1 => 80 );
4.) Limit max number of concurrently active sessions
SQL> exec DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE ( plan => 'ETL_PLAN',
group_or_subplan =>'ETL_GROUP',
comment => 'Max 10 Active Sessions with 20 in queue, rest to timeout',
active_sess_pool_p1=>10,
queueing_p1=>20);
Step 5: Validate the PLAN settings.
SQL> exec dbms_resource_manager.validate_pending_area();
At this point you will encounter the following error:
ERROR at line 1:
ORA-29382: validation of pending area failed
ORA-29377: consumer group OTHER_GROUPS is not part of top-plan ETL_PLAN
ORA-06512: at "SYS.DBMS_RMIN", line 437
ORA-06512: at "SYS.DBMS_RESOURCE_MANAGER", line 798
ORA-06512: at line 2
Basically, for each plan we must also define plan directives for the special OTHER_GROUPS consumer group, i.e., set limits for all sessions that are not bound to the ETL_GROUP consumer group.
SQL> exec dbms_resource_manager.create_plan_directive ( plan => 'ETL_PLAN',
group_or_subplan =>'OTHER_GROUPS',
comment => 'Limit CPU resource',
parallel_degree_limit_p1 => 2);
SQL> exec dbms_resource_manager.validate_pending_area();
Step 6: Submit the changes/pending area that was created
SQL> exec dbms_resource_manager.submit_pending_area();
Step 7: Switch the consumer group of the schema/user
SQL> exec dbms_resource_manager.create_pending_area();
SQL> exec dbms_resource_manager_privs.grant_switch_consumer_group(grantee_name=>'SCOTT',consumer_group=>'ETL_GROUP',grant_option=>FALSE);
Make it the default/initial consumer group for the user.
SQL> exec dbms_resource_manager.set_initial_consumer_group(user => 'SCOTT',consumer_group =>'ETL_GROUP');
SQL> exec dbms_resource_manager.submit_pending_area();
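If you need to move a session that is already connected (rather than waiting for the user to reconnect), there is also switch_consumer_group_for_sess; the sid and serial# below are hypothetical values you would look up in v$session:

```sql
exec dbms_resource_manager.switch_consumer_group_for_sess( session_id => 123, session_serial => 4567, consumer_group => 'ETL_GROUP');
```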
Others:
Modify an existing plan directive:
exec dbms_resource_manager.UPDATE_PLAN_DIRECTIVE( plan => 'ETL_PLAN', group_or_subplan => 'ETL_GROUP', new_comment => 'ETL_GROUP Change the DOP to 8', new_parallel_degree_limit_p1 => 8);
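Note that modifications like this also have to go through a pending area before they take effect; a minimal sequence (using the same plan/group names as above) would be:

```sql
exec dbms_resource_manager.create_pending_area();
exec dbms_resource_manager.UPDATE_PLAN_DIRECTIVE( plan => 'ETL_PLAN', group_or_subplan => 'ETL_GROUP', new_parallel_degree_limit_p1 => 8);
exec dbms_resource_manager.validate_pending_area();
exec dbms_resource_manager.submit_pending_area();
```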
Posted by Apun Hiran at 4:41 AM 0 Comments
Labels: cpu, create_consumer_group, create_plan, create_plan_directive, dbms_resource_manager, dop, oracle, parallel, resouce manager, setup