Oracle DBA – A lifelong learning experience

Using DBMS_PARALLEL_EXECUTE

Posted by John Hallas on August 13, 2015

DBMS_PARALLEL_EXECUTE

We have a number of updates to partitioned tables that are run from within PL/SQL blocks which include either an execute immediate 'alter session enable parallel dml' or an execute immediate 'alter session force parallel dml' in the same block. The alter session appears to have no effect, as we end up with non-parallel plans. When the same statements are run outside PL/SQL, in either SQL*Plus or SQL Developer sessions, the updates are given a parallel plan. We have a simple test pack that shows this anomaly occurs at 11.1.0.7 (the version of the affected database) and at 12.1.0.2 (demonstrating it is not specific to that version).
It appears that the optimizer is not aware that the alter session has been performed. We also tried issuing the alter session statement outside the PL/SQL block, i.e. in the native SQL*Plus environment, before running the block; that also did not result in a parallel plan.

Let me show a test case.

First we tried an anonymous PL/SQL block with an execute immediate to force parallel DML for the session:

create table target_table
(
  c1 number(6),
  c2 varchar2(1024)
)
partition by range (c1)
(
  partition p1 values less than (2),
  partition p2 values less than (3),
  partition p3 values less than (100)
);

create unique index target_table_pk on target_table (c1, c2) local;

alter table target_table add constraint target_table_pk primary key (c1, c2) using index;

create table source_table
(
  c1 number(6),
  c2 varchar2(1024)
);

insert /*+ append */ into source_table (select distinct 2, owner||object_type||object_name from dba_objects);

commit;

select count(*) from source_table;

begin
  execute immediate 'alter session force parallel dml';
  insert /*+ append parallel */ into target_table
  select * from source_table;
end;
/

 

This load generates a serial plan

-----------------------------------------------------------------------------------
| Id | Operation         | Name         | Rows | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | INSERT STATEMENT   |             |       |       |   143 (100)|         |
|   1 | LOAD AS SELECT   |             |       |       |           |         |
|   2 |   TABLE ACCESS FULL| SOURCE_TABLE | 80198 |   40M|   143   (1)| 00:00:02 |
-----------------------------------------------------------------------------------

To see what the plan looks like when parallel DML is used from a SQL*Plus session:

 

truncate table target_table;

alter session force parallel dml;

INSERT /*+append sqlplus*/ INTO TARGET_TABLE SELECT * FROM SOURCE_TABLE;

-------------------------------------------------------------------------------------------------------------------------

| Id | Operation                   | Name         | Rows | Bytes | Cost (%CPU)| Time     |   TQ |IN-OUT| PQ Distrib |

-------------------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT           |             |       |       |     5 (100)|         |       |     |           |
|   1 | PX COORDINATOR             |             |       |       |           |         |       |     |           |
|   2 |   PX SEND QC (RANDOM)       | :TQ10002     | 80198 |   40M|     5   (0)| 00:00:01 | Q1,02 | P->S | QC (RAND) |
|   3 |   INDEX MAINTENANCE       | TARGET_TABLE |       |       |           |         | Q1,02 | PCWP |           |
|   4 |     PX RECEIVE            |             | 80198 |   40M|     5   (0)| 00:00:01 | Q1,02 | PCWP |           |
|   5 |     PX SEND RANGE         | :TQ10001     | 80198 |   40M|     5   (0)| 00:00:01 | Q1,01 | P->P | RANGE     |
|   6 |       LOAD AS SELECT       |              |       |       |           |         | Q1,01 | PCWP |           |
|   7 |       PX RECEIVE           |             | 80198 |   40M|     5   (0)| 00:00:01 | Q1,01 | PCWP |           |
|   8 |         PX SEND RANDOM LOCAL| :TQ10000     | 80198 |   40M|     5   (0)| 00:00:01 | Q1,00 | P->P | RANDOM LOCA|
|   9 |         PX BLOCK ITERATOR |             | 80198 |   40M|     5   (0)| 00:00:01 | Q1,00 | PCWC |           |
|* 10 |           TABLE ACCESS FULL | SOURCE_TABLE | 80198 |   40M|     5   (0)| 00:00:01 | Q1,00 | PCWP |           |

 

truncate table target_table;

 

 

Oracle offered us two pieces of advice:

  • Use a PARALLEL_ENABLE clause through a function
  • Use the DBMS_PARALLEL_EXECUTE package to achieve parallelism (only available from 11.2 onwards)

They also referred us to BUG 12734028 – PDML ONLY FROM PL/SQL DOES NOT WORK CORRECTLY
How To Enable Parallel Query For A Function? ( Doc ID 1093773.1 )

We did try the first option, the PARALLEL_ENABLE function, but that failed and we did not pursue it further, concentrating instead on the DBMS_PARALLEL_EXECUTE package.
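For completeness, this is roughly what that first option looks like (a sketch along the lines of Doc ID 1093773.1, not the code we tried; the type and function names are illustrative): the select side is wrapped in a pipelined, parallel-enabled table function so the query can be parallelised even when driven from PL/SQL.

create or replace type source_row_t as object (c1 number(6), c2 varchar2(1024));
/
create or replace type source_tab_t as table of source_row_t;
/
create or replace function pipe_source (p_cur sys_refcursor)
  return source_tab_t
  pipelined parallel_enable (partition p_cur by any)
as
  l_c1 number(6);
  l_c2 varchar2(1024);
begin
  loop
    fetch p_cur into l_c1, l_c2;
    exit when p_cur%notfound;
    pipe row (source_row_t(l_c1, l_c2));
  end loop;
  close p_cur;
  return;
end;
/

-- the select side is then driven through the function:
insert /*+ append parallel */ into target_table
select * from table(pipe_source(cursor(select c1, c2 from source_table)));
commit;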

The rest of this post covers how our testing went and the results we achieved.

Starting with the same source_table contents, and an empty target table, a task needs to be created:

 

BEGIN
  DBMS_PARALLEL_EXECUTE.create_task (task_name => DBMS_PARALLEL_EXECUTE.generate_task_name);
END;
/

The task name could also be set manually, but that approach does not lend itself to being proceduralised, as the name needs to be unique.
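In practice the generated name can simply be captured into a variable at creation time, which avoids having to look it up afterwards; a minimal sketch:

DECLARE
  l_task_name VARCHAR2(128);
BEGIN
  l_task_name := DBMS_PARALLEL_EXECUTE.generate_task_name;
  DBMS_PARALLEL_EXECUTE.create_task(task_name => l_task_name);
  DBMS_OUTPUT.put_line('Created task ' || l_task_name);
END;
/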

To determine the identifier for the task:

 

SELECT task_name, chunk_type, status
FROM   user_parallel_execute_tasks
WHERE  status = 'CREATED';

TASK_NAME                                CHUNK_TYPE   STATUS
---------------------------------------- ------------ -------------------
TASK$_141                                UNDELARED    CREATED

 

Now the source_table data set must be split into chunks in order to set up discrete subsets of data that will be handled by the subordinate tasks. This demo will split the table by rowid, but it can also be split using block counts or using the values contained in a specific column in the table.

exec DBMS_PARALLEL_EXECUTE.create_chunks_by_rowid(task_name => 'TASK$_141',-
     table_owner => 'SYS',-
     table_name  => 'SOURCE_TABLE',-
     by_row      => TRUE,-
     chunk_size  => 20000)

 

 

Note that there are three procedures that can be used to create chunks:

PROCEDURE CREATE_CHUNKS_BY_NUMBER_COL
PROCEDURE CREATE_CHUNKS_BY_ROWID
PROCEDURE CREATE_CHUNKS_BY_SQL
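For illustration only (it is not part of this test, and with this data set, where c1 is always 2, it would produce a single chunk), chunking by a numeric column looks like this; the column and chunk_size are just examples:

BEGIN
  DBMS_PARALLEL_EXECUTE.create_chunks_by_number_col(
    task_name    => 'TASK$_141',
    table_owner  => 'SYS',
    table_name   => 'SOURCE_TABLE',
    table_column => 'C1',
    chunk_size   => 10);
END;
/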

 

As the chunk size is set to 20000, and the table contains just over 80000 rows, one would expect 5 chunks to be created:

col CHUNK_ID form 999
col TASK_NAME form a10
col START_ROWID form a20
col end_rowid form a20
set pages 60

select CHUNK_ID, TASK_NAME, STATUS, START_ROWID, END_ROWID
from   user_parallel_execute_chunks
where  TASK_NAME = 'TASK$_141'
order  by 1;

(We will come back to the row count per chunk a little later.)

 

CHUNK_ID TASK_NAME               STATUS               START_ROWID       END_ROWID

---------- ------------------------ -------------------- ------------------ ------------------

142 TASK$_141               UNASSIGNED           AAA2sTAABAAAbtoAAA AAA2sTAABAAAbtvCcP

143 TASK$_141               UNASSIGNED           AAA2sTAABAAAbtwAAA AAA2sTAABAAAbt3CcP

144 TASK$_141               UNASSIGNED           AAA2sTAABAAAbt4AAA AAA2sTAABAAAbt/CcP

145 TASK$_141               UNASSIGNED           AAA2sTAABAAAdSAAAA AAA2sTAABAAAdSHCcP

146 TASK$_141               UNASSIGNED           AAA2sTAABAAAdSIAAA AAA2sTAABAAAdSPCcP

147 TASK$_141               UNASSIGNED           AAA2sTAABAAAdSQAAA AAA2sTAABAAAdSXCcP

148 TASK$_141               UNASSIGNED           AAA2sTAABAAAdSYAAA AAA2sTAABAAAdSfCcP

149 TASK$_141               UNASSIGNED           AAA2sTAABAAAdSgAAA AAA2sTAABAAAdSnCcP

150 TASK$_141               UNASSIGNED           AAA2sTAABAAAdSoAAA AAA2sTAABAAAdSvCcP

151 TASK$_141               UNASSIGNED           AAA2sTAABAAAdSwAAA AAA2sTAABAAAdS3CcP

152 TASK$_141               UNASSIGNED           AAA2sTAABAAAdS4AAA AAA2sTAABAAAdS/CcP

153 TASK$_141               UNASSIGNED           AAA2sTAABAAAdTAAAA AAA2sTAABAAAdTHCcP

154 TASK$_141               UNASSIGNED          AAA2sTAABAAAdTIAAA AAA2sTAABAAAdTPCcP

155 TASK$_141               UNASSIGNED           AAA2sTAABAAAdTQAAA AAA2sTAABAAAdTXCcP

156 TASK$_141               UNASSIGNED           AAA2sTAABAAAdTYAAA AAA2sTAABAAAdTfCcP

157 TASK$_141                UNASSIGNED           AAA2sTAABAAAdTgAAA AAA2sTAABAAAdTnCcP

158 TASK$_141               UNASSIGNED           AAA2sTAABAAAdUAAAA AAA2sTAABAAAdUxCcP

159 TASK$_141               UNASSIGNED           AAA2sTAABAAAdUyAAA AAA2sTAABAAAdVjCcP

160 TASK$_141               UNASSIGNED           AAA2sTAABAAAdVkAAA AAA2sTAABAAAdV/CcP

161 TASK$_141               UNASSIGNED           AAA2sTAABAAAdWAAAA AAA2sTAABAAAdWxCcP

162 TASK$_141               UNASSIGNED           AAA2sTAABAAAdWyAAA AAA2sTAABAAAdXjCcP

163 TASK$_141               UNASSIGNED           AAA2sTAABAAAdXkAAA AAA2sTAABAAAdX/CcP

164 TASK$_141               UNASSIGNED           AAA2sTAABAAAdYAAAA AAA2sTAABAAAdYxCcP

165 TASK$_141                UNASSIGNED           AAA2sTAABAAAdYyAAA AAA2sTAABAAAdZjCcP

166 TASK$_141               UNASSIGNED           AAA2sTAABAAAdZkAAA AAA2sTAABAAAdZ/CcP

167 TASK$_141               UNASSIGNED           AAA2sTAABAAAdaAAAA AAA2sTAABAAAdaxCcP

168 TASK$_141               UNASSIGNED           AAA2sTAABAAAdayAAA AAA2sTAABAAAdbjCcP

169 TASK$_141               UNASSIGNED           AAA2sTAABAAAdbkAAA AAA2sTAABAAAdb/CcP

 

28 rows selected.

Tests were run changing the chunk_size to 40000 and still 28 chunks were created.

Looking at the details of the main task in USER_PARALLEL_EXECUTE_TASKS:

SELECT task_name, chunk_type, status
FROM   user_parallel_execute_tasks
WHERE  task_name = 'TASK$_141';

TASK_NAME                                CHUNK_TYPE   STATUS
---------------------------------------- ------------ -------------------
TASK$_141                                ROWID_RANGE  CHUNKED

It details that the task has been split into discrete ranges of data, and shows how these ranges were determined.

To execute the insert into the target table we create an anonymous PL/SQL block that declares the SQL to be run, with an extra predicate restricting it to a range of rowids. The start and end rowids for each chunk are passed in from the ranges recorded in the user_parallel_execute_chunks view.

 

DECLARE
  v_sql VARCHAR2(1024);
BEGIN
  v_sql := 'insert /*+PARALLEL APPEND */ into target_table
            select * from source_table
            where rowid between :start_id and :end_id';

  dbms_parallel_execute.run_task(task_name      => 'TASK$_141',
                                 sql_stmt       => v_sql,
                                 language_flag  => DBMS_SQL.NATIVE,
                                 parallel_level => 5);
END;
/

PL/SQL procedure successfully completed.

Checking the contents of the target table:

select count(*) from target_table;

COUNT(*)

----------

86353

Now looking at the details of the main task in USER_PARALLEL_EXECUTE_TASKS:

SELECT task_name, status
FROM   user_parallel_execute_tasks
WHERE  task_name = 'TASK$_141';

TASK_NAME                                STATUS
---------------------------------------- -------------------
TASK$_141                                FINISHED

 

Looking at the user_parallel_execute_chunks entries after completion:

select CHUNK_ID, JOB_NAME, START_TS, END_TS
from   user_parallel_execute_chunks
where  TASK_NAME = 'TASK$_141'
order  by 2,3;

CHUNK_ID JOB_NAME             START_TS                       END_TS

---------- -------------------- ------------------------------ -------------------------------

142 TASK$_7410_1         30-JUL-15 02.34.31.672484 PM   30-JUL-15 02.34.31.872794 PM

144 TASK$_7410_1         30-JUL-15 02.34.31.877091 PM   30-JUL-15 02.34.32.204513 PM

146 TASK$_7410_1         30-JUL-15 02.34.32.209950 PM   30-JUL-15 02.34.32.331349 PM

148 TASK$_7410_1         30-JUL-15 02.34.32.335192 PM   30-JUL-15 02.34.32.528391 PM

150 TASK$_7410_1         30-JUL-15 02.34.32.533488 PM   30-JUL-15 02.34.32.570243 PM

152 TASK$_7410_1         30-JUL-15 02.34.32.575450 PM   30-JUL-15 02.34.32.702353 PM

154 TASK$_7410_1         30-JUL-15 02.34.32.710860 PM   30-JUL-15 02.34.32.817684 PM

156 TASK$_7410_1         30-JUL-15 02.34.32.828963 PM   30-JUL-15 02.34.32.888834 PM

158 TASK$_7410_1         30-JUL-15 02.34.32.898458 PM   30-JUL-15 02.34.33.493985 PM

160 TASK$_7410_1         30-JUL-15 02.34.33.499254 PM   30-JUL-15 02.34.33.944356 PM

162 TASK$_7410_1         30-JUL-15 02.34.33.953509 PM   30-JUL-15 02.34.34.366352 PM

164 TASK$_7410_1         30-JUL-15 02.34.34.368668 PM   30-JUL-15 02.34.34.911471 PM

166 TASK$_7410_1         30-JUL-15 02.34.34.915205 PM   30-JUL-15 02.34.35.524515 PM

168 TASK$_7410_1         30-JUL-15 02.34.35.527515 PM   30-JUL-15 02.34.35.889198 PM

169 TASK$_7410_1         30-JUL-15 02.34.35.889872 PM   30-JUL-15 02.34.35.890412 PM

143 TASK$_7410_2         30-JUL-15 02.34.31.677235 PM   30-JUL-15 02.34.32.129181 PM

145 TASK$_7410_2         30-JUL-15 02.34.32.135013 PM   30-JUL-15 02.34.32.304761 PM

147 TASK$_7410_2         30-JUL-15 02.34.32.310140 PM   30-JUL-15 02.34.32.485545 PM

149 TASK$_7410_2         30-JUL-15 02.34.32.495971 PM   30-JUL-15 02.34.32.550955 PM

151 TASK$_7410_2         30-JUL-15 02.34.32.558335 PM   30-JUL-15 02.34.32.629274 PM

153 TASK$_7410_2         30-JUL-15 02.34.32.644917 PM   30-JUL-15 02.34.32.764337 PM

155 TASK$_7410_2         30-JUL-15 02.34.32.773029 PM   30-JUL-15 02.34.32.857794 PM

157 TASK$_7410_2         30-JUL-15 02.34.32.864875 PM   30-JUL-15 02.34.32.908799 PM

159 TASK$_7410_2         30-JUL-15 02.34.32.913982 PM   30-JUL-15 02.34.33.669704 PM

161 TASK$_7410_2         30-JUL-15 02.34.33.672077 PM   30-JUL-15 02.34.34.128170 PM

163 TASK$_7410_2         30-JUL-15 02.34.34.140102 PM   30-JUL-15 02.34.34.624627 PM

165 TASK$_7410_2         30-JUL-15 02.34.34.628145 PM   30-JUL-15 02.34.35.431037 PM

167 TASK$_7410_2         30-JUL-15 02.34.35.433282 PM   30-JUL-15 02.34.35.885741 PM
28 rows selected.

 

From these details it appears that only two jobs processed the chunks, even though a parallel_level of 5 was specified. Why is that? Not enough data to warrant more workers? Back-end resourcing?
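One place to look (a sketch of the follow-up I would do, not something run as part of this test): the workers are created as scheduler jobs named after the task, so the scheduler run history shows how many were actually started, and job_queue_processes caps how many can run concurrently.

select job_name, status, actual_start_date, run_duration
from   dba_scheduler_job_run_details
where  job_name like 'TASK$_7410%'
order  by actual_start_date;

show parameter job_queue_processes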

Looking at the query in a tkprof listing of one of the jobs' trace files, you can see that each job executes the same serial plan that was generated by the original PL/SQL; the difference is that two such jobs are running their chunks in parallel with each other.

 

insert /*+PARALLEL APPEND */ into target_table
  select * from source_table
  where rowid between :start_id and :end_id


call     count       cpu   elapsed       disk     query   current       rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse       1     0.00       0.00         0         0         0           0
Execute     6     0.26       3.03         4       599       8128       17849
Fetch        0     0.00       0.00         0         0         0           0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total       7     0.26       3.03         4       599       8128       17849

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: SYS   (recursive depth: 2)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
         0         0         0 LOAD AS SELECT (cr=59 pr=0 pw=8 time=92689 us)
     1240       1240       1240   FILTER (cr=13 pr=0 pw=0 time=501 us)
     1240       1240       1240   TABLE ACCESS BY ROWID RANGE SOURCE_TABLE (cr=13 pr=0 pw=0 time=368 us cost=42 size=122892 card=228)

 

I think I could do more testing in this area to try to answer the following questions:

Why does the parallel_level parameter not appear to be honoured?

Does it depend on data volumes or back-end resources?
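Another test worth adding is the user-managed alternative to run_task, where the chunks are fetched and processed in your own loop; this gives per-chunk control over commits and error handling. A minimal sketch, assuming a freshly chunked task:

DECLARE
  l_chunk_id    NUMBER;
  l_start_rowid ROWID;
  l_end_rowid   ROWID;
  l_any_rows    BOOLEAN;
BEGIN
  LOOP
    -- pick up the next unassigned chunk, if there is one
    DBMS_PARALLEL_EXECUTE.get_rowid_chunk(task_name   => 'TASK$_141',
                                          chunk_id    => l_chunk_id,
                                          start_rowid => l_start_rowid,
                                          end_rowid   => l_end_rowid,
                                          any_rows    => l_any_rows);
    EXIT WHEN NOT l_any_rows;
    BEGIN
      INSERT /*+ APPEND */ INTO target_table
      SELECT * FROM source_table
      WHERE  rowid BETWEEN l_start_rowid AND l_end_rowid;
      DBMS_PARALLEL_EXECUTE.set_chunk_status('TASK$_141', l_chunk_id,
                                             DBMS_PARALLEL_EXECUTE.PROCESSED);
      COMMIT;
    EXCEPTION
      WHEN OTHERS THEN
        ROLLBACK;
        DBMS_PARALLEL_EXECUTE.set_chunk_status('TASK$_141', l_chunk_id,
                                               DBMS_PARALLEL_EXECUTE.PROCESSED_WITH_ERROR,
                                               SQLCODE, SQLERRM);
        COMMIT;
    END;
  END LOOP;
END;
/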

 

What I was interested in seeing was the breakdown of chunks by row count – was it an even split?

(The check below was run against a separate execution of the same process, hence the different task name.)

select a.chunk_id, count(*)
from   user_parallel_execute_chunks a,
       source_table b
where  a.TASK_NAME = 'TASK$_107'
and    b.rowid between a.START_ROWID and a.END_ROWID
group  by a.chunk_id
order  by 1;

  CHUNK_ID   COUNT(*)
---------- ----------
       108       1407
       109       1233
       110       1217
       111       1193
       112       1192
       113       1274
       114       1213
       115       1589
       116       1191
       117       1226
       118       1190
       119       1201
       120       1259
       121       1273
       122       1528
       123       1176
       124      15874
       125       4316
       126      15933
       127       4283
       128       9055

Reasonably even, with most chunks selecting around 1200 rows each, but two chunks had around 15K rows each, so not that well chunked.

Repeating the process, but this time chunking by blocks, produced the following results:

 

 

select task_name, chunk_id, dbms_rowid.rowid_block_number(start_rowid) Start_Id,
       dbms_rowid.rowid_block_number(end_rowid) End_Id,
       dbms_rowid.rowid_block_number(end_rowid) - dbms_rowid.rowid_block_number(start_rowid) num_blocks
from   user_parallel_execute_chunks order by task_name, chunk_id;

 

TASK_NAME             CHUNK_ID   START_ID     END_ID NUM_BLOCKS

-------------------- ---------- ---------- ---------- ----------

 

TASK$_133                   134     70096     70103         7

TASK$_133                   135     70104     70111         7

TASK$_133                   136     70112     70119         7

TASK$_133                   137     70120     70127         7

TASK$_133                   138     70128     70135         7

TASK$_133                   139     70136     70143         7

TASK$_133                   140     73344     73351        7

TASK$_133                   141     73352     73359         7

TASK$_133                   142     73360     73367         7

TASK$_133                   143     73368     73375         7

TASK$_133                   144     73376     73383        7

TASK$_133                   145     73384     73391         7

TASK$_133                   146     73392     73399         7

TASK$_133                   147     73400     73407         7

TASK$_133                   148     73408     73415         7

TASK$_133                   149     73416     73423         7

TASK$_133                   150     73472     73521         49

TASK$_133                   151     73522     73571         49

TASK$_133                   152     73572     73599         27

TASK$_133                   153     73600     73649         49

TASK$_133                   154     73650     73699         49

TASK$_133                   155     73700     73727         27

TASK$_133                   156     73728      73777         49

TASK$_133                   157     73778     73827         49

TASK$_133                   158     73828     73855         27

 

So again not very well split. Perhaps there needs to be more data to make it worthwhile.

 

Error handling:  

 

Finally we can have a look at how it handles errors. To save the reader time the simple answer is ‘not very well’.

Let’s force a duplicate key error to show how the package handles errors. The original 80000+ rows are left in the target_table, and the same set of data will be reinserted – which will cause a unique constraint violation for the PK:

 

BEGIN
  DBMS_PARALLEL_EXECUTE.create_task (task_name => DBMS_PARALLEL_EXECUTE.generate_task_name);
END;
/

 

PL/SQL procedure successfully completed.

 

 

SELECT task_name, chunk_type, status
FROM   user_parallel_execute_tasks
WHERE  status = 'CREATED';

TASK_NAME                                CHUNK_TYPE   STATUS
---------------------------------------- ------------ -------------------
TASK$_170                                UNDELARED    CREATED

 

exec DBMS_PARALLEL_EXECUTE.create_chunks_by_rowid(task_name => 'TASK$_170',-
     table_owner => 'SYS',-
     table_name  => 'SOURCE_TABLE',-
     by_row      => TRUE,-
     chunk_size  => 40000)

DECLARE
  v_sql VARCHAR2(1024);
BEGIN
  v_sql := 'insert /*+PARALLEL APPEND */ into target_table
            select * from source_table
            where rowid between :start_id and :end_id';

  dbms_parallel_execute.run_task(task_name      => 'TASK$_170',
                                 sql_stmt       => v_sql,
                                 language_flag  => DBMS_SQL.NATIVE,
                                 parallel_level => 10);
END;
/

 

PL/SQL procedure successfully completed.

 

select count(*) from target_table;

COUNT(*)
----------
86353

No error is thrown by the run_task call, but the number of rows has not increased. Checking the status of the task shows that there was indeed an error:

 

select task_name, status from user_parallel_execute_tasks where task_name = 'TASK$_170';

 

TASK_NAME                                STATUS
---------------------------------------- -------------------
TASK$_170                                FINISHED_WITH_ERROR

The error details are given in the user_parallel_execute_chunks view:

 

select TASK_NAME, ERROR_MESSAGE, STATUS
from   user_parallel_execute_chunks
where  TASK_NAME = 'TASK$_170' order by 2;

 

TASK_NAME       ERROR_MESSAGE                                               STATUS

--------------- ------------------------------------------------------------ ----------------------------------------

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170      ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170                                                                   PROCESSED

TASK$_170                                                                   PROCESSED

 

28 rows selected.
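Although we did not take it any further here, the package does give you a retry and clean-up route: once the underlying problem has been fixed (the duplicate data in this case), resume_task re-runs just the chunks that ended in error, and drop_task removes the task and its chunk metadata. A sketch against the failed task above:

BEGIN
  DBMS_PARALLEL_EXECUTE.resume_task(task_name => 'TASK$_170');
END;
/

BEGIN
  DBMS_PARALLEL_EXECUTE.drop_task(task_name => 'TASK$_170');
END;
/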


 

For reference, the block-based chunking shown earlier (TASK$_133) was created with by_row set to FALSE, so that chunk_size is interpreted as a number of blocks:

exec DBMS_PARALLEL_EXECUTE.create_chunks_by_rowid(task_name => 'TASK$_133',-
     table_owner => 'SYS',-
     table_name  => 'SOURCE_TABLE',-
     by_row      => FALSE,-
     chunk_size  => 20);

Posted in 11g new features, Oracle | 5 Comments »

Migrating tablespaces across endian platforms

Posted by John Hallas on August 13, 2015


Posted in 11g new features, ASM, Oracle | Leave a Comment »

Creating standby database inc DG Broker and 12c changes

Posted by John Hallas on July 29, 2015

I thought I would refresh my knowledge of creating a standby database, and at the same time include some Data Guard Broker configuration, which also brings in some of the changes that came along with 12c.

Overview

Database Name QUICKIE host server 1 ASM disk

Database Name STAN host server 2 ASM disk

Create a standby database STAN using ACTIVE DUPLICATE from the source database QUICKIE

 

QUICKIE =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = server1)(PORT = 1522))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = QUICKIE)
)
)

STAN =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = server2)(PORT = 1522))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = STAN)
)
)

server2 – listener.ora – note I have selected 1524 as that port is not currently in use and I do not want to interfere with any existing databases

 

LISTENERCLONE =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = server2)(PORT = 1524))
)
)

SID_LIST_LISTENERCLONE =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = STAN)
(ORACLE_HOME = /app/oracle/product/12.1.0.2/dbhome_1)
(SID_NAME = STAN)
)
)

SUBSCRIBE_FOR_NODE_DOWN_EVENT_LISTENER=OFF

server2 – tnsnames.ora

STAN =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = server2)(PORT = 1524))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = STAN)
)
)

LISTENERCLONE =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = server2)(PORT = 1524))
)
(CONNECT_DATA =
(SERVICE_NAME = STAN)
)
)

 

  1. Start clone listener on server2
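The rest of the post walks through the duplication itself; purely as a hedged sketch of that step (credentials and options are assumptions, not taken from the post), the RMAN active duplicate for the standby looks roughly like:

rman target sys/password@QUICKIE auxiliary sys/password@STAN

DUPLICATE TARGET DATABASE
  FOR STANDBY
  FROM ACTIVE DATABASE
  SPFILE
    SET db_unique_name='STAN'
  NOFILENAMECHECK;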


Posted in 12c new features, Oracle | 1 Comment »

Startup PDB databases automatically when a container is started – good idea?

Posted by John Hallas on July 27, 2015

I posted a note to the Oracle-L mailing list about pluggable databases and why they are not opened automatically by default when the container database is opened. The post is below.

I am trying to get my head around the thing about how pluggable databases react after the container database is restarted.

Pre 12.1.0.2 it was necessary to put a startup trigger in place to run an 'alter pluggable database all open;' command to move them from mounted to open.

Now 12.1.0.2 allows you to save a state in advance using ‘alter pluggable database xxx save state’ which does seem a step forward

However, why would the default not be to start all the pluggable databases (or services, as they are seen) rather than leave them in a mounted state? Obviously Oracle have thought about this and changed the trigger method, maybe due to customer feedback, but I wonder why they have not gone the whole hog and started the services automatically.

I would much prefer to have the default to be up and running rather than relying on the fact that I have saved the state previously
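For reference, the two approaches being contrasted look like this (a minimal sketch; the trigger name is illustrative):

-- Pre 12.1.0.2: a startup trigger in the CDB root
CREATE OR REPLACE TRIGGER open_all_pdbs
  AFTER STARTUP ON DATABASE
BEGIN
  EXECUTE IMMEDIATE 'alter pluggable database all open';
END;
/

-- 12.1.0.2 onwards: open the PDBs once and save that state for future restarts
ALTER PLUGGABLE DATABASE ALL OPEN;
ALTER PLUGGABLE DATABASE ALL SAVE STATE;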

I did get some interesting and very helpful responses. Jared Still made a couple of good points. The first being that the opening time for all the pluggable databases might be very long if you had 300 of them. That blew my mind a little and I must admit that I had considered scenarios where you might have half a dozen maximum, not into the hundreds.

I did a little test on a virtual 2-CPU, 16GB server, already loaded with 6 running non-container databases. I created 11 pluggables (I have created a new word there) from an existing one – each one took less than 2 minutes.

Posted in 12c new features | 1 Comment »

Issues around recreating a standby database in 12c

Posted by John Hallas on July 22, 2015

When you create a database in 12c it now creates a resource in HAS/CRS, which isn't a problem.

However, when you come to recreate a standby database (probably because it has such a big lag that it is quicker to recreate it than to recover the log files), you will see the following error message:

 

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 07/22/2015 15:45:57
RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred in stored script Memory Script
RMAN-03009: failure of sql command on clone_default channel at 07/22/2015 15:45:57
RMAN-11003: failure during parse/execution of SQL statement: alter system set  db_unique_name =  'STAN' comment= '' scope=spfile
ORA-32017: failure in updating SPFILE
ORA-65500: could not modify DB_UNIQUE_NAME, resource exists

It did stump me for a while; I thought it was down to files remaining in the ASM DATA disk group from the previous incarnation, but removing them did not solve it.

The word 'resource' gave me a clue, and looking at the resources using srvctl I could see that the database STAN already existed.

srvctl status database -d STAN

Database is not running.

So the fix was obvious – and indeed the error message was accurate.

srvctl remove database -d STAN

Remove the database STAN? (y/[n]) Y

Posted in 12c new features, Oracle | Leave a Comment »

Large audit trail table causes high db activity – especially when using OEM

Posted by John Hallas on June 24, 2015

On various, apparently unrelated databases we have noticed high activity that seems to be associated with the query below. The quieter the database, the more the query stands out.

 

SELECT TO_CHAR(current_timestamp AT TIME ZONE 'GMT', 'YYYY-MM-DD HH24:MI:SS TZD') AS curr_timestamp, COUNT(username) AS failed_count, TO_CHAR(MIN(timestamp), 'yyyy-mm-dd hh24:mi:ss') AS first_occur_time, TO_CHAR(MAX(timestamp), 'yyyy-mm-dd hh24:mi:ss') AS last_occur_time
FROM sys.dba_audit_session
WHERE returncode != 0 AND timestamp >= current_timestamp - TO_DSINTERVAL('0 0:30:00')
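The full post covers the investigation; purely as a hedged aside (not from the original article), the usual way to keep SYS.AUD$ at a manageable size is the DBMS_AUDIT_MGMT purge API, along the lines of the sketch below. The retention period and trail type are assumptions and would need agreeing for your site.

BEGIN
  -- one-off initialisation of the cleanup infrastructure
  IF NOT DBMS_AUDIT_MGMT.is_cleanup_initialized(DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD) THEN
    DBMS_AUDIT_MGMT.init_cleanup(
      audit_trail_type         => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
      default_cleanup_interval => 24);
  END IF;

  -- keep (for example) the last 90 days and purge everything older
  DBMS_AUDIT_MGMT.set_last_archive_timestamp(
    audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    last_archive_time => SYSTIMESTAMP - INTERVAL '90' DAY);

  DBMS_AUDIT_MGMT.clean_audit_trail(
    audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    use_last_arch_timestamp => TRUE);
END;
/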


Posted in Oracle | Leave a Comment »

Cloud agent set to DEBUG causing out of memory errors

Posted by John Hallas on June 23, 2015

The following technical detail was put together by a colleague, John Evans. I have taken it, with his permission, and wrapped some more detail around it, as it seemed to be of real value to anybody who might have upgraded an agent to 12.1.0.4.

Following an upgrade of the EM agent from 12.1.0.2 (or 12.1.0.3) to 12.1.0.4, after about 90 days of usage we saw a number of agents failing with out-of-memory errors.

We traced this down to a line in the agent properties file where the trace level parameter Logger.sdklog.level was set to DEBUG rather than INFO.

Posted in Grid control and agents, Oracle | Leave a Comment »

One million blog views reached

Posted by John Hallas on June 22, 2015

This morning I will pass the 1 million mark for hits on this blog. My first post was written in 2008 and I remember being quite pleased with myself when I reached 5000 hits; I never dreamt of getting to 1 million.

The post with the most number of hits is https://jhdba.wordpress.com/2009/05/19/purging-statistics-from-the-sysaux-tablespace/

One that I still get comments on now, saying how well it explains what the SQL92_SECURITY parameter actually does, is

https://jhdba.wordpress.com/2009/11/23/what-does-the-sql92_security-parameter-actually-do/

I think my current favourite is a piece that I wrote with no preparation in an hour about what I think a good DBA Manager should be doing. It does not mean I am such a person but it does demonstrate what I think is important about that role

https://jhdba.wordpress.com/2013/10/10/the-10-commandments-of-being-a-good-dba-manager/

I still use many of the posts myself because the main reason for starting the blog was to capture issues and problems I had come across. One script I find very useful is

https://jhdba.wordpress.com/2011/02/28/scripts-to-resize-standby-redolog-files/

and https://jhdba.wordpress.com/2012/03/06/the-mother-of-all-asm-scripts/ is a standard script we use a lot at my site and is invaluable

Thanks to everyone for reading and being followers. I will continue to write new posts as long as I keep doing technical hands-on work, although I am spending more time managing than doing these days. I still really enjoy being an Oracle DBA and I have never regretted moving back into that area after spending several years as a technical project manager (which I enjoyed a lot, and I still do some PM'ing now). I was the saddo who was spotted on a beach in Majorca on a family holiday reading Jonathan Lewis's 8i book.

Posted in Oracle | 1 Comment »

Discrepancies in v$parameter default values in 12c

Posted by John Hallas on April 10, 2015

In my last blog about security parameters I mentioned that I had found some oddities in the default values for parameters in 12.1.0.2. This is a more in-depth analysis of my findings.

Taking the parameter SEC_RETURN_SERVER_RELEASE_BANNER as an example.

Prior to 12c the default value for this parameter was ‘FALSE’, whereas the documentation for 12c (https://docs.oracle.com/database/121/REFRN/refrn10275.htm) states that the default is ‘TRUE’.

To confirm this, I made a connection to a 12c (12.1.0.2) database and ran the following query:

select name, value, default_value, isdefault
from   v$parameter
where  name = 'sec_return_server_release_banner';
NAME                                     VALUE               DEFAULT_VALUE       ISDEFAULT
---------------------------------------- -------------------- -------------------- ---------
sec_return_server_release_banner         FALSE               TRUE                 TRUE

After confirming that the parameter had not been explicitly set in the parameter file, or as part of an alter system/session command, we could see that the actual value held in VALUE does not match the value provided by DEFAULT_VALUE, nor does it match the value it should have been assigned according to the documentation.
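For reference, one way to make that confirmation (a sketch; ISSPECIFIED = FALSE means the parameter is not present in the spfile):

select name, value, isspecified
from   v$spparameter
where  name = 'sec_return_server_release_banner';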

Posted in 11g new features, 12c new features, Oracle | 2 Comments »

 