Oracle DBA – A lifelong learning experience

Using DBMS_PARALLEL_EXECUTE

Posted by John Hallas on August 13, 2015


We have a number of updates to partitioned tables that run from within PL/SQL blocks containing either execute immediate 'alter session enable parallel dml' or execute immediate 'alter session force parallel dml'. The alter session appears to have no effect: we end up with non-parallel plans. When the same statements are run outside PL/SQL, in SQL*Plus or SQL Developer sessions, the updates are given a parallel plan. We have a simple test pack that demonstrates this anomaly at 11.1.0.7 (the version of the affected database) and at 12.1.0.2 (to show that it is not an issue confined to that version).
It appears that the optimizer is unaware that the alter session has been performed. We also tried issuing the alter session statement outside the PL/SQL block, i.e. in the native SQL*Plus environment, but that did not produce a parallel plan either.

Let me show a test case.

First we tried an anonymous PL/SQL block with an execute immediate setting force parallel dml for the session:

create table target_table
(
  c1 number(6),
  c2 varchar2(1024)
)
partition by range (c1)
(
  partition p1 values less than (2),
  partition p2 values less than (3),
  partition p3 values less than (100)
);

create unique index target_table_pk on target_table (c1, c2) local;

alter table target_table add constraint target_table_pk primary key (c1, c2) using index;

create table source_table
(
  c1 number(6),
  c2 varchar2(1024)
);

insert /*+append */ into source_table (select distinct 2, owner||object_type||object_name from dba_objects);

commit;

select count(*) from source_table;

begin
  execute immediate 'alter session force parallel dml';
  insert /*+append parallel */ into target_table
  select * from source_table;
end;
/

 

This load generates a serial plan:

-----------------------------------------------------------------------------------
| Id  | Operation          | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | INSERT STATEMENT   |              |       |       |   143 (100)|          |
|   1 |  LOAD AS SELECT    |              |       |       |            |          |
|   2 |   TABLE ACCESS FULL| SOURCE_TABLE | 80198 |    40M|   143   (1)| 00:00:02 |
-----------------------------------------------------------------------------------

For comparison, this is what the plan looks like when parallel DML is enabled directly in a SQL*Plus session:

 

truncate table target_table;

alter session force parallel dml;

INSERT /*+append sqlplus*/ INTO TARGET_TABLE SELECT * FROM SOURCE_TABLE;

-------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name         | Rows  | Bytes | Cost (%CPU)| Time     |    TQ |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT             |              |       |       |     5 (100)|          |       |      |            |
|   1 |  PX COORDINATOR              |              |       |       |            |          |       |      |            |
|   2 |   PX SEND QC (RANDOM)        | :TQ10002     | 80198 |    40M|     5   (0)| 00:00:01 | Q1,02 | P->S | QC (RAND)  |
|   3 |    INDEX MAINTENANCE         | TARGET_TABLE |       |       |            |          | Q1,02 | PCWP |            |
|   4 |     PX RECEIVE               |              | 80198 |    40M|     5   (0)| 00:00:01 | Q1,02 | PCWP |            |
|   5 |      PX SEND RANGE           | :TQ10001     | 80198 |    40M|     5   (0)| 00:00:01 | Q1,01 | P->P | RANGE      |
|   6 |       LOAD AS SELECT         |              |       |       |            |          | Q1,01 | PCWP |            |
|   7 |        PX RECEIVE            |              | 80198 |    40M|     5   (0)| 00:00:01 | Q1,01 | PCWP |            |
|   8 |         PX SEND RANDOM LOCAL | :TQ10000     | 80198 |    40M|     5   (0)| 00:00:01 | Q1,00 | P->P | RANDOM LOCA|
|   9 |          PX BLOCK ITERATOR   |              | 80198 |    40M|     5   (0)| 00:00:01 | Q1,00 | PCWC |            |
|* 10 |           TABLE ACCESS FULL  | SOURCE_TABLE | 80198 |    40M|     5   (0)| 00:00:01 | Q1,00 | PCWP |            |
-------------------------------------------------------------------------------------------------------------------------

 

truncate table target_table;

 

 

Oracle offered us two pieces of advice:

  • Use a PARALLEL_ENABLE clause on a function (a sketch of this pattern appears below)
  • Use the DBMS_PARALLEL_EXECUTE package to achieve parallelism (only available from 11.2 onwards)

They also referred us to BUG 12734028 – PDML ONLY FROM PL/SQL DOES NOT WORK CORRECTLY and to How To Enable Parallel Query For A Function? (Doc ID 1093773.1).
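For reference, this is roughly what the first suggestion means. The sketch below is my reading of the MOS note rather than the code we actually tested; the type num_tab and the function process_rows are illustrative names. A PARALLEL_ENABLE pipelined function is driven in parallel by wrapping a parallel query in a ref cursor:

create or replace type num_tab as table of number;
/

create or replace function process_rows (p_cur sys_refcursor)
  return num_tab pipelined
  parallel_enable (partition p_cur by any)
as
  v_c1 number;
  v_c2 varchar2(1024);
begin
  loop
    fetch p_cur into v_c1, v_c2;
    exit when p_cur%notfound;
    -- per-row processing would go here (DML issued from a function
    -- called in SQL would need an autonomous transaction)
    pipe row (v_c1);
  end loop;
  return;
end;
/

-- invoked from a parallel SQL context:
select count(*)
from   table(process_rows(cursor(
         select /*+ parallel(s 4) */ c1, c2 from source_table s)));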

We did try the first option, the function, but that failed and we did not pursue it further, concentrating instead on the DBMS_PARALLEL_EXECUTE package.

The rest of this blog covers how our testing went and what results we achieved.

Starting with the same source_table contents, and an empty target table, a task needs to be created:

 

BEGIN
  DBMS_PARALLEL_EXECUTE.create_task (task_name => DBMS_PARALLEL_EXECUTE.generate_task_name);
END;
/

The task name could also be set manually, but that does not lend itself to being proceduralised, as the name needs to be unique.
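For completeness, creating a task with an explicit name looks like this (the name here is just an illustration):

BEGIN
  DBMS_PARALLEL_EXECUTE.create_task (task_name => 'LOAD_TARGET_TABLE');
END;
/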

To determine the identifier for the task:

 

SELECT task_name, chunk_type, status
FROM   user_parallel_execute_tasks
WHERE  status = 'CREATED';

TASK_NAME                                CHUNK_TYPE   STATUS
---------------------------------------- ------------ -------------------
TASK$_141                                UNDELARED    CREATED

(Yes, UNDELARED really is how the view spells it.)

 

Now the source_table data set must be split into chunks, defining the discrete subsets of data that the subordinate jobs will handle. This demo splits the table by rowid, but it can also be split by block counts or by the values in a specific numeric column.

exec DBMS_PARALLEL_EXECUTE.create_chunks_by_rowid(task_name => 'TASK$_141',-
table_owner => 'SYS',-
table_name  => 'SOURCE_TABLE',-
by_row      => TRUE,-
chunk_size  => 20000)

 

 

Note that there are three procedures that can be used to create chunks:

PROCEDURE CREATE_CHUNKS_BY_NUMBER_COL
PROCEDURE CREATE_CHUNKS_BY_ROWID
PROCEDURE CREATE_CHUNKS_BY_SQL
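As an aside, this is a sketch of what chunking by a numeric column would look like for this table; we did not test this variant and the parameter values are illustrative only. (CREATE_CHUNKS_BY_SQL instead takes a user-supplied query returning start/end id pairs, or start/end rowid pairs.)

BEGIN
  DBMS_PARALLEL_EXECUTE.create_chunks_by_number_col(
    task_name    => 'TASK$_141',
    table_owner  => 'SYS',
    table_name   => 'SOURCE_TABLE',
    table_column => 'C1',
    chunk_size   => 10);
END;
/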

 

As the chunk size is set to 20000 and the table contains just over 80000 rows, one would expect 5 chunks to be created:

col CHUNK_ID form 999
col TASK_NAME form a10
col START_ROWID form a20
col END_ROWID form a20
set pages 60

select CHUNK_ID, TASK_NAME, STATUS, START_ROWID, END_ROWID
from   user_parallel_execute_chunks
where  TASK_NAME = 'TASK$_141'
order  by 1;


 

CHUNK_ID TASK_NAME               STATUS               START_ROWID        END_ROWID
-------- ----------------------- -------------------- ------------------ ------------------

142 TASK$_141               UNASSIGNED           AAA2sTAABAAAbtoAAA AAA2sTAABAAAbtvCcP

143 TASK$_141               UNASSIGNED           AAA2sTAABAAAbtwAAA AAA2sTAABAAAbt3CcP

144 TASK$_141               UNASSIGNED           AAA2sTAABAAAbt4AAA AAA2sTAABAAAbt/CcP

145 TASK$_141               UNASSIGNED           AAA2sTAABAAAdSAAAA AAA2sTAABAAAdSHCcP

146 TASK$_141               UNASSIGNED           AAA2sTAABAAAdSIAAA AAA2sTAABAAAdSPCcP

147 TASK$_141               UNASSIGNED           AAA2sTAABAAAdSQAAA AAA2sTAABAAAdSXCcP

148 TASK$_141               UNASSIGNED           AAA2sTAABAAAdSYAAA AAA2sTAABAAAdSfCcP

149 TASK$_141               UNASSIGNED           AAA2sTAABAAAdSgAAA AAA2sTAABAAAdSnCcP

150 TASK$_141               UNASSIGNED           AAA2sTAABAAAdSoAAA AAA2sTAABAAAdSvCcP

151 TASK$_141               UNASSIGNED           AAA2sTAABAAAdSwAAA AAA2sTAABAAAdS3CcP

152 TASK$_141               UNASSIGNED           AAA2sTAABAAAdS4AAA AAA2sTAABAAAdS/CcP

153 TASK$_141               UNASSIGNED           AAA2sTAABAAAdTAAAA AAA2sTAABAAAdTHCcP

154 TASK$_141               UNASSIGNED          AAA2sTAABAAAdTIAAA AAA2sTAABAAAdTPCcP

155 TASK$_141               UNASSIGNED           AAA2sTAABAAAdTQAAA AAA2sTAABAAAdTXCcP

156 TASK$_141               UNASSIGNED           AAA2sTAABAAAdTYAAA AAA2sTAABAAAdTfCcP

157 TASK$_141                UNASSIGNED           AAA2sTAABAAAdTgAAA AAA2sTAABAAAdTnCcP

158 TASK$_141               UNASSIGNED           AAA2sTAABAAAdUAAAA AAA2sTAABAAAdUxCcP

159 TASK$_141               UNASSIGNED           AAA2sTAABAAAdUyAAA AAA2sTAABAAAdVjCcP

160 TASK$_141               UNASSIGNED           AAA2sTAABAAAdVkAAA AAA2sTAABAAAdV/CcP

161 TASK$_141               UNASSIGNED           AAA2sTAABAAAdWAAAA AAA2sTAABAAAdWxCcP

162 TASK$_141               UNASSIGNED           AAA2sTAABAAAdWyAAA AAA2sTAABAAAdXjCcP

163 TASK$_141               UNASSIGNED           AAA2sTAABAAAdXkAAA AAA2sTAABAAAdX/CcP

164 TASK$_141               UNASSIGNED           AAA2sTAABAAAdYAAAA AAA2sTAABAAAdYxCcP

165 TASK$_141                UNASSIGNED           AAA2sTAABAAAdYyAAA AAA2sTAABAAAdZjCcP

166 TASK$_141               UNASSIGNED           AAA2sTAABAAAdZkAAA AAA2sTAABAAAdZ/CcP

167 TASK$_141               UNASSIGNED           AAA2sTAABAAAdaAAAA AAA2sTAABAAAdaxCcP

168 TASK$_141               UNASSIGNED           AAA2sTAABAAAdayAAA AAA2sTAABAAAdbjCcP

169 TASK$_141               UNASSIGNED           AAA2sTAABAAAdbkAAA AAA2sTAABAAAdb/CcP

 

28 rows selected.

Tests were run changing the chunk_size to 40000 and still 28 chunks were created, which suggests (though we did not prove it) that the rowid chunker is driven by the table's extent map rather than by exact row counts.

Looking at the details of the main task in USER_PARALLEL_EXECUTE_TASKS:

SELECT task_name, chunk_type, status
FROM   user_parallel_execute_tasks
WHERE  task_name = 'TASK$_141';

TASK_NAME                                CHUNK_TYPE   STATUS
---------------------------------------- ------------ -------------------
TASK$_141                                ROWID_RANGE  CHUNKED

This shows that the task has been split into discrete rowid ranges, and how those ranges were determined.

To execute the insert into the target table we create an anonymous PL/SQL block that declares the SQL to be run, with an additional predicate restricting each execution to a range of rowids. The framework supplies the ranges recorded in user_parallel_execute_chunks; note that the two binds must be named :start_id and :end_id, as those are what run_task binds for each chunk.

 

DECLARE
  v_sql VARCHAR2(1024);
BEGIN
  v_sql := 'insert /*+PARALLEL APPEND */ into target_table
            select * from source_table
            where rowid between :start_id and :end_id';

  dbms_parallel_execute.run_task(task_name      => 'TASK$_141',
                                 sql_stmt       => v_sql,
                                 language_flag  => DBMS_SQL.NATIVE,
                                 parallel_level => 5);
END;
/

PL/SQL procedure successfully completed.

Checking the contents of the target table:

select count(*) from target_table;

COUNT(*)

----------

86353

Now looking at the details of the main task in USER_PARALLEL_EXECUTE_TASKS:

SELECT task_name, status
FROM   user_parallel_execute_tasks
WHERE  task_name = 'TASK$_141';

TASK_NAME                               STATUS

---------------------------------------- -------------------

TASK$_141                               FINISHED

 

Looking at the user_parallel_execute_chunks entries after completion:

select CHUNK_ID, JOB_NAME, START_TS, END_TS
from   user_parallel_execute_chunks
where  TASK_NAME = 'TASK$_141'
order  by 2,3;

CHUNK_ID JOB_NAME             START_TS                       END_TS

---------- -------------------- ------------------------------ -------------------------------

142 TASK$_7410_1         30-JUL-15 02.34.31.672484 PM   30-JUL-15 02.34.31.872794 PM

144 TASK$_7410_1         30-JUL-15 02.34.31.877091 PM   30-JUL-15 02.34.32.204513 PM

146 TASK$_7410_1         30-JUL-15 02.34.32.209950 PM   30-JUL-15 02.34.32.331349 PM

148 TASK$_7410_1         30-JUL-15 02.34.32.335192 PM   30-JUL-15 02.34.32.528391 PM

150 TASK$_7410_1         30-JUL-15 02.34.32.533488 PM   30-JUL-15 02.34.32.570243 PM

152 TASK$_7410_1         30-JUL-15 02.34.32.575450 PM   30-JUL-15 02.34.32.702353 PM

154 TASK$_7410_1         30-JUL-15 02.34.32.710860 PM   30-JUL-15 02.34.32.817684 PM

156 TASK$_7410_1         30-JUL-15 02.34.32.828963 PM   30-JUL-15 02.34.32.888834 PM

158 TASK$_7410_1         30-JUL-15 02.34.32.898458 PM   30-JUL-15 02.34.33.493985 PM

160 TASK$_7410_1         30-JUL-15 02.34.33.499254 PM   30-JUL-15 02.34.33.944356 PM

162 TASK$_7410_1         30-JUL-15 02.34.33.953509 PM   30-JUL-15 02.34.34.366352 PM

164 TASK$_7410_1         30-JUL-15 02.34.34.368668 PM   30-JUL-15 02.34.34.911471 PM

166 TASK$_7410_1         30-JUL-15 02.34.34.915205 PM   30-JUL-15 02.34.35.524515 PM

168 TASK$_7410_1         30-JUL-15 02.34.35.527515 PM   30-JUL-15 02.34.35.889198 PM

169 TASK$_7410_1         30-JUL-15 02.34.35.889872 PM   30-JUL-15 02.34.35.890412 PM

143 TASK$_7410_2         30-JUL-15 02.34.31.677235 PM   30-JUL-15 02.34.32.129181 PM

145 TASK$_7410_2         30-JUL-15 02.34.32.135013 PM   30-JUL-15 02.34.32.304761 PM

147 TASK$_7410_2         30-JUL-15 02.34.32.310140 PM   30-JUL-15 02.34.32.485545 PM

149 TASK$_7410_2         30-JUL-15 02.34.32.495971 PM   30-JUL-15 02.34.32.550955 PM

151 TASK$_7410_2         30-JUL-15 02.34.32.558335 PM   30-JUL-15 02.34.32.629274 PM

153 TASK$_7410_2         30-JUL-15 02.34.32.644917 PM   30-JUL-15 02.34.32.764337 PM

155 TASK$_7410_2         30-JUL-15 02.34.32.773029 PM   30-JUL-15 02.34.32.857794 PM

157 TASK$_7410_2         30-JUL-15 02.34.32.864875 PM   30-JUL-15 02.34.32.908799 PM

159 TASK$_7410_2         30-JUL-15 02.34.32.913982 PM   30-JUL-15 02.34.33.669704 PM

161 TASK$_7410_2         30-JUL-15 02.34.33.672077 PM   30-JUL-15 02.34.34.128170 PM

163 TASK$_7410_2         30-JUL-15 02.34.34.140102 PM   30-JUL-15 02.34.34.624627 PM

165 TASK$_7410_2         30-JUL-15 02.34.34.628145 PM   30-JUL-15 02.34.35.431037 PM

167 TASK$_7410_2         30-JUL-15 02.34.35.433282 PM   30-JUL-15 02.34.35.885741 PM
28 rows selected.

 

From these details it appears that only two jobs processed the sub-tasks, even though a parallel_level of 5 was specified. Why is that? Not enough data to justify more workers, or a back-end resource limit?
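One way to confirm how many worker jobs were actually used is to count the distinct job names recorded against the chunks. It is also worth remembering that DBMS_PARALLEL_EXECUTE runs its workers as scheduler jobs, so (an assumption on my part for this system) a low job_queue_processes setting would cap the effective parallel_level:

-- how many distinct scheduler jobs serviced the chunks?
select count(distinct job_name)
from   user_parallel_execute_chunks
where  task_name = 'TASK$_141';

show parameter job_queue_processes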

Looking at the query in a tkprof listing of one of the jobs’ trace files, you can see that each job executes the same serial plan that was generated by the original PL/SQL; the parallelism comes from the two jobs each working through their own chunks concurrently.

 

insert /*+PARALLEL APPEND */ into target_table
           select * from
source_table
             where rowid between :start_id and :end_id


call     count       cpu   elapsed       disk     query   current       rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse       1     0.00       0.00         0         0         0           0
Execute     6     0.26       3.03         4       599       8128       17849
Fetch        0     0.00       0.00         0         0         0           0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total       7     0.26       3.03         4       599       8128       17849

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: SYS   (recursive depth: 2)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
         0         0         0 LOAD AS SELECT (cr=59 pr=0 pw=8 time=92689 us)
     1240       1240       1240   FILTER (cr=13 pr=0 pw=0 time=501 us)
     1240       1240       1240   TABLE ACCESS BY ROWID RANGE SOURCE_TABLE (cr=13 pr=0 pw=0 time=368 us cost=42 size=122892 card=228)

 

I think I could do more testing in this area to try to answer the following questions:

Why does the parallel_level parameter not appear to be honoured?

Does it depend on data volumes or back-end resources?

 

What I was interested in seeing was the breakdown of chunks by row count – was it an even split? (This output comes from a separate run of the same process, hence the task name TASK$_107.)

select   a.chunk_id, count(*)
from     user_parallel_execute_chunks a,
         source_table b
where    a.TASK_NAME = 'TASK$_107'
and      b.rowid between a.START_ROWID and a.END_ROWID
group by a.chunk_id
order by 1;

  CHUNK_ID   COUNT(*)
---------- ----------
       108       1407
       109       1233
       110       1217
       111       1193
       112       1192
       113       1274
       114       1213
       115       1589
       116       1191
       117       1226
       118       1190
       119       1201
       120       1259
       121       1273
       122       1528
       123       1176
       124      15874
       125       4316
       126      15933
       127       4283
       128       9055

Reasonably even, with most chunks selecting around 1,200 rows each, but two chunks contained around 15K rows apiece, so not that well chunked overall.

Repeating the process, but chunking by blocks this time (by_row => FALSE, with a chunk_size of 20 blocks), produced the following results:

exec DBMS_PARALLEL_EXECUTE.create_chunks_by_rowid(task_name => 'TASK$_133',-
table_owner => 'SYS',-
table_name  => 'SOURCE_TABLE',-
by_row      => FALSE,-
chunk_size  => 20);

select task_name, chunk_id, dbms_rowid.rowid_block_number(start_rowid) Start_Id,
       dbms_rowid.rowid_block_number(end_rowid) End_Id,
       dbms_rowid.rowid_block_number(end_rowid) - dbms_rowid.rowid_block_number(start_rowid) num_blocks
from   user_parallel_execute_chunks
order  by task_name, chunk_id;

 

TASK_NAME             CHUNK_ID   START_ID     END_ID NUM_BLOCKS

-------------------- ---------- ---------- ---------- ----------

TASK$_133                   134     70096     70103         7

TASK$_133                   135     70104     70111         7

TASK$_133                   136     70112     70119         7

TASK$_133                   137     70120     70127         7

TASK$_133                   138     70128     70135         7

TASK$_133                   139     70136     70143         7

TASK$_133                   140     73344     73351        7

TASK$_133                   141     73352     73359         7

TASK$_133                   142     73360     73367         7

TASK$_133                   143     73368     73375         7

TASK$_133                   144     73376     73383        7

TASK$_133                   145     73384     73391         7

TASK$_133                   146     73392     73399         7

TASK$_133                   147     73400     73407         7

TASK$_133                   148     73408     73415         7

TASK$_133                   149     73416     73423         7

TASK$_133                   150     73472     73521         49

TASK$_133                   151     73522     73571         49

TASK$_133                   152     73572     73599         27

TASK$_133                   153     73600     73649         49

TASK$_133                   154     73650     73699         49

TASK$_133                   155     73700     73727         27

TASK$_133                   156     73728      73777         49

TASK$_133                   157     73778     73827         49

TASK$_133                   158     73828     73855         27

 

So again not a very even split. Perhaps there needs to be more data to make the chunking worthwhile.

 

Error handling:  

 

Finally, let's look at how the package handles errors. To save the reader time, the short answer is 'not very well'.

We can force a duplicate key error to demonstrate. The original 80,000+ rows are left in target_table, and the same set of data is inserted again, which will cause a unique constraint violation on the primary key:

 

BEGIN
  DBMS_PARALLEL_EXECUTE.create_task (task_name => DBMS_PARALLEL_EXECUTE.generate_task_name);
END;
/

 

PL/SQL procedure successfully completed.

 

 

SELECT task_name, chunk_type, status
FROM   user_parallel_execute_tasks
WHERE  status = 'CREATED';

TASK_NAME                                CHUNK_TYPE   STATUS
---------------------------------------- ------------ -------------------
TASK$_170                                UNDELARED    CREATED

 

exec DBMS_PARALLEL_EXECUTE.create_chunks_by_rowid(task_name => 'TASK$_170',-
table_owner => 'SYS',-
table_name  => 'SOURCE_TABLE',-
by_row      => TRUE,-
chunk_size  => 40000)

DECLARE
  v_sql VARCHAR2(1024);
BEGIN
  v_sql := 'insert /*+PARALLEL APPEND */ into target_table
            select * from source_table
            where rowid between :start_id and :end_id';

  dbms_parallel_execute.run_task(task_name      => 'TASK$_170',
                                 sql_stmt       => v_sql,
                                 language_flag  => DBMS_SQL.NATIVE,
                                 parallel_level => 10);
END;
/

 

PL/SQL procedure successfully completed.

 

select count(*) from target_table;

 

COUNT(*)

----------

86353

No error is thrown by the run_task call, but the number of rows has not increased. Checking the status of the task shows that there was indeed an error:

 

select task_name, status from user_parallel_execute_tasks where task_name = 'TASK$_170';

 

TASK_NAME                                STATUS
---------------------------------------- -------------------
TASK$_170                                FINISHED_WITH_ERROR

The error details are given in the user_parallel_execute_chunks view:

 

select TASK_NAME, ERROR_MESSAGE, STATUS
from   user_parallel_execute_chunks
where  TASK_NAME = 'TASK$_170'
order  by 2;

 

TASK_NAME       ERROR_MESSAGE                                                STATUS
--------------- ------------------------------------------------------------ ----------------------------------------

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170      ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170                                                                   PROCESSED

TASK$_170                                                                   PROCESSED

 

28 rows selected.
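We did not test this here, but the package does provide a retry path: once the root cause is fixed, RESUME_TASK reprocesses just the chunks left in error, and DROP_TASK removes the task and chunk metadata afterwards. A minimal sketch:

-- retry only the failed/unprocessed chunks of the task
BEGIN
  DBMS_PARALLEL_EXECUTE.resume_task('TASK$_170');
END;
/

-- clean up the task metadata when finished with it
BEGIN
  DBMS_PARALLEL_EXECUTE.drop_task('TASK$_170');
END;
/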


Responses to “Using DBMS_PARALLEL_EXECUTE”

  1. samelstob said

    “Use a PARALLEL_ENABLE clause through a function”

    Could you explain this a little more. Do you mean change the PL/SQL block containing the query to a PL/SQL function and mark it as PARALLEL_ENABLE?

    • John Hallas said

      How To Enable Parallel Query For A Function? ( Doc ID 1093773.1 )

      http://docs.oracle.com/cd/B28359_01/appdev.111/b28424/adfns_packages.htm#g1044585

      From the above documents and other information available, to enable parallel processing from PL/SQL we have to use the PARALLEL_ENABLE clause on a function. This can be used in combination with PQ/PX and other parallel hints; only then will the optimizer use parallelism.

      The other way is to use the DBMS_PARALLEL_EXECUTE package to achieve parallelism, although with DBMS_PARALLEL_EXECUTE there is no concept of PQ. Otherwise ‘force parallel dml’ will have no impact in anonymous PL/SQL blocks and procedures.

      • samelstob said

        I don’t know if Oracle support have something else in mind with this suggestion but it doesn’t seem applicable to this situation.

        PARALLEL_ENABLE is about calling a PL/SQL function from parallel SQL context. This is the opposite situation to the one here where a PDML SQL is being called from PL/SQL context.

        I wonder if a PL/SQL execute immediate of the PDML INSERT would work around the bug? Shame to have to implement hand-cranked parallelism with DBMS_PARALLEL_EXECUTE if it could be avoided.
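        (As an illustration only, the suggested workaround would amount to something like the block below; whether it actually produces a parallel plan on the affected versions is exactly what would need testing.)

        BEGIN
          EXECUTE IMMEDIATE 'alter session force parallel dml';
          EXECUTE IMMEDIATE 'insert /*+ append parallel */ into target_table select * from source_table';
          COMMIT;
        END;
        /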
