Where has consistent=y gone when using Datapump export
Posted by John Hallas on December 5, 2008
Well, it is not needed any more according to Oracle. Let's take a look at a bit of history. From 9i, when flashback query came in, it was possible to export as at an SCN by setting the parameter flashback_scn or flashback_time. Below you can see the relevant options from a 10g exp help=y command:
CONSISTENT cross-table consistency(N)
FLASHBACK_SCN SCN used to set session snapshot back to
FLASHBACK_TIME time used to get the SCN closest to the specified time
So both consistent=y and flashback options were available (provided you were using undo management).
Roll forward to 10g and onwards (and bear in mind that exp/imp are rumoured to be unsupported with 11g R2) and it seems obvious that flashback is the way to get a consistent query/export using Datapump.
Flashback_scn
There are at least three ways of getting the latest SCN from a database:
select current_scn from v$database;
select dbms_flashback.get_system_change_number from dual;
using oradebug on kcsgscn_ (for full details see http://www.eygle.com/archives/2006/01/oracle_howto_get_scn.html)
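As an aside, 10g and later also provide the SCN_TO_TIMESTAMP and TIMESTAMP_TO_SCN functions, which let you convert between the two representations when deciding whether to use flashback_scn or flashback_time (a sketch; note the SCN-to-time mapping is only approximate, with roughly 3-second granularity):

```sql
-- Current SCN of the database
SELECT current_scn FROM v$database;

-- Convert that SCN to an approximate timestamp (10g+)
SELECT scn_to_timestamp(current_scn) FROM v$database;

-- Convert a timestamp back to the nearest SCN (10g+)
SELECT timestamp_to_scn(SYSTIMESTAMP) FROM dual;
```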
So if we want to run a large export whilst the database is in use, then ideally we should always use one of the two flashback parameters, preferably supplied via a parameter file (my personal preference).
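A sketch of what such a parameter file might look like; the SCN, directory and file names below are made up for illustration:

```
# parfile.dat - hypothetical example; directory, dumpfile and SCN are illustrative
userid=system
schemas=system
directory=DATA_PUMP_DIR
dumpfile=consistent.dmp
logfile=consistent.log
flashback_scn=1234567
```

The SCN would typically be captured immediately beforehand with one of the queries above.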
Flashback_time
There are a few notes around about issues with this parameter, and it can be tricky to get the syntax right first time:
userid=system schemas=system dumpfile=test.dmp FLASHBACK_TIME="to_timestamp('05-DEC-2008 14:13:00','DD-MON-YYYY HH24:MI:SS')"
There is an interesting note http://forums.oracle.com/forums/thread.jspa?threadID=623510 that talks about export consistency when using flashback_time
Here is a worked example of using flashback_time to copy a table as at a point in the past.
SQL> create table system.test as select * from dba_objects where rownum < 101;
Table created.
SQL> select count(*) from test;
100
Export the table (100 rows), noting the time.
expdp parfile=parfile.dat (where parfile.dat contains: userid=system/xxxx tables=system.test dumpfile=test.dmp)
Export: Release 11.1.0.7.0 - 64bit Production on Friday, 05 December, 2008 19:35:18
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TABLE_01": system/******** parfile=parfile.dat
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "SYSTEM"."TEST" 18.58 KB 100 rows
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
/app/oracle/admin/MDWDEV2A/dpdump/test.dmp
Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at 19:35:37
Delete some rows
SQL> delete from system.test where rownum < 50;
49 rows deleted.
SQL> commit;
Commit complete.
SQL> select count(*) from system.test;
51
Now try the export again, using a flashback_time from before the rows were deleted.
$ rm test.dmp
$ expdp parfile=parfile.dat (userid=system tables=system.test dumpfile=test.dmp FLASHBACK_TIME="to_timestamp('05-DEC-2008 19:35:00','DD-MON-YYYY HH24:MI:SS')")
Export: Release 11.1.0.7.0 - 64bit Production on Friday, 05 December, 2008 19:38:30
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TABLE_01": system/******** parfile=parfile.dat
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "SYSTEM"."TEST" 18.58 KB 100 rows
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
/app/oracle/admin/MDWDEV2A/dpdump/test.dmp
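As a cross-check, a flashback query against the live table should show the same data the export captured (a sketch; it assumes your undo retention still covers the interval, otherwise you will get an ORA-01555):

```sql
-- Flashback query: read system.test as it was before the delete
SELECT COUNT(*)
FROM   system.test
AS OF  TIMESTAMP to_timestamp('05-DEC-2008 19:35:00','DD-MON-YYYY HH24:MI:SS');
-- should return the pre-delete count of 100, while a plain count now returns 51
```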
Note that the second export again contains 100 rows, even though only 51 remain in the table: the flashback_time parameter gave us the data as at the earlier point in time. There are a lot of permutations and possibilities in combining the flashback technology with Datapump, and I have only explored a small area here.
Dan Zheng said
Please see my blog about this topic…
iphone said
Good article. Are the old exp and imp tools still supported in 11gR2?
tinky2jed said
I'm not sure about exp in 11gR2, but imp still works. I used it to upgrade from 8i to 11g and it worked fine.
Stephen Wood said
When using the FLASHBACK_TIME parameter in expdp, it's worth noting that you can't create a 'before change' export if the table definition has changed (e.g. you added a column); otherwise you'll see this in the export log:
. . exported "COMMUNE"."REPORTING" 5.320 KB 1 rows
. . exported "COMMUNE"."SOCIAL" 5.343 KB 3 rows
ORA-31693: Table data object "COMMUNE"."EVENTS" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-01466: unable to read data - table definition has changed