Oracle DBA – A lifelong learning experience

Posts Tagged ‘expdp’

Resolving two errors on Datapump export – parameter cluster=N/Y

Posted by John Hallas on December 5, 2016

I attempt to export a table with 76,000 rows using a relatively standard datapump command

expdp schemas=ODI_TEMP cluster=y directory=MCB content=ALL dumpfile=OEC_%U.dmp include=table:"IN('ORDM_ERROR_COUNTS')" parallel=8 logfile=OEC.log

It fails with a fatal error and also produces a warning

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 – 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management and OLAP options
Starting “ODI_TEMP”.”SYS_EXPORT_SCHEMA_03″:  odi_temp/******** schemas=ODI_TEMP cluster=y directory=MCB content=ALL dumpfile=OEC_%U.dmp include=table:IN(‘ORDM_ERROR_COUNTS’) parallel=8 logfile=OEC.log
Estimate in progress using BLOCKS method…
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 41 MB
>>> DBMS_AW_EXP: Ignoring BIA_RTL.MRDW_SLS_MRGN_AW
Processing object type SCHEMA_EXPORT/TABLE/TABLE
ORA-31693: Table data object “ODI_TEMP”.”ORDM_ERROR_COUNTS” failed to load/unload and is being skipped due to error:
ORA-31617: unable to open dump file “/zfs/exports2/mcb/dp/OEC_01.dmp” for write
ORA-19505: failed to identify file “/zfs/exports2/mcb/dp/OEC_01.dmp”
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
>>> DBMS_AW_EXP: BIN$IrRPsbepeYXgU60IBArnYQ==$0 not AW$
>>> DBMS_AW_EXP: BIN$IrRPsbereYXgU60IBArnYQ==$0 not AW$
>>> DBMS_AW_EXP: BIN$IrRPsbeteYXgU60IBArnYQ==$0 not AW$
>>> DBMS_AW_EXP: BIN$IrRPsbeveYXgU60IBArnYQ==$0 not AW$
>>> DBMS_AW_EXP: BIN$IrRPsbexeYXgU60IBArnYQ==$0 not AW$
>>> DBMS_AW_EXP: BIN$IrRPsbezeYXgU60IBArnYQ==$0 not AW$
ORA-31694: master table “ODI_TEMP”.”SYS_EXPORT_SCHEMA_03″ failed to load/unload
ORA-31617: unable to open dump file “/zfs/exports2/mcb/dp/OEC_01.dmp” for write
ORA-19505: failed to identify file “/zfs/exports2/mcb/dp/OEC_01.dmp”
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
ORA-19505: failed to identify file “/zfs/exports2/mcb/dp/OEC_01.dmp”
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3

 

The fatal error obviously needs resolving, and I attempt all sorts of things: changing the directory permissions, checking that I can write a file into the directory manually, and trying a variety of directories, all ending with the same failure message. What is even more odd is that 2 of the 8 dump files I asked for are being created.

-rw-r-----+ 1 oracle dba            4096 Dec  3 14:58 OEC_01.dmp
-rw-r-----+ 1 oracle dba           24576 Dec  3 14:59 OEC_02.dmp
-rw-r--r--+ 1 oracle dba            2200 Dec  3 14:59 OEC.log

I should mention at this point that I am using an 8-node RAC cluster. You may have noticed the cluster=y parameter in my expdp command; I don't normally enter it, as it is the default anyway, and only included it here to demonstrate the issue.

The answer, of course, is that the directory I am trying to write to does not exist on all 8 nodes. In fact I have jumped onto node 1, as I normally do, and the directory does not actually exist on any of the other 7 nodes. So the failure message is entirely accurate but quite hard to track down. I re-run the command with cluster=N, which means the job will still attempt to write 8 parallel files, but all on the current node.
Master table “ODI_TEMP”.”SYS_EXPORT_SCHEMA_04″ successfully loaded/unloaded
******************************************************************************
Dump file set for ODI_TEMP.SYS_EXPORT_SCHEMA_04 is:
/zfs/exports2/mcb/dp/OEC_01.dmp
/zfs/exports2/mcb/dp/OEC_02.dmp
Job “ODI_TEMP”.”SYS_EXPORT_SCHEMA_04″ successfully completed at Sat Dec 3 15:12:46 2016

Excellent – the job has worked. But it still only created 2 dump files; how can that be? I mentioned that the table only has 76K rows in it, so Oracle knows it is overkill to run 8 streams and scales the job down accordingly.
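For anyone hitting the same thing, a worthwhile sanity check before relying on cluster=Y is to confirm exactly where the directory object points and then verify that the path exists, with the same ownership and permissions, on every node in the cluster. A minimal sketch, assuming the directory object is called MCB as in the command above:

-- confirm the operating system path behind the directory object
select directory_name, directory_path
from dba_directories
where directory_name = 'MCB';

-- the path returned above must exist on the host of every instance listed here
select inst_id, instance_name, host_name
from gv$instance
order by inst_id;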

The warning >>> DBMS_AW_EXP: BIN$IrRPsbepeYXgU60IBArnYQ==$0 not AW$ is normally caused by objects sitting in the DBA recyclebin.
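To see exactly which dropped objects are generating the messages, the recyclebin can be queried directly; a minimal sketch:

-- the BIN$... names in the DBMS_AW_EXP messages are dropped objects sitting in the recyclebin
select owner, object_name, original_name, type, droptime
from dba_recyclebin
order by droptime;

The log below, taken from a test environment, shows the warnings appearing during an export before the recyclebin was purged.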

Starting “SYS”.”SYS_EXPORT_SCHEMA_01″:  /******** AS SYSDBA schemas=ODI_TEMP_TST1 cluster=N directory= content=ALL dumpfile=JH%U.dmp include=table:IN(‘E$_DWB_POINTS_TRANSACTION’) parallel=4 logfile=jh.log
Estimate in progress using BLOCKS method…
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 800 MB
>>> DBMS_AW_EXP: BIN$IrRPsbepeYXgU60IBArnYQ==$0 not AW$
>>> DBMS_AW_EXP: BIN$IrRPsbereYXgU60IBArnYQ==$0 not AW$
>>> DBMS_AW_EXP: BIN$IrRPsbeteYXgU60IBArnYQ==$0 not AW$
>>> DBMS_AW_EXP: BIN$IrRPsbeveYXgU60IBArnYQ==$0 not AW$
>>> DBMS_AW_EXP: BIN$IrRPsbexeYXgU60IBArnYQ==$0 not AW$
>>> DBMS_AW_EXP: BIN$IrRPsbezeYXgU60IBArnYQ==$0 not AW$
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
>>> DBMS_AW_EXP: Ignoring BIA_RTL_MCU1.MRDW_SLS_MRGN_AW
>>> DBMS_AW_EXP: BIN$IrRPsbepeYXgU60IBArnYQ==$0 not AW$
>>> DBMS_AW_EXP: BIN$IrRPsbereYXgU60IBArnYQ==$0 not AW$
>>> DBMS_AW_EXP: BIN$IrRPsbeteYXgU60IBArnYQ==$0 not AW$
>>> DBMS_AW_EXP: BIN$IrRPsbeveYXgU60IBArnYQ==$0 not AW$
>>> DBMS_AW_EXP: BIN$IrRPsbexeYXgU60IBArnYQ==$0 not AW$
>>> DBMS_AW_EXP: BIN$IrRPsbezeYXgU60IBArnYQ==$0 not AW$
. . exported “ODI_TEMP_TST1″.”E$_DWB_POINTS_TRANSACTION”  696.8 MB 3213090 rows
Master table “SYS”.”SYS_EXPORT_SCHEMA_01″ successfully loaded/unloaded

select count(*) from dba_recyclebin;

--640 objects

purge dba_recyclebin;

Then re-run the export. (Note that the datapump command below is different from the one at the beginning of the post; I ran this on a test environment to prove the point and to avoid the need to raise a change.)

 

Starting “SYS”.”SYS_EXPORT_SCHEMA_01″:  /******** AS SYSDBA schemas=ODI_TEMP_TST1 cluster=N directory= content=ALL dumpfile=JH%U.dmp include=table:IN(‘E$_DWB_POINTS_TRANSACTION’) parallel=4 logfile=jh.log
Estimate in progress using BLOCKS method…
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 800 MB
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported “ODI_TEMP_TST1″.”E$_DWB_POINTS_TRANSACTION”  696.8 MB 3213090 rows
Master table “SYS”.”SYS_EXPORT_SCHEMA_01″ successfully loaded/unloaded

Job “SYS”.”SYS_EXPORT_SCHEMA_01″ successfully completed at Mon Dec 5 08:37:04 2016

 

All completed successfully. Having spent so much time trying to fix the parallel issue, and having written this blog entry up as an aide-memoire to myself as much as anything else, I am unlikely to forget how to address ORA-31617 the next time I see it.

 


expdp to ASM and exclude table syntax – two workarounds

Posted by John Hallas on September 2, 2013

A short entry showing two problems I have come across recently with datapump export, and the workarounds I used.

Using ASM disk for exports

Quite often it is hard to find sufficient space to export a large amount of data, especially when trying to use a filesystem, yet with the growing use of ASM there is often plenty of disk space to be found in that area. The stumbling block is that although the dump files themselves can be written into ASM, the ASM area cannot hold ordinary operating-system files such as trace files, audit files, alert logs, tar files and core files, and crucially that includes the Data Pump log file, so you get an error message even though you have created an export directory on the ASM disk group as requested. This is the same from 11gR1 through to 12c.

I have created the folder within the ASM diskgroup and then created the corresponding directory object in the database:

asmcmd -p

mkdir '+DATA/john_exp'

sqlplus

create directory john_exp as '+DATA/john_exp';  -- grant permissions if necessary

expdp schemas=system directory=john_exp dumpfile=john_exp.dmp content=metadata_only  logfile=john_exp.log

Export: Release 11.2.0.3.0 - Production on Mon Sep 2 13:13:59 2013

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Username: / as sysdba

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

With the Partitioning, Automatic Storage Management, OLAP, Data Mining

and Real Application Testing options

ORA-39002: invalid operation

ORA-39070: Unable to open the log file.

ORA-29283: invalid file operation

ORA-06512: at "SYS.UTL_FILE", line 536

ORA-29283: invalid file operation

The trick is to use another, probably pre-existing, directory such as DATA_PUMP_DIR, which sits on a filesystem, and point the logfile there using the syntax logfile=directory:logfile. An alternative is to use the expdp parameter nologfile=yes; a sketch of that follows the example below.

expdp schemas=system directory=john_exp dumpfile=john_exp.dmp content=metadata_only  logfile=data_pump_dir:john_exp.log

Export: Release 11.2.0.3.0 - Production on Mon Sep 2 13:14:20 2013

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Username: / as sysdba

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

With the Partitioning, Automatic Storage Management, OLAP, Data Mining

and Real Application Testing options

Starting "SYS"."SYS_EXPORT_SCHEMA_01":  /******** AS SYSDBA schemas=system directory=john_exp dumpfile=john_exp.dmp content=metadata_only logfile=data_pump_dir:john_exp.log

Processing object type SCHEMA_EXPORT/USER

Processing object type SCHEMA_EXPORT/SYSTEM_GRANT

Processing object type SCHEMA_EXPORT/ROLE_GRANT

Processing object type SCHEMA_EXPORT/DEFAULT_ROLE

….

******************************************************************************

Dump file set for SYS.SYS_EXPORT_SCHEMA_01 is:

+DATA/john_exp/john_exp.dmp

Job "SYS"."SYS_EXPORT_SCHEMA_01" successfully completed at 13:16:49

Syntax to exclude a table in a data pump export

After spending ages trying to get the syntax of an EXCLUDE table parameter working correctly from the command line of a Linux server, I finally gave up and actually read the manual. There I found the following helpful advice:

Some operating systems require that quotation marks on the command line be preceded by an escape character, such as the backslash. If the backslashes were not present, the command-line parser that Export uses would not understand the quotation marks and would remove them, resulting in an error. In general, Oracle recommends that you place such statements in a parameter file because escape characters are not necessary in parameter files.

My example was a schema export from which one very large table did not need to be included. This is the syntax that should work, and which does indeed work from a parameter file, but fails from the command line:

expdp schemas=sm7 directory=data_pump_dir dumpfile=sm7.dmp logfile=sm7 content=all exclude=table:"='SYSATTACHMENT'"

LRM-00116: syntax error at 'table:' following '='

The working parameter file, sm7.par, was called with expdp parfile=sm7.par and contained:

schemas=sm7

directory=data_pump_dir

content=all

dumpfile=sm7.dmp

logfile=sm7.log

exclude=table:"='SYSATTACHMENT'"

For information, this is the revised syntax that works from the command line; however, I for one will be using a parameter file in future.

expdp schemas=sm7 directory=data_pump_dir dumpfile=sm7.dmp logfile=sm7 content=all exclude=table:\"=\'SYSATTACHMENT\'\"


DBMS_DATAPUMP using the API

Posted by John Hallas on May 6, 2009

My challenge was to move a schema from one database to another – simple enough, one would assume, but I had several limitations. I could not use transportable tablespaces because the charactersets were different and I did not want to go through a conversion routine, and the main limitation in any case was the size (2Tb) and the fact that there were multiple partitions, complete with four subpartitions per partition. I decided to use export; a single-file export took 15 hours and created a 700Mb dump file, and the import was less than 50% through after 24 hours, so I knew I needed a more efficient method. I had already arranged for a large filesystem to be mounted on the source database server, with the idea of exporting to that area and then having it remounted on the target server. This was to be repeated against the production database at a later date, so it was worth getting everything scripted.

I generated 4 export parameter files to take out 3 big tables and 'all the rest':

userid=xxxx
directory=movetbs
dumpfile=odsprod_odsalltherest.dmp
schemas=odsprod
exclude=TABLE:"IN ('ODS_SLS_TRAN_HEADER','ODS_SLS_PAYMENT_DETAIL','ODS_SLS_ITEM_DETAIL')"
#exclude=TABLE:"LIKE 'ODS_TEMP_STAGE%'"
logfile=ALLTHEREST.log
content=METADATA_ONLY

userid=xxxx
directory=movetbs
dumpfile=odsprod_odsslscheck.dmp
schemas=odsprod
include=TABLE:"IN ('ODS_SLS_CHECK_ERRORS')"
logfile=ODS_SLS_CHECK_ERRORS.log
content=METADATA_ONLY

userid=xxxx
directory=movetbs
dumpfile=odsprod_odstran.dmp
schemas=odsprod
include=TABLE:"IN ('ODS_SLS_TRAN_HEADER')"
logfile=ODS_SLS_TRAN_HEADER.log

content=METADATA_ONLY

Note that the above examples have the content=metadata_only parameter, as I wanted to get everything working before I started the full exports. I had already experimented with a command-line syntax but discounted it because the line got too long and rather hard to manage; an example is here though:

nohup expdp xxxx directory=movetbs dumpfile=odsprod_odstempstage.dmp schemas=odsprod include=TABLE:\"LIKE \(\'ODS_TEMP_STAGE%\'\)\" logfile=TEMPSTAGE.log &

 

So far so good, but I then realised that the large table (ODS_SLS_ITEM_DETAIL) was over 4 billion rows in size, created a 600Gb file by itself, and was going to be the biggest bottleneck. I did some experimentation with a query parameter on the export, but it took too long to scan the data and was not very efficient. Example provided:

userid=xxxx
directory=movetbs
dumpfile=odsprod_odsitemdetquery.dmp
schemas=odsprod
include=TABLE:"IN ('ODS_SLS_ITEM_DETAIL')"
query=ODSPROD.ODS_SLS_ITEM_DETAIL:" where id_pk <= 5000000"
logfile=ODSITEMDETQUERY.log

I then decided to export by partition, but could not find a way of specifying a partition list using expdp. That is probably because such an option does not exist, even in 11.1.0.7, and I needed to use the DBMS_DATAPUMP API instead. This is documented in Metalink note 470027.1. To cut to the chase, I ended up with 4 export SQL files which split the partitions up by date: 2008a, 2008b, 2009a and 2009b.

cat exp2009a.sql

declare
h1 number;

begin
-- open a table-mode export job
h1 := dbms_datapump.open (operation => 'EXPORT', job_mode => 'TABLE');

-- dump files go to the MOVETBS directory
dbms_datapump.add_file (handle => h1,
filename => 'exp2009a%U.DMP',
directory => 'MOVETBS',
filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);

dbms_datapump.add_file (handle => h1,
directory => 'MOVETBS',
filename => 'EXP2009a.log',
filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);

-- restrict the job to the ODS_SLS_ITEM_DETAIL table in the ODSPROD schema
dbms_datapump.metadata_filter (handle => h1, name => 'SCHEMA_EXPR',
value => 'IN (''ODSPROD'')');

dbms_datapump.metadata_filter (handle => h1,
name => 'NAME_EXPR',
value => 'IN (''ODS_SLS_ITEM_DETAIL'')');

dbms_datapump.set_parallel (handle => h1, degree => 6);

-- export only the named partitions ("more here" stands in for the rest of the list)
dbms_datapump.data_filter (handle => h1,
name => 'PARTITION_LIST',
value => '''ODS_SLS_ITEM_DETAIL_20090101'', ''ODS_SLS_ITEM_DETAIL_20090102'', more here ''ODS_SLS_ITEM_DETAIL_20090227'', ''ODS_SLS_ITEM_DETAIL_20090228''',
table_name => 'ODS_SLS_ITEM_DETAIL',
schema_name => 'ODSPROD');

dbms_datapump.start_job (handle => h1);
dbms_datapump.detach (handle => h1);
end;
/

Running these 4 jobs in parallel, as well as the 3 exports listed above, I managed to get the data exported in ~2.5 hours, which was a considerable improvement on the original 15 hours. I then moved the mountpoint onto the target server and ran impdp jobs, as well as SQL scripts, to load the data.
Examples below:
cat ALLTHEREST.imp

userid=xxxx
directory=movetbs
dumpfile=odsprod_odsalltherest.dmp
full=y
logfile=ALLTHERESTimp.log

cat imp2009a.imp

declare
k1 number;

begin
-- open a table-mode import job
k1 := dbms_datapump.open (operation => 'IMPORT', job_mode => 'TABLE');

dbms_datapump.add_file (handle => k1,
filename => 'exp2009a%U.DMP',
directory => 'MOVETBS',
filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);

dbms_datapump.add_file (handle => k1,
directory => 'MOVETBS',
filename => 'IMP2009a.log',
filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);

dbms_datapump.metadata_filter (handle => k1, name => 'SCHEMA_EXPR',
value => 'IN (''ODSPROD'')');

dbms_datapump.metadata_filter (handle => k1,
name => 'NAME_EXPR',
value => 'IN (''ODS_SLS_ITEM_DETAIL'')');
dbms_datapump.set_parallel (handle => k1, degree => 6);
-- append into the existing table rather than skipping it
dbms_datapump.set_parameter (k1, 'TABLE_EXISTS_ACTION', 'APPEND');

-- import only the named partitions ("MORE HERE" stands in for the rest of the list)
dbms_datapump.data_filter (handle => k1,
name => 'PARTITION_LIST',
value => '''ODS_SLS_ITEM_DETAIL_20090101'', ''ODS_SLS_ITEM_DETAIL_20090102'', MORE HERE ''ODS_SLS_ITEM_DETAIL_20090227'', ''ODS_SLS_ITEM_DETAIL_20090228''',
table_name => 'ODS_SLS_ITEM_DETAIL',
schema_name => 'ODSPROD');

dbms_datapump.start_job (handle => k1);
dbms_datapump.detach (handle => k1);
end;
/

Note the additional line, compared with the export script, which ensures that the load is not skipped because the table already exists:

dbms_datapump.set_parameter (k1, 'TABLE_EXISTS_ACTION', 'APPEND');

The total load now seems to take around 6 hours, which is a big reduction from the 48-60 hours estimated initially. However I am seeing a lot of enq: TM contention, as I have 6 worker processes for each of the 4 imports, all running against the ODS_SLS_ITEM_DETAIL table. I am still trying to work out how to address that and possibly optimise the number of worker processes by changing the PARALLEL parameter (a query to see what the Data Pump sessions are waiting on is sketched below). We have 16 CPUs, but I am more tempted to reduce rather than increase the workers to give less contention. However this load will only happen twice, so perhaps I do not need to spend too much time fine-tuning.
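One quick way to see what the Data Pump master and worker sessions are waiting on is to join DBA_DATAPUMP_SESSIONS to V$SESSION; a minimal sketch:

-- show the current wait event for every Data Pump session
select s.sid, s.serial#, d.job_name, s.event, s.seconds_in_wait
from dba_datapump_sessions d, v$session s
where s.saddr = d.saddr
order by d.job_name, s.sid;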
I hope that this entry has been useful. I had never used the datapump API before so it was a learning curve for me.

PS This is a useful script to show progress on expdp/impdp jobs:

set lines 120
col opname for a25 trunc
col username for a15 trunc
col target for a20
col sid for 999999
col serial# for 999999
col "%DONE" for a8
select b.username, a.sid, b.opname, b.target,
       round(b.SOFAR*100 / b.TOTALWORK,0) || '%' as "%DONE",
       b.TIME_REMAINING,
       to_char(b.start_time,'YYYY/MM/DD HH24:MI:SS') START_TIME
from V$SESSION_LONGOPS b, V$SESSION a
where a.sid = b.sid
and b.TIME_REMAINING > 0
order by b.SOFAR/b.TOTALWORK;


Where has consistent=y gone when using Datapump export

Posted by John Hallas on December 5, 2008

Well, it is not needed any more, according to Oracle. Let's take a look at a bit of history. From 9i, when flashback query came in, it was possible to export as at a particular SCN by setting the parameter flashback_scn or flashback_time. Below you can see the relevant options from a 10g exp help=y command:

CONSISTENT cross-table consistency(N)

FLASHBACK_SCN SCN used to set session snapshot back to

FLASHBACK_TIME time used to get the SCN closest to the specified time

So both consistent=y and flashback options were available (provided you were using undo management).

Roll forward to 10g and onwards (and bear in mind that exp/imp are rumoured to be unsupported from 11gR2) and it appears obvious that using the flashback parameters is the way to go to get a consistent export using Datapump.

Flashback_scn

There are at least 3 ways of getting the current SCN from a database:

select current_scn from v$database;

select dbms_flashback.get_system_change_number from dual;

using oradebug on kcsgscn_ (for full details see http://www.eygle.com/archives/2006/01/oracle_howto_get_scn.html)

So if we want to run a large export whilst the database is in use then ideally we should use one of the two flashback parameters, preferably supplied from a parameter file (my personal preference), as sketched below.
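A minimal parameter file sketch using flashback_scn (the schema, file names and SCN are illustrative; the SCN would come from one of the queries above):

userid=system
schemas=scott
directory=data_pump_dir
dumpfile=scott_consistent.dmp
logfile=scott_consistent.log
flashback_scn=1234567890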

Flashback_time

There seem to be a few notes around about issues with the usage of this parameter, and it can be tricky to get the syntax right first time:

userid=system schemas=system dumpfile=test.dmp FLASHBACK_TIME="to_timestamp('05-DEC-2008 14:13:00','DD-MON-YYYY HH24:MI:SS')"

There is an interesting note http://forums.oracle.com/forums/thread.jspa?threadID=623510 that talks about export consistency when using flashback_time

Here is a worked example of using flashback_time to copy a table as at a point in the past.

create table system.test as select * from dba_objects where rownum <101;

Table created.

SQL> select count(*) from test;

100

Export the table (100 rows) – noting the time.

expdp parfile=parfile.dat   (parfile.dat contains: userid=system/xxxx tables=system.test dumpfile=test.dmp)

Export: Release 11.1.0.7.0 – 64bit Production on Friday, 05 December, 2008 19:35:18

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 – 64bit Production

With the Partitioning, OLAP, Data Mining and Real Application Testing options

Starting “SYSTEM”.”SYS_EXPORT_TABLE_01″: system/******** parfile=parfile.dat

Estimate in progress using BLOCKS method…

Processing object type TABLE_EXPORT/TABLE/TABLE_DATA

Total estimation using BLOCKS method: 64 KB

Processing object type TABLE_EXPORT/TABLE/TABLE

. . exported “SYSTEM”.”TEST” 18.58 KB 100 rows

Master table “SYSTEM”.”SYS_EXPORT_TABLE_01″ successfully loaded/unloaded

******************************************************************************

Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:

/app/oracle/admin/MDWDEV2A/dpdump/test.dmp

Job “SYSTEM”.”SYS_EXPORT_TABLE_01″ successfully completed at 19:35:37

Delete some rows

SQL> delete from system.test where rownum < 50;

49 rows deleted.

SQL> commit;

Commit complete.

SQL> select count(*) from system.test;

51

Now try the export again, using a flashback_time set to just before the rows were deleted (i.e. around the time of the first export).

$ rm test.dmp

$ expdp parfile=parfile.dat   (parfile.dat now contains: userid=system tables=system.test dumpfile=test.dmp FLASHBACK_TIME="to_timestamp('05-DEC-2008 19:35:00','DD-MON-YYYY HH24:MI:SS')")

Export: Release 11.1.0.7.0 – 64bit Production on Friday, 05 December, 2008 19:38:30

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 – 64bit Production

With the Partitioning, OLAP, Data Mining and Real Application Testing options

Starting “SYSTEM”.”SYS_EXPORT_TABLE_01″: system/******** parfile=parfile.dat

Estimate in progress using BLOCKS method…

Processing object type TABLE_EXPORT/TABLE/TABLE_DATA

Total estimation using BLOCKS method: 64 KB

Processing object type TABLE_EXPORT/TABLE/TABLE

. . exported “SYSTEM”.”TEST” 18.58 KB 100 rows

Master table “SYSTEM”.”SYS_EXPORT_TABLE_01″ successfully loaded/unloaded

******************************************************************************

Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:

/app/oracle/admin/MDWDEV2A/dpdump/test.dmp

There are a lot of permutations and possibilities in combining flashback technology with Datapump, and I have only explored a small area here.
