Datapump and Traditional Export

Datapump

Data Pump runs as a server process, so it can directly access the datafiles and the SGA; it does not depend on the client user session.

A Data Pump job can be launched, stopped, and restarted at any time using the interactive commands:
STOP_JOB
START_JOB
KILL_JOB
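
For example, a running job can be paused and later resumed by re-attaching to it from the client. A minimal sketch, reusing the job name and credentials shown elsewhere in this post:

expdp system/SYSTEMPASSWORD@DATABASE_NAME ATTACH=SYS_EXPORT_SCHEMA_03
Export> STOP_JOB=IMMEDIATE
expdp system/SYSTEMPASSWORD@DATABASE_NAME ATTACH=SYS_EXPORT_SCHEMA_03
Export> START_JOB

STOP_JOB pauses the job so it can be restarted later; KILL_JOB terminates it permanently and removes the master table.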


Data Pump jobs can be monitored from within the database using these views:
DBA_DATAPUMP_SESSIONS
V$SESSION_LONGOPS
DBA_DATAPUMP_JOBS
SQL> select JOB_NAME,STATE,DEGREE,DATAPUMP_SESSIONS from dba_datapump_jobs;

JOB_NAME                       STATE                          DEGREE DATAPUMP_SESSIONS
------------------------------ ----------------------------- ------ -----------------
SYS_EXPORT_SCHEMA_03           NOT RUNNING                         0                 0
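
The progress of a long-running job can also be tracked from V$SESSION_LONGOPS; a minimal sketch (the column list is illustrative):

SQL> select sid, serial#, opname, sofar, totalwork,
            round(sofar/totalwork*100, 2) pct_done
       from v$session_longops
      where opname like 'SYS_EXPORT%'
        and totalwork > 0
        and sofar <> totalwork;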

It does not support sequential media such as tapes or pipes.

More undo will be used.

Parallel execution is supported (PARALLEL parameter).

If a row violation occurs (for example during an APPEND or TRUNCATE load), the complete job stops.

Metadata is saved in the dump file in XML format.
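
Because the metadata is stored as XML inside the dump file, it can be regenerated as DDL without importing any data by using the SQLFILE parameter of impdp. A minimal sketch, reusing the directory and dump file names from the example below (the SQLFILE name is illustrative):

impdp system/SYSTEMPASSWORD@DATABASE_NAME DIRECTORY=DPUMP123 DUMPFILE=exp_XXXXXXX.dmp SCHEMAS=SCOTT_SCHEMA SQLFILE=scott_ddl.sql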


Export

Traditional export (exp) runs as a client (user) process.
It has no self-tuning capabilities.
It supports sequential media such as tapes and pipes.

Export uses less undo

Single stream execution.

If a row violation occurs, only that row is rejected and the job continues with the remaining rows.

Metadata is saved as DDL.
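
For comparison, a minimal traditional export of the same schema could look like this (file names are illustrative); note that exp, being a client utility, writes the dump file on the client side rather than through a directory object:

exp system/SYSTEMPASSWORD@DATABASE_NAME OWNER=SCOTT_SCHEMA FILE=exp_scott.dmp LOG=exp_scott.log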

**EXPDP PERFORMANCE TUNING**

If a normal expdp command takes 6 hours to back up a schema (or any other object), the FILESIZE and PARALLEL parameters can be used to speed up the export by adding multiple OS processes that write the dump files.

After using those parameters, the same export completed in 3 hours.

EXAMPLE

Before:
expdp system/SYSTEMPASSWORD@DATABASE_NAME DIRECTORY=DPUMP123 DUMPFILE=exp_XXXXXXX.dmp SCHEMAS=SCOTT_SCHEMA compression=all logfile=exp_XXXXXXXXXXX.log

After:
expdp system/SYSTEMPASSWORD@DATABASE_NAME DIRECTORY=DPUMP123 DUMPFILE=exp_XXXXXXX_%U.dmp filesize=5000M parallel=4 SCHEMAS=SCOTT_SCHEMA compression=all logfile=exp_XXXXXXXXXXX.log
FILESIZE limits the size of each dump file piece, and the %U substitution variable in DUMPFILE lets Data Pump create as many files as the parallel workers need.
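
While the parallel export runs, you can confirm that multiple Data Pump sessions were spawned by joining DBA_DATAPUMP_SESSIONS to V$SESSION; a minimal sketch:

SQL> select d.job_name, d.session_type, s.sid, s.serial#
       from dba_datapump_sessions d, v$session s
      where d.saddr = s.saddr;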



