INCLUDE and EXCLUDE:
The INCLUDE and EXCLUDE parameters can be used to limit the export/import to specific objects. When the INCLUDE parameter is used, only those objects specified by it will be included in the export/import.
When the EXCLUDE parameter is used, all objects except those specified by it will be included in the export/import.
The two parameters are mutually exclusive, so use whichever one requires the fewest entries to achieve the result you need. The basic syntax for both parameters is the same.
INCLUDE=object_type[:name_clause] [, ...]
EXCLUDE=object_type[:name_clause] [, ...]
The following code shows how they can be used as command line parameters.
expdp scott/tiger@db10g schemas=SCOTT include=TABLE:"IN ('EMP', 'DEPT')" directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log
Multiple object types can be targeted in one statement using the LIKE and IN operators. For example:
INCLUDE=TABLE,VIEW,PACKAGE:"LIKE '%API'"
is equivalent to the following three individual clauses.
INCLUDE=TABLE
INCLUDE=VIEW
INCLUDE=PACKAGE:"LIKE '%API'"
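Because the name clauses contain quotes that most shells mangle, it is often easier to put INCLUDE clauses in a parameter file. The following is a minimal sketch; the parameter file name is illustrative and the TEST_DIR directory object is assumed to exist:
# include_scott.par (hypothetical parameter file)
schemas=SCOTT
directory=TEST_DIR
dumpfile=SCOTT.dmp
logfile=expdpSCOTT.log
include=TABLE
include=VIEW
include=PACKAGE:"LIKE '%API'"
expdp scott/tiger@db10g parfile=include_scott.par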
expdp scott/tiger@db10g schemas=SCOTT exclude=TABLE:"= 'BONUS'"
directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log
EXCLUDE=SCHEMA:"LIKE 'SYS%'"
EXCLUDE=SCHEMA:"IN ('OUTLN','SYSTEM','SYSMAN','FLOWS_FILES','APEX_030200',
EXCLUDE=SCHEMA:"IN ('OUTLN','SYSTEM','SYSMAN','FLOWS_FILES','APEX_030200',
'APEX_PUBLIC_USER','ANONYMOUS')"
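The valid object type paths for INCLUDE and EXCLUDE can be queried from the DATABASE_EXPORT_OBJECTS, SCHEMA_EXPORT_OBJECTS and TABLE_EXPORT_OBJECTS views. A sketch of such a query, with an illustrative filter:
SELECT object_path, comments
FROM   schema_export_objects
WHERE  object_path LIKE '%TABLE%';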
CONTENT and QUERY:
The CONTENT parameter allows you to alter the contents of the export. To capture the data without the metadata, use the DATA_ONLY parameter value.
expdp system/password@db10g schemas=SCOTT directory=TEST_DIR
dumpfile=scott_data.dmp logfile=expdp.log content=DATA_ONLY
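Conversely, the METADATA_ONLY parameter value exports the object definitions without any row data. A sketch along the same lines as the previous command, with an illustrative dump file name:
expdp system/password@db10g schemas=SCOTT directory=TEST_DIR
dumpfile=scott_meta.dmp logfile=expdp.log content=METADATA_ONLY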
The QUERY parameter allows you to filter the rows exported from one or more tables. The following example does a full database export, but doesn't include the data for the EMP and DEPT tables.
expdp system/password@db10g full=Y directory=TEST_DIR dumpfile=full.dmp logfile=expdp_full.log query='SCOTT.EMP:"WHERE ROWNUM = 0",SCOTT.DEPT:"WHERE ROWNUM = 0"'
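The nested quoting is fragile on the command line, so QUERY clauses are also commonly placed in a parameter file. A sketch, assuming one QUERY line per table is accepted in the parameter file, with illustrative file names:
# empty_emp_dept.par (hypothetical parameter file)
full=Y
directory=TEST_DIR
dumpfile=full.dmp
logfile=expdp_full.log
query=SCOTT.EMP:"WHERE ROWNUM = 0"
query=SCOTT.DEPT:"WHERE ROWNUM = 0"
expdp system/password@db10g parfile=empty_emp_dept.par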
Flashback Exports:
The exp utility used the CONSISTENT=Y parameter to indicate the export should be consistent to a point in time. By default the expdp utility exports are only consistent on a per table basis. If you want all tables in the export to be consistent to the same point in time, you need to use the FLASHBACK_SCN or FLASHBACK_TIME parameter.
The FLASHBACK_TIME parameter value is converted to the approximate SCN for the specified time.
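For example, a schema export consistent to a specific point in time might look like the following sketch. The quoting is easiest in a parameter file, and the timestamp shown is illustrative:
# flashback_scott.par (hypothetical parameter file)
schemas=SCOTT
directory=TEST_DIR
dumpfile=SCOTT.dmp
logfile=expdpSCOTT.log
flashback_time="TO_TIMESTAMP('01-06-2008 09:00:00', 'DD-MM-YYYY HH24:MI:SS')"
expdp system/password@db10g parfile=flashback_scott.par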
PARALLEL:
Data pump performance can be improved by using the PARALLEL parameter. This should be used in conjunction with the "%U" wildcard in the DUMPFILE parameter to allow multiple dump files to be created or read. The same wildcard can be used during the import to allow you to reference multiple files.
expdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR parallel=4
dumpfile=SCOTT_%U.dmp logfile=expdpSCOTT.log
impdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR parallel=4
dumpfile=SCOTT_%U.dmp logfile=impdpSCOTT.log
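With PARALLEL=4, the "%U" wildcard expands to a two-digit incrementing number, so the export above can produce up to four files named SCOTT_01.dmp through SCOTT_04.dmp, and the import reads back the same set.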
COMPRESS
-----------------
Exp compress=y :
-----------------------
Specifies how Export and Import manage the initial extent for table data.
The default, COMPRESS=y, causes Export to flag table data for consolidation into one initial extent upon Import. If extent sizes are large (for example, because of the PCTINCREASE parameter), the allocated space will be larger than the space required to hold the data.
If you specify COMPRESS=n, Export uses the current storage parameters, including the values of initial extent size and next extent size. The values of the parameters may be the values specified in the CREATE TABLE or ALTER TABLE statements or the values modified by the database system. For example, the NEXT extent size value may be modified if the table grows and if the PCTINCREASE parameter is nonzero.
In other words, Oracle calculates all currently allocated space and uses that as the size of the initial extent for the target object.
Consider a segment originally created with INITIAL=10M, NEXT=5M and MINEXTENTS=6, so the segment size is 10 + (5 * 5) = 35M. If you export this with COMPRESS=y and then import it, all the extents are consolidated into one, giving INITIAL=35M, NEXT=5M and MINEXTENTS=6, for a segment size of 35 + (5 * 5) = 60M. The import may then fail, because you may not be able to allocate a 60M segment.
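A minimal sketch of the two export variants being compared below, with illustrative file and table names:
exp scott/tiger file=emp_compress.dmp tables=EMP compress=y
exp scott/tiger file=emp_nocompress.dmp tables=EMP compress=n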
Case 1: COMPRESS=y
The actual data in the table is only 4 MB, but because the consolidated initial extent reflects all previously allocated space, the table occupies 7 MB after import.
Case 2: COMPRESS=n
The size of the INITIAL extent is not altered, and the space occupied matches the total data in the table: 4 MB.
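You can verify the effect by comparing the storage figures before the export and after the import using a standard dictionary view; the segment name here is illustrative:
SELECT segment_name, initial_extent, next_extent, extents, bytes
FROM   user_segments
WHERE  segment_name = 'EMP';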
Exp Consistent=n :
-----------------------
Specifies whether or not Export uses the SET TRANSACTION READ ONLY statement to ensure that the data seen by Export is consistent to a single point in time and does not change during the execution of the exp command. You should specify CONSISTENT=y when you anticipate that other applications will be updating the target data after an export has started.
If you use CONSISTENT=n, each table is usually exported in a single transaction. However, if a table contains nested tables, the outer table and each inner table are exported as separate transactions. If a table is partitioned, each partition is exported as a separate transaction.
Therefore, if nested tables and partitioned tables are being updated by other applications, the data that is exported could be inconsistent. To minimize this possibility, export those tables at a time when updates are not being done.
Table 1-2 shows a sequence of events by two users: user1 exports partitions in a table and user2 updates data in that table.
Table 1-2 Sequence of Events During Updates by Two Users

Time  User1                     User2
----  ------------------------  -------------------
1     Begins export of TAB:P1   No activity
2     No activity               Updates TAB:P2
                                Updates TAB:P1
                                Commits transaction
3     Ends export of TAB:P1     No activity
4     Exports TAB:P2            No activity
If the export uses CONSISTENT=y, none of the updates by user2 are written to the export file.
If the export uses CONSISTENT=n, the updates to TAB:P1 are not written to the export file. However, the updates to TAB:P2 are written to the export file because the update transaction is committed before the export of TAB:P2 begins. As a result, the user2 transaction is only partially recorded in the export file, making it inconsistent.
If you use CONSISTENT=y and the volume of updates is large, the rollback segment usage will be large. In addition, the export of each table will be slower because the rollback segment must be scanned for uncommitted transactions.
Keep in mind the following points about using CONSISTENT=y:
CONSISTENT=y is unsupported for exports that are performed when you are connected as user SYS or you are using AS SYSDBA, or both.
Export of certain metadata may require the use of the SYS schema within recursive SQL. In such situations, the use of CONSISTENT=y will be ignored. Oracle Corporation recommends that you avoid making metadata changes during an export process in which CONSISTENT=y is selected.
To minimize the time and space required for such exports, you should export tables that need to remain consistent separately from those that do not.
For example, export the emp and dept tables together in a consistent export, and then export the remainder of the database in a second pass.
A "snapshot too old" error occurs when rollback space is used up, and space taken up by committed transactions is reused for new transactions. Reusing space in the rollback segment allows database integrity to be preserved with minimum space requirements, but it imposes a limit on the amount of time that a read-consistent image can be preserved.
If a committed transaction has been overwritten and the information is needed for a read-consistent view of the database, a "snapshot too old" error results.
To avoid this error, you should minimize the time taken by a read-consistent export. (Do this by restricting the number of objects exported and, if possible, by reducing the database transaction rate.) Also, make the rollback segment as large as possible.
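A sketch of a consistent schema-level export with the original exp utility, with an illustrative file name:
exp system/password@db10g owner=SCOTT file=scott_consistent.dmp log=exp_scott.log consistent=y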
Exp Direct=n :
---------------------
Default: n
Specifies whether you use direct path or conventional path Export.
DIRECT=y causes Export to extract data by reading the data directly, bypassing the SQL command-processing layer (evaluation buffer). This method can be much faster than a conventional path Export.
Difference between Conventional path Export & Direct path Export
Conventional path Export.
Conventional path Export uses the SQL SELECT statement to extract data from tables. Data is read from disk into the buffer cache, and rows are transferred to the evaluating buffer. The data, after passing expression evaluation, is transferred to the Export client, which then writes the data into the export file.
Direct path Export.
When using a direct path Export, the data is read from disk directly into the export session's program global area (PGA): the rows are transferred directly to the Export session's private buffer. This also means that the SQL command-processing layer (evaluation buffer) can be bypassed, because the data is already in the format that Export expects. As a result, unnecessary data conversion is avoided. The data is transferred to the Export client, which then writes the data into the export file.
The parameter DIRECT specifies whether you use the direct path Export (DIRECT=y) or the conventional path Export (DIRECT=n).
You may be able to improve performance by increasing the value of the RECORDLENGTH parameter when you invoke a direct path Export. Your exact performance gain depends upon the following factors:
- DB_BLOCK_SIZE
- the types of columns in your table
- your I/O layout (the drive receiving the export file should be separate from the disk drive where the database files reside)
For example, invoking a direct path Export with a maximum I/O buffer of 64 KB can improve the performance of the Export by almost 50%. This can be achieved by specifying the additional Export parameters DIRECT and RECORDLENGTH, as in the sketch below.
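A sketch of a direct path export with the I/O buffer at its 64 KB maximum, with an illustrative file name:
exp system/password@db10g full=y file=full_direct.dmp log=exp_full.log direct=y recordlength=65535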
LIMITATIONS
1) A direct path Export does not influence the time it takes to import the data. That is, an export file created using direct path Export or conventional path Export will take the same amount of time to import.
2) You cannot use the DIRECT=y parameter when exporting in transportable tablespace mode. You can use DIRECT=y when exporting in full, user, or table mode.
3) The QUERY parameter applies only to conventional path Export. It cannot be specified in a direct path Export (DIRECT=y).
4) A direct path Export can only export the data when the NLS_LANG environment variable of the session invoking the export is equal to the database character set. If NLS_LANG is not set (the default is AMERICAN_AMERICA.US7ASCII) or is set to a different character set, Export displays the warning EXP-41 and aborts with EXP-0 (see the sketch after this list).
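For example, on a Unix-like system you would set NLS_LANG to the database character set before invoking the export. The character set shown here is illustrative and must match your actual database character set:
export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
exp system/password@db10g full=y file=full_direct.dmp log=exp_full.log direct=y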