
BLOCK CORRUPTION and BMR

 BLOCK CORRUPTION:

==================

Database maintenance problems may occur unexpectedly, and block corruption is one such case.

Whenever we find that we are unable to view data as expected, we may be observing block corruption.

Block corruption may happen in the below scenarios:

1. Hard disk failures

2. Power failures

3. Network issues

4. OS issues

 

Methods to identify block corruption:

1. Querying V$DATABASE_BLOCK_CORRUPTION

2. DBVERIFY (OS command): dbv file=<datafile location>

3. Using RMAN (VALIDATE DATAFILE / VALIDATE TABLESPACE)


Block media recovery:

To perform block media recovery (BMR), the database must be in MOUNT or OPEN state, and RMAN needs a backup containing a good copy of the corrupted blocks.

Connect to the target database with RMAN,

then run the RECOVER command for the corrupted blocks.
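The steps above can be sketched as follows; this is a minimal example assuming a SYSDBA/RMAN connection to a live database, and the file and block numbers shown are hypothetical:

```sql
-- 1. Populate V$DATABASE_BLOCK_CORRUPTION by validating (example: datafile 4):
RMAN> VALIDATE DATAFILE 4;

-- 2. Review the corrupted blocks found:
SQL> SELECT file#, block#, blocks, corruption_type
     FROM v$database_block_corruption;

-- 3. Recover one specific block, or everything listed in the view:
RMAN> RECOVER DATAFILE 4 BLOCK 123;
RMAN> RECOVER CORRUPTION LIST;
```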

CHECKPOINT AND BEGIN BACKUP MODE USES

 CHECKPOINT

============

CKPT (checkpoint) is a mandatory background process. It works together with DBWR, which writes dirty buffers from the buffer cache to the datafiles.

A checkpoint synchronizes the buffer cache with the datafiles.

CKPT itself only updates header information: once DBWR completes writing to the datafiles, CKPT records the checkpoint position in the control file and datafile headers.

At a full checkpoint, CKPT updates the highest checkpoint SCN in all datafile headers.

A checkpoint occurs in the below scenarios:

1. Normal database shutdown (NORMAL/IMMEDIATE/TRANSACTIONAL)

2. A checkpoint manually triggered by the DBA (ALTER SYSTEM CHECKPOINT)

3. Every three seconds (the incremental checkpoint heartbeat), or when LOG_CHECKPOINT_TIMEOUT / LOG_CHECKPOINT_INTERVAL is reached

4. If FAST_START_MTTR_TARGET is defined, when the target recovery time is reached

5. ALTER DATABASE BEGIN BACKUP

6. When a datafile goes offline
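You can watch scenario 2 in action; this is a small sketch, assuming a SYSDBA connection to a live instance, that shows the checkpoint SCN in the datafile headers advancing after a manual checkpoint:

```sql
-- Record the current checkpoint SCN from the datafile headers:
SELECT MAX(checkpoint_change#) FROM v$datafile_header;

-- Trigger a full checkpoint manually:
ALTER SYSTEM CHECKPOINT;

-- The checkpoint SCN should now be higher:
SELECT MAX(checkpoint_change#) FROM v$datafile_header;
```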


Reorg tables&indexes in EBS and Normal DB

Case:

Usually in an OLTP environment like EBS Applications, tables often get fragmented due to the many DML activities that happen. Fragmented tables cause queries on those tables to slow down. It is very important to de-fragment these tables and reclaim the fragmented space from these objects.

For EBS we have also seen that gathered statistics, indexing and proper SQL tuning are usually enough to achieve and maintain acceptable performance, but sometimes it is necessary to reorg a table.

One primary cause of fragmentation is deletion: running DELETE on a table removes the rows but does not free the allocated space, nor does it lower the high water mark.

We have seen that reorgs are required more often in Demantra applications; since Demantra is both an OLTP and a data warehouse application, we must tune accordingly so that query run times stay optimal.

Although this article focuses on EBS/Demantra application tables, it holds true for all Oracle databases.

WHAT CAUSES FRAGMENTATION

As DML activity happens in the database, discontiguous chunks, or fragments, of unused space can appear within the tablespace, along with fragmentation within the table rows.

When you insert or update rows in a table

As rows are added to tables, the table expands into unused space within the tablespace. It naturally fragments as discontiguous data blocks are used to receive new rows. Updating table records may also cause row chaining if the updated row can't fit into the same data block.

When you delete rows from a table

On deletion, a table may coalesce extents, releasing unused space back into the tablespace. But heavy deletes leave the high water mark behind at a high value, which slows down full-table scans since Oracle must read all blocks up to the high water mark.

WHY FRAGMENTATION IS BAD FOR DATABASE

Fragmentation can make a database run inefficiently.

a) Negative performance impact – SQL statements that perform full scans and large index range scans may run more slowly against a fragmented table. When rows are not stored contiguously, or when rows are split across more than one block, performance decreases because those rows require additional block accesses.

b) Wasted disk space – you have space on your disk which your database cannot reuse.

 

REORG PROCESS

The main goal of table reorganization is to reduce IO when accessing the big database tables.

1. Reorders the table data according to the primary key index.
2. Column reordering to push columns that have no data, nulls, to the end of the table row

Column reordering can be very useful for tables that have 300+ columns where many of the columns are null. When the null columns are pushed to the end of the row, the read operation becomes streamlined, increasing performance.

We usually follow the process below to counter table fragmentation. We have also included some useful scripts related to fragmentation at the end of this article.

 

STEP 1) GATHER STATISTICS

First you need to check the exact difference between the table's actual size (dba_segments) and its stats size (dba_tables). The difference between these values reports the actual fragmentation to us. This means we need up-to-date stats in dba_tables for the tables.

To understand how we collect latest statistics in EBS, please see this earlier article Gather Statistics in R12 (and 11i)
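The step 1 comparison can be done for a single table with a query like the following; a sketch assuming fresh statistics, where &owner and &table are SQL*Plus substitution prompts:

```sql
-- Compare allocated segment size against the size implied by the stats;
-- a large gap suggests fragmentation below the high water mark.
SELECT ROUND(s.bytes/1024/1024/1024, 2)                    segment_gb,
       ROUND(t.num_rows*t.avg_row_len/1024/1024/1024, 2)   actual_data_gb
FROM   dba_segments s
JOIN   dba_tables   t
       ON t.owner = s.owner AND t.table_name = s.segment_name
WHERE  s.owner = UPPER('&owner')
AND    s.segment_name = UPPER('&table');
```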

 

STEP 2) CHECK FOR FRAGMENTATION

Execute Script 1 provided below to find the fragmented tables

It is important that you execute step 1 to gather statistics before you run this script, or else the results will be inaccurate.

This script will show you which tables are most fragmented. Identify the tables that are frequently used in your problematic long-running queries and target those for the reorg process.

Please note that it is not always a good idea to reorganize a partitioned table. Partitioning of data is considered an efficient data organization mechanism which boosts query performance.

 

STEP 3) REORG THE IDENTIFIED FRAGMENTED TABLES

We have multiple options to reorganize fragmented tables:

 METHOD 1. Alter table move (to another tablespace, or same tablespace) and rebuild indexes:-

 METHOD 2. Export and import the table

 METHOD 3. Shrink command (applicable to tables in tablespaces with automatic segment space management)
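Method 3 can be sketched as below; this assumes the table lives in an ASSM tablespace, and row movement must be enabled before the shrink:

```sql
-- Shrink reclaims space below the high water mark online:
ALTER TABLE <table_name> ENABLE ROW MOVEMENT;

-- CASCADE also shrinks the dependent indexes:
ALTER TABLE <table_name> SHRINK SPACE CASCADE;
```

Unlike ALTER TABLE MOVE, a shrink does not leave the indexes unusable.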

 

Method 1 is most popular and is described below:

 

METHOD 1. Alter table move

 

A) Check Table size and Fragmentation in table

It is a good idea to check and record the current size of the table and its fragmentation using script 1 provided below.

 

B) Collect indexes details

Execute below command to find the indexes details

select index_name,status from dba_indexes where table_name like '&table_name';

 

C) Move table in to same or new tablespace

For moving into same tablespace execute below:

alter table <table_name> move;

For moving into another tablespace, first find the current size of your table from dba_segments and check whether another tablespace has enough free space available:

alter table <table_name> enable row movement;

alter table <table_name> move tablespace <new_tablespace_name>;

After that move back the table to original tablespace

alter table <table_name> move tablespace <old_tablespace_name>;

 

D) Rebuild all indexes

We need to rebuild all the indexes because the move command makes all the indexes unusable. Run the alter index command one by one for each index.

select status,index_name from dba_indexes where table_name = '&table_name';

alter index <INDEX_NAME> rebuild online; 

select status,index_name from dba_indexes where table_name = '&table_name';
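When a table has many indexes, the one-by-one rebuilds can be looped instead; a hedged sketch in PL/SQL (assumes the DBA_INDEXES privilege and that &table_name is entered in uppercase):

```sql
-- Rebuild every UNUSABLE index on the given table:
BEGIN
  FOR i IN (SELECT owner, index_name
            FROM   dba_indexes
            WHERE  table_name = UPPER('&table_name')
            AND    status = 'UNUSABLE')
  LOOP
    EXECUTE IMMEDIATE 'ALTER INDEX "' || i.owner || '"."' ||
                      i.index_name || '" REBUILD ONLINE';
  END LOOP;
END;
/
```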

 

E) Gather table stats

For an EBS application's database we use the FND_STATS package:

exec fnd_stats.gather_table_stats('&owner_name','&table_name');

For a normal Oracle database, we use DBMS_STATS:

exec dbms_stats.gather_table_stats('&owner_name','&table_name');

 

F) Check Table size and Fragmentation in table

Now again check table size using script 1.

In our case we were able to reduce the table size from 4 GB to 0.15 GB as the table was highly fragmented.

It is also a good idea to check for any new invalid objects in the database and run utlrp.sql to compile them.

 

IMPORTANT SCRIPTS

Some good scripts related to re-org:

Script 1: To locate highly fragmented tables

select
 table_name,
 round(((blocks*8)/1024/1024),2) "size (gb)",
 round(((num_rows*avg_row_len/1024))/1024/1024,2) "actual_data (gb)",
 round((((blocks*8)) - ((num_rows*avg_row_len/1024)))/1024/1024,2) "wasted_space (gb)",
 round(((((blocks*8)-(num_rows*avg_row_len/1024))/(blocks*8))*100 -10),2) "reclaimable space %",
 partitioned
from
 dba_tables
where
 (round((blocks*8),2) > round((num_rows*avg_row_len/1024),2))
order by 4 desc;
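Note that script 1 hardcodes an 8 KB block size (the blocks*8 terms). A variant that reads the actual block size per tablespace, a sketch assuming access to dba_tablespaces:

```sql
-- Same idea as script 1, but using the real block size of each tablespace:
SELECT t.table_name,
       ROUND(t.blocks*ts.block_size/1024/1024/1024, 2)       "size (gb)",
       ROUND(t.num_rows*t.avg_row_len/1024/1024/1024, 2)     "actual_data (gb)",
       ROUND((t.blocks*ts.block_size -
              t.num_rows*t.avg_row_len)/1024/1024/1024, 2)   "wasted_space (gb)"
FROM   dba_tables t
JOIN   dba_tablespaces ts
       ON ts.tablespace_name = t.tablespace_name
WHERE  t.blocks*ts.block_size > t.num_rows*t.avg_row_len
ORDER BY 4 DESC;
```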

 

Script 2: To find how are data blocks used for a specific table

set serveroutput on

declare
 v_unformatted_blocks number;
 v_unformatted_bytes number;
 v_fs1_blocks number;
 v_fs1_bytes number;
 v_fs2_blocks number;
 v_fs2_bytes number;
 v_fs3_blocks number;
 v_fs3_bytes number;
 v_fs4_blocks number;
 v_fs4_bytes number;
 v_full_blocks number;
 v_full_bytes number;
begin
 dbms_space.space_usage (
  'APPLSYS',
  'FND_CONCURRENT_REQUESTS',
  'TABLE',
  v_unformatted_blocks,
  v_unformatted_bytes,
  v_fs1_blocks,
  v_fs1_bytes,
  v_fs2_blocks,
  v_fs2_bytes,
  v_fs3_blocks,
  v_fs3_bytes,
  v_fs4_blocks,
  v_fs4_bytes,
  v_full_blocks,
  v_full_bytes);
 dbms_output.put_line('Unformatted Blocks = '||v_unformatted_blocks);
 dbms_output.put_line('Blocks with 00-25% free space = '||v_fs1_blocks);
 dbms_output.put_line('Blocks with 26-50% free space = '||v_fs2_blocks);
 dbms_output.put_line('Blocks with 51-75% free space = '||v_fs3_blocks);
 dbms_output.put_line('Blocks with 76-100% free space = '||v_fs4_blocks);
 dbms_output.put_line('Full Blocks = '||v_full_blocks);
end;
/

 

This will give output like below:

Unformatted Blocks = 64
Blocks with 00-25% free space = 0
Blocks with 26-50% free space = 516
Blocks with 51-75% free space = 282
Blocks with 76-100% free space = 282
Full Blocks = 10993

PL/SQL procedure successfully completed.

 

Note

How to Deallocate Unused Space from a Table, Index or Cluster. (Doc ID 115586.1)
How to Determine Real Space used by a Table (Below the High Water Mark) (Doc ID 77635.1)
Reclaiming Unused Space in an E-Business Suite Instance Tablespace (Doc ID 303709.1)
How to Re-Organize a Table Online (Doc ID 177407.1)
Reorg Failure: Demantra Reorg Failing On SALES_DATA (Doc ID 2209718.1)
Demantra Table Reorganization, Fragmentation, Null Columns, Primary Key, Editioning, Cluster Factor, PCT Free, Freelist, Initrans, Automatic Segment Management (ASM), Blocksize…. (Doc ID 1990353.1)
SEGMENT SHRINK and Details. (Doc ID 242090.1)

 

  

Startup Upgrade Mode

What does the "startup upgrade" command do?  How is the startup upgrade different from a normal startup?

Answer:  Starting in 10g, the "startup upgrade" command is used during upgrade procedures.  It differs from a normal startup because only certain operations are permitted. Once the database is started in upgrade mode, only queries on fixed views execute without errors until after the catctl.pl script is run.  Before running catctl.pl, queries on any other view, or the use of PL/SQL, return an error.

Start the database in upgrade mode for a multitenant container database (CDB):

SQL> alter pluggable database all open upgrade;

For a non-CDB issue this startup command:

SQL> startup upgrade

Pre-upgrade checks include:

SQL> STARTUP UPGRADE
SQL> SPOOL pre_upgrade_check.log
SQL> @?/rdbms/admin/utlu111i.sql
SQL> SPOOL OFF

########################

[oracle3@servername admin]$ cat utlip.sql

Rem Copyright (c) 1998, 2007, Oracle. All rights reserved.
Rem
Rem   NAME
Rem     utlip.sql - UTiLity script to Invalidate Pl/sql
Rem
Rem   DESCRIPTION
Rem
Rem     *WARNING*   *WARNING*  *WARNING*  *WARNING*  *WARNING*  *WARNING*
Rem     Do not run this script directly.
Rem
Rem     utlip.sql is automatically executed when required for database
Rem     upgrades.
Rem     Use utlirp.sql if you are looking to invalidate and recompile
Rem     PL/SQL for a 32-bit to 64-bit conversion. Use dbmsupgnv.sql
Rem     to convert all PL/SQL to NATIVE or dbmsupgin.sql to convert all
Rem     PL/SQL to INTERPRETED.
Rem
Rem     *WARNING*   *WARNING*  *WARNING*  *WARNING*  *WARNING*  *WARNING*

################


Also, one issue worth sharing: when we notice that multiple packages or PL/SQL objects frequently go invalid, we can follow the below scenario.

Startup upgrade --> run utlirp.sql to invalidate all PL/SQL objects --> then startup in normal mode --> run utlrp.sql.

This will fix the issue.
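The cycle above can be sketched as the following SQL*Plus session; this assumes a SYSDBA connection, and ?/ expands to $ORACLE_HOME:

```sql
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP UPGRADE
SQL> @?/rdbms/admin/utlirp.sql    -- invalidates all PL/SQL objects
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
SQL> @?/rdbms/admin/utlrp.sql     -- recompiles the invalid objects
```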


Shutting down database abnormally --- shut abort what happens inside

An instance (crash) failure occurs when your database isn't able to shut down normally. When this happens, your datafiles could be in an inconsistent state, meaning they may not contain all committed changes and may contain uncommitted changes. Instance failures occur when the instance terminates abnormally. A sudden power failure or a shutdown abort are two common causes of instance failure.
Oracle uses crash recovery to return the database to a consistent committed state after an instance failure. Crash recovery guarantees that when your database is opened, it will contain only transactions that were committed before the instance failure occurred. The Oracle system monitor (SMON) automatically detects whether crash recovery is required.
Crash recovery has two phases: roll forward and rollback.
The system monitor first rolls forward, applying to the datafiles any transactions in the online redo log files that occurred after the most recent checkpoint. Crash recovery uses redo information found in the online redo log files only. After rolling forward, Oracle rolls back any of those transactions that were never committed. Oracle uses information stored in the undo segments to roll back (undo) any uncommitted transactions.
When you start your database, Oracle uses the SCN information in the control files and datafile headers to determine which of the following will occur:
Starting up normally.
Performing crash recovery.
Determining that media recovery is required.
On startup, Oracle checks the instance thread status to determine whether crash recovery is required. When the database is open for normal operations, the thread status is OPEN. When Oracle is shut down normally, a checkpoint takes place and the instance thread status is set to CLOSED.
When your instance terminates abnormally, the thread status remains OPEN because Oracle didn't get a chance to update the status to CLOSED.
On startup, when Oracle detects that an instance thread was abnormally left open, the system monitor process automatically performs crash recovery.
The below query is useful to find out whether crash recovery is required:
select a.thread#, b.open_mode, a.status,
CASE
 WHEN ((b.open_mode='MOUNTED') AND (a.status='OPEN')) THEN 'Crash Recovery req.'
 WHEN ((b.open_mode='MOUNTED') AND (a.status='CLOSED')) THEN 'No Crash Recovery req.'
 WHEN ((b.open_mode='READ WRITE') AND (a.status='OPEN')) THEN 'Instance already open'
 ELSE 'huh?'
END status
FROM v$thread a,
     v$database b,
     v$instance c
WHERE a.thread# = c.thread#;

REDOLOG VS ARCHIVE LOG


SQL> SELECT distinct member LOGFILENAME FROM V$LOGFILE;

LOGFILENAME
--------------------------------------------------------------------------------
/u01/install/APPS/data/ebsdb/log1.dbf
/u01/install/APPS/data/ebsdb/log2.dbf
/u01/install/APPS/data/ebsdb/log3.dbf

ALTER SYSTEM SWITCH LOGFILE vs
ALTER SYSTEM ARCHIVE LOG CURRENT


What is the difference between ALTER SYSTEM SWITCH LOGFILE and ALTER SYSTEM ARCHIVE LOG CURRENT, and when do I use each?
Answer:  Yes, both ALTER SYSTEM SWITCH LOGFILE and ALTER SYSTEM ARCHIVE LOG CURRENT will force a log switch, but they do it in different ways! 
Both the SWITCH LOGFILE and ARCHIVE LOG CURRENT write a quiesce checkpoint, a firm place whereby that last redo log is a part of the hot backup, but ARCHIVE LOG CURRENT waits for the writing to complete.  This can take several minutes for multi-gigabyte redo logs.
Conversely, the ALTER SYSTEM SWITCH LOGFILE command is very fast and returns control to the caller in less than a second while ALTER SYSTEM ARCHIVE LOG CURRENT pauses.
As we see below, the ALTER SYSTEM SWITCH LOGFILE is fast because it does not wait for the archiver process (ARCH) to complete writing the online redo log to the archivelog log filesystem:

  1. It issues a database checkpoint
  2. It immediately starts writing to the next redo log
  3. In the background, the "switch logfile" command tells the ARCH background process to copy the "old" redo log file to the archive log filesystem.
Here are the important differences between ALTER SYSTEM SWITCH LOGFILE and ALTER SYSTEM ARCHIVE LOG CURRENT:
  •  RAC:  If you are running RAC, ALTER SYSTEM ARCHIVE LOG CURRENT will switch the logs on all RAC nodes (instances), whereas ALTER SYSTEM SWITCH LOGFILE will only switch the logfile on the instance where you issue the switch command.  Hence, ALTER SYSTEM ARCHIVE LOG CURRENT is a best practice for RAC systems.


ENABLING ARCHIVELOG and SWITCHING LOG

SQL> archive log list
Database log mode              No Archive Mode
Automatic archival             Disabled
Archive destination            /u01/install/APPS/data/ebsdb/archive
Oldest online log sequence     7
Current log sequence           9
SQL> shut immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance
ORACLE instance started.

Total System Global Area 2147483648 bytes
Fixed Size                  2926472 bytes
Variable Size             553650296 bytes
Database Buffers         1577058304 bytes
Redo Buffers               13848576 bytes
Database mounted.
SQL> alter database archivelog;

Database altered.

SQL> alter database open;

Database altered.

SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u01/install/APPS/data/ebsdb/archive
Oldest online log sequence     7
Next log sequence to archive   9
Current log sequence           9
SQL> alter switch logfile;
alter switch logfile
      *
ERROR at line 1:
ORA-00940: invalid ALTER command


SQL> ALTER SYSTEM SWITCH LOGFILE ;

System altered.

SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u01/install/APPS/data/ebsdb/archive
Oldest online log sequence     8
Next log sequence to archive   10
Current log sequence           10
SQL> exit

Details of Adpreclone

adpreclone.pl dbtier

It runs the below two stages:

1) Techstack

2) Database


Techstack:

It creates the $ORACLE_HOME/appsutil/clone/jlib directory and updates all the library files in it (when we apply PSU patches, it updates the PSU-version-related class files in this directory).

NOTE:

-bash-3.2$ pwd
$FND_TOP/sql
-bash-3.2$ ls wfver.sql
wfver.sql   ---> this script identifies the XML parser version in case of PSU-related issues (it needs to be run from the application node)


It also gathers information related to the datafiles required for cloning,

creates the driver file $ORACLE_HOME/appsutil/driver/instconf.drv,

and converts the inventory information from binary to XML at

$ORACLE_HOME/appsutil/clone/context/db/Sid_context.xml

Example: the binary information is saved as XML like below:

</nls_settings>
      <oa_db_server>
         <dbhost oa_var="s_dbhost">xxxxxxp2</dbhost>
         <dbport oa_var="s_dbport" oa_type="PORT">1522</dbport>
         <dbtype oa_var="s_dbtype">VISION</dbtype>
         <dbcset oa_var="s_dbcset">UTF8</dbcset>
         <dbseed oa_var="s_dbseed">Fresh Install</dbseed>
         <dbcomp oa_var="s_dbcomp">oracle.apps.dbseed.fresh</dbcomp>
         <dbprocesses oa_var="s_db_processes">1500</dbprocesses>
         <SESSIONS oa_var="s_db_sessions">400</SESSIONS>
         <dbfiles oa_var="s_dbfiles">1500</dbfiles>
         <dbblockbuffers oa_var="s_dbblock_buffers">0</dbblockbuffers>
         <dbcachesize oa_var="s_dbcache_size">163577856</dbcachesize>
         <dbsharedpool oa_var="s_dbsharedpool_size">300000000</dbsharedpool>
         <dbrollbacksegs oa_var="s_db_rollback_segs">NOROLLBACK</dbrollbacksegs>
         <dbutilfiledir oa_var="s_db_util_filedir" osd="unix">/xxxxxxp2/app/comn/temp, /usr/tmp, /tmp</dbutilfiledir>
         <undotablespace oa_var="s_undo_tablespace">APPS_UNDOTS1</undotablespace>
         <APPS_DATA_FILE_DIR oa_var="s_db_data_file_dir">/xxxxxxp2/oracle/db/11.2.0/appsutil/outbound/xxxxxxp2_xxxxxxp2</APPS_DATA_FILE_DIR>
         <o7dictionaryaccess oa_var="s_o7_dictionary_accessibility">TRUE</o7dictionaryaccess>
         <db_walletdir oa_var="s_dbWalletDir">/xxxxxxp2/oracle/db/11.2.0/appsutil/wallet</db_walletdir>



Database:

It creates the database control file creation script and the datafile location information file

at $ORACLE_HOME/appsutil/template/adcrdbclone.sql and dbfinfo.lst

example:
$ cat dbfinfo.lst

param|s_undo_tablespace|APPS_UNDOTS1
param|s_database|db112
param|UNDO_MANAGEMENT|AUTO
param|isspfileexists|false
param|s_dbfiles|1500
log|7|/xxxxxxp2/dbdata/redo1/log07.dbf|1610612736
log|8|/xxxxxxp2/dbdata/redo2/log08.dbf|1610612736
log|9|/xxxxxxp2/dbdata/redo1/log09.dbf|1610612736
tmp|TEMP|/xxxxxxp2/dbdata/temp/temp04.dbf
tmp|TEMP|/xxxxxxp2/dbdata/temp/temp03.dbf
tmp|TEMP|/xxxxxxp2/dbdata/temp/temp02.dbf
tmp|TEMP|/xxxxxxp2/dbdata/temp/temp01.dbf
sys|SYSTEM|/xxxxxxp2/dbdata/data1/system04.dbf
sys|SYSTEM|/xxxxxxp2/dbdata/data1/system03.dbf
sys|SYSTEM|/xxxxxxp2/dbdata/data1/system02.dbf
sys|SYSTEM|/xxxxxxp2/dbdata/data1/system01.dbf
dat2|USERS|/xxxxxxp2/dbdata/data1/users01.dbf
dat2|OWAPUB|/xxxxxxp2/dbdata/data1/owad01.dbf
dat2|APPS_TS_TOOLS|/xxxxxxp2/dbdata/data1/apps_ts_tools01.dbf
dat2|xxxxxxSRX|/xxxxxxp2/dbdata/data1/xxxxxxSrx01.dbf
dat2|xxxxxxSRD|/xxxxxxp2/dbdata/data1/xxxxxxSrd01.dbf
dat2|DISCOVERER|/xxxxxxp2/dbdata/data1/discoverer01.dbf
dat2|APPS_UNDOTS1|/xxxxxxp2/dbdata/data1/undots09.dbf
dat2|APPS_UNDOTS1|/xxxxxxp2/dbdata/data1/undots08.dbf
dat2|APPS_UNDOTS1|/xxxxxxp2/dbdata/data1/undots07.dbf
sys|SYSAUX|/xxxxxxp2/dbdata/data1/sysaux01.dbf
dat2|APPS_TS_TX_DATA|/xxxxxxp2/dbdata/data1/APPS_TS_TX_DATA21.dbf
dat2|APPS_TS_MEDIA|/xxxxxxp2/dbdata/data1/APPS_TS_MEDIA03.dbf

example: # cat adcrdbclone.sql   --> this script is what lets us recreate the control file
define arg1=&1
spool %s_db_oh%/appsutil/log/%s_contextname%/adcrdb_%s_dbSid%.txt
connect / as sysdba
shutdown abort
connect / as sysdba
startup nomount pfile=%s_db_oh%/dbs/init%s_dbSid%.ora;
CREATE CONTROLFILE REUSE SET DATABASE "%s_dbGlnam%"
LOGFILE
  GROUP 7 ('%s_dbhome2%/log07.dbf') SIZE 1610612736,
  GROUP 8 ('%s_dbhome2%/log08.dbf') SIZE 1610612736,
  GROUP 9 ('%s_dbhome2%/log09.dbf') SIZE 1610612736
DATAFILE
  '%s_dbhome1%/system05.dbf',
  '%s_dbhome1%/system04.dbf',
  '%s_dbhome1%/system03.dbf',
  '%s_dbhome1%/system02.dbf',
  '%s_dbhome1%/system01.dbf',
  '%s_dbhome1%/sysaux01.dbf',
  '%s_dbhome3%/ctxd01.dbf',
  '%s_dbhome4%/users01.dbf',
  '%s_dbhome4%/owad01.dbf',
  '%s_dbhome4%/apps_ts_tools01.dbf',
  '%s_dbhome4%/dresrx01.dbf',
  '%s_dbhome4%/dresrd01.dbf',
  '%s_dbhome4%/dresd02.dbf',
  '%s_dbhome4%/dresd01.dbf',
  '%s_dbhome4%/discoverer01.dbf',
  '%s_dbhome4%/APPS_TS_TX_IDX_LG06.dbf',
  '%s_dbhome4%/APPS_TS_TX_IDX_LG05.dbf',

Finally, it creates the database driver file at

$ORACLE_HOME/appsutil/clone/data/driver/data.drv

$ cat data.drv
#=====================================================================+
#  Copyright (c) 2002 Oracle Corporation Belmont, California, USA     |
#                          All rights reserved.                       |
#=====================================================================+
# FILENAME
#       /xxxxxxp2/oracle/db/11.2.0/appsutil/clone/data/driver/data.drv
# DESCRIPTION
#       Template file driver file for applying a clone using AutoClone
#=====================================================================*/
#
# <src top> <src dir> <src name> <file type> <dst dir> <dst name>
# =========================================================================

if installation-type db
  ad <s_clonestage>/data/stage adcrdb.zip UNZIP <s_db_oh>/appsutil/template
endif


Copy JDBC Libraries at ORACLE_HOME/appsutil/clone/jlib/classes12.jar and appsoui