
Create DB SYSTEM (OCI)

 

Using the Console

To create a DB system

Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.

Click Create DB System.

On the Create DB System page, provide the basic information for the DB system:


Select a compartment: By default, the DB system is created in your current compartment and you can use the network resources in that compartment.

Name your DB system: A non-unique, display name for the DB system. An Oracle Cloud Identifier (OCID) uniquely identifies the DB system.

Select an availability domain: The availability domain in which the DB system resides.

Select a shape type: The shape type you select sets the default shape and filters the shape options in the next field.

Select a shape: The shape determines the type of DB system created and the resources allocated to the system. To specify a shape other than the default, click Change Shape, and select an available shape from the list.

Configure the DB system: Specify the following:


Total node count: The number of nodes in the DB system, which depends on the shape you select. For virtual machine DB systems, you can specify either one or two nodes, except for VM.Standard2.1 and VM.Standard1.1, which are single-node DB systems.

Oracle Database software edition: The database edition supported by the DB system. For bare metal systems, you can mix supported database releases on the DB system to include older database versions, but not editions. The database edition cannot be changed and applies to all the databases in this DB system. Virtual machine systems support only one database.

CPU core count: Displays only for bare metal DB systems to allow you to specify the number of CPU cores for the system. (Virtual machine DB system shapes have a fixed number of CPU cores.) The text below the field indicates the acceptable values for that shape. For a multi-node DB system, the core count is evenly divided across the nodes.


 Note


After you provision the DB system, you can increase the CPU cores to accommodate increased demand. On a bare metal DB system, you scale the CPU cores directly. For virtual machine DB systems, you change the number of CPU cores by changing the shape.

Choose Storage Management Software: 1-node virtual machine DB systems only. Select Oracle Grid Infrastructure to use Oracle Automatic Storage Management (recommended for production workloads). Select Logical Volume Manager to quickly provision your DB system using Logical Volume Manager storage management software. Note that the Available storage (GB) value you specify during provisioning determines the maximum total storage available through scaling. The total storage available for each choice is detailed in the Storage Scaling Considerations for Virtual Machine Databases Using Fast Provisioning topic.


See Fast Provisioning Option for 1-node Virtual Machine DB Systems for more information about this feature.


Configure storage: Specify the following:


Available storage (GB): Virtual machine only. The amount of Block Storage in GB to allocate to the virtual machine DB system. Available storage can be scaled up or down as needed after provisioning your DB system.

Total storage (GB): Virtual machine only. The total Block Storage in GB used by the virtual machine DB system. The amount of available storage you select determines this value. Oracle charges for the total storage used.

Cluster name: (Optional) A unique cluster name for a multi-node DB system. The name must begin with a letter and contain only letters (a-z and A-Z), numbers (0-9) and hyphens (-). The cluster name can be no longer than 11 characters and is not case sensitive.

Data storage percentage: Bare metal only. The percentage (40% or 80%) assigned to DATA storage (user data and database files). The remaining percentage is assigned to RECO storage (database redo logs, archive logs, and recovery manager backups).

Add public SSH keys: The public key portion of each key pair you want to use for SSH access to the DB system. You can browse or drag and drop .pub files, or paste in individual public keys. To paste multiple keys, click + Another SSH Key, and supply a single key for each entry.

Choose a license type: The type of license you want to use for the DB system. Your choice affects metering for billing.


License Included means the cost of this Oracle Cloud Infrastructure Database service resource will include both the Oracle Database software licenses and the service.

Bring Your Own License (BYOL) means you will use your organization's Oracle Database software licenses for this Oracle Cloud Infrastructure Database service resource. See Bring Your Own License for more information.

Specify the network information:


Virtual cloud network: The VCN in which to create the DB system. Click Change Compartment to select a VCN in a different compartment.

Client Subnet: The subnet to which the DB system should attach. For 1- and 2-node RAC DB systems:  Do not use a subnet that overlaps with 192.168.16.16/28, which is used by the Oracle Clusterware private interconnect on the database instance. Specifying an overlapping subnet will cause the private interconnect to malfunction.

Click Change Compartment to select a subnet in a different compartment.


Network Security Groups: Optionally, you can specify one or more network security groups (NSGs) for your DB system. NSGs function as virtual firewalls, allowing you to apply a set of ingress and egress security rules to your DB system. A maximum of five NSGs can be specified. For more information, see Network Security Groups and Network Setup for DB Systems.


Note that if you choose a subnet with a security list, the security rules for the DB system will be a union of the rules in the security list and the NSGs.


Hostname prefix: Your choice of host name for the bare metal or virtual machine DB system. The host name must begin with an alphabetic character, and can contain only alphanumeric characters and hyphens (-). The maximum number of characters allowed for bare metal and virtual machine DB systems is 16.


 Important


The host name must be unique within the subnet. If it is not unique, the DB system will fail to provision.

Host domain name: The domain name for the DB system. If the selected subnet uses the Oracle-provided Internet and VCN Resolver for DNS name resolution, then this field displays the domain name for the subnet and it can't be changed. Otherwise, you can provide your choice of a domain name. Hyphens (-) are not permitted.

Host and domain URL: Combines the host and domain names to display the fully qualified domain name (FQDN) for the database. The maximum length is 64 characters.

Click Show Advanced Options to specify advanced options for the DB system:


Disk redundancy: For bare metal systems only. The type of redundancy configured for the DB system.

Normal is 2-way mirroring, recommended for test and development systems.

High is 3-way mirroring, recommended for production systems.

Fault domain: The fault domain(s) in which the DB system resides. You can choose which fault domain to use for your DB system. For two-node Oracle RAC DB systems, you can specify which two fault domains to use. Oracle recommends that you place each node of a two-node Oracle RAC DB system in a different fault domain. For more information on fault domains, see About Regions and Availability Domains.

Time zone: The default time zone for the DB system is UTC, but you can specify a different time zone. The time zone options are those supported in both the java.util.TimeZone class and the Oracle Linux operating system. For more information, see DB System Time Zone.


 Tip


If you want to set a time zone other than UTC or the browser-detected time zone, and if you do not see the time zone you want, try selecting "Miscellaneous" in the Region or country list.


Tags: If you have permissions to create a resource, then you also have permissions to apply free-form tags to that resource. To apply a defined tag, you must have permissions to use the tag namespace. For more information about tagging, see Resource Tags. If you are not sure if you should apply tags, then skip this option (you can apply tags later) or ask your administrator.

After you complete the network configuration and specify any advanced options, click Next.

Provide information for the initial database:


Database name: The name for the database. The database name must begin with an alphabetic character and can contain a maximum of eight alphanumeric characters. Special characters are not permitted.

Database image: This controls the version of the initial database created on the DB system. By default, the latest available Oracle Database version is selected. You can also choose an older Oracle Database version, or choose a customized database software image that you have previously created in your current region with your choice of updates and one-off (interim) patches. See Oracle Database Software Images for information on creating and working with database software images.


To use an older Oracle-published software image:


Click Change Database Image.

In the Select a Database Software Image dialog, select Oracle-published Database Software Images.

In the Oracle Database Version list, check the version you wish to use to provision the initial database in your DB system. If you are launching a DB system with a virtual machine shape, you have the option of selecting an older database version.


Display all available versions: Use this switch to include older database updates in the list of database version choices. When the switch is activated, you will see all available PSUs and RUs. The most recent release for each major version is indicated with "(latest)". See Availability of Older Database Versions for Virtual Machine DB Systems for more information.


 Note


Preview software versions: Versions flagged as "Preview" are for testing and subject to some restrictions. See Oracle Database Preview Version Availability for more information.


Click Select.

To use a user-created database software image:


Click Change Database Image.

In the Select a Database Software Image dialog, select Custom Database Software Images.

Select the compartment that contains your database software image.

Select the Oracle Database version that your database software image uses.

A list of database software images is displayed for your chosen Oracle Database version. Check the box beside the display name of the image you want to use.

After the DB system is active, you can create additional databases for bare metal systems. You can mix database versions on the DB system, but not editions. Virtual machine DB systems are limited to a single database.


PDB name: Not applicable to Oracle Database 11g (11.2.0.4). The name of the pluggable database. The PDB name must begin with an alphabetic character, and can contain a maximum of eight alphanumeric characters. The only special character permitted is the underscore ( _).

Create administrator credentials: A database administrator SYS user will be created with the password you supply.


Username: SYS

Password: Supply the password for this user. The password must meet the following criteria:


A strong password for SYS, SYSTEM, TDE wallet, and PDB Admin. The password must be 9 to 30 characters and contain at least two uppercase, two lowercase, two numeric, and two special characters. The special characters must be _, #, or -. The password must not contain the username (SYS, SYSTEM, and so on) or the word "oracle" either in forward or reversed order and regardless of casing.

Confirm password: Re-enter the SYS password you specified.

Select workload type: Choose the workload type that best suits your application:


Online Transactional Processing (OLTP) configures the database for a transactional workload, with a bias towards high volumes of random data access.

Decision Support System (DSS) configures the database for a decision support or data warehouse workload, with a bias towards large data scanning operations.

Configure database backups: Specify the settings for backing up the database to Object Storage:


Enable automatic backup: Check the check box to enable automatic incremental backups for this database. If you are creating a database in a security zone compartment, you must enable automatic backups.

Backup retention period: If you enable automatic backups, then you can choose one of the following preset retention periods: 7 days, 15 days, 30 days, 45 days, or 60 days. The default selection is 30 days.

Backup Scheduling: If you enable automatic backups, then you can choose a two-hour scheduling window to control when backup operations begin. If you do not specify a window, then the six-hour default window of 00:00 to 06:00 (in the time zone of the DB system's region) is used for your database. See Backup Scheduling for more information.

Click Show Advanced Options to specify advanced options for the initial database:


Character set: The character set for the database. The default is AL32UTF8.

National character set: The national character set for the database. The default is AL16UTF16.

Click Create DB System. The DB system appears in the list with a status of Provisioning. The DB system's icon changes from yellow to green (or red to indicate errors).


After the DB system's icon turns green, with a status of Available, you can click the highlighted DB system name to display details about the DB system. Note the IP addresses. You'll need the private or public IP address, depending on network configuration, to connect to the DB system.
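If you prefer scripting over the Console, the same provisioning can be driven from the OCI CLI. Below is a minimal hedged sketch, assuming the oci db system launch command; every OCID and value is a placeholder, so check oci db system launch --help for the authoritative flag list:

oci db system launch \
  --compartment-id ocid1.compartment.oc1..<unique_id> \
  --availability-domain "Uocm:PHX-AD-1" \
  --subnet-id ocid1.subnet.oc1.phx.<unique_id> \
  --shape VM.Standard2.2 \
  --cpu-core-count 2 \
  --node-count 1 \
  --database-edition ENTERPRISE_EDITION \
  --db-name ORCL \
  --admin-password '<strong-password>' \
  --ssh-authorized-keys-file ~/.ssh/id_rsa.pub \
  --hostname dbhost1 \
  --display-name MyDBSystem \
  --initial-data-storage-size-in-gb 256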


Lavakumar

https://docs.cloud.oracle.com/en-us/iaas/Content/Database/Tasks/creatingDBsystem.htm



############################################### IMP POINTS #############

Console

Under Oracle Database

Select Bare Metal / VM / Exadata

Select compartment

Select shape

Under the shape you can set:

Total node count (only VM.Standard2.1 and VM.Standard1.1 are single-node)

Oracle DB edition (on bare metal you can mix older database RELEASES but not editions; the DB edition cannot be changed after creation; a VM supports only one DB)

CPU core count (only on bare metal do you specify the CPU core count; for a VM, change the shape if you want to change the CPU cores)

**AFTER CREATING THE DB SYSTEM WE CAN SCALE THE CPU CORES UP/DOWN**

Choose Storage Management:

You can select Grid Infrastructure (ASM, recommended for PROD workloads) or Logical Volume Manager (for fast DB system provisioning)

Configure storage has the below options:

Available storage (only for VM)

Total storage (only for VM; Oracle charges for the total storage)

Cluster name (optional; not more than 11 characters)

Data storage percentage (only for bare metal; this percentage goes to DATA (user data and DB files), and the rest goes to RECO)

Add Public SSH Keys:

Choose License Type: BYOL or License Included

Specify Network information:

VCN (optionally, we can select a VCN from a different compartment)

Network Security Groups (virtual firewalls for ingress/egress security rules; we can select a maximum of 5 NSGs)

Hostname prefix: The maximum number of characters allowed for bare metal and virtual machine DB systems is 16

Host domain name (if the subnet uses the Oracle-provided Internet and VCN Resolver for DNS name resolution, the domain name is fixed)

Host and domain URL: Combines the host and domain names to display the fully qualified domain name (FQDN) for the database. 

Disk redundancy: For bare metal systems only

Database name: The name for the database.

Database image: This controls the version of the initial database created on the DB system. By default, the latest available Oracle Database version is selected. You can also choose an older Oracle Database version

A strong password for SYS, SYSTEM, TDE wallet, and PDB Admin.

Enable automatic backup: Check the check box to enable automatic incremental backups for this database

Backup retention period: If you enable automatic backups, then you can choose one of the following preset retention periods: 7 days, 15 days, 30 days, 45 days, or 60 days. The default selection is 30 days.

Backup Scheduling: If you enable automatic backups, then you can choose a two-hour scheduling window to control when backup operations begin. If you do not specify a window, then the six-hour default window of 00:00 to 06:00 (in the time zone of the DB system's region) is used for your database.

Character set: The character set for the database. The default is AL32UTF8

Click Create DB System. The DB system appears in the list with a status of Provisioning. The DB system's icon changes from yellow to green (or red to indicate errors).



Note that if you choose a subnet with a security list, the security rules for the DB system will be a union of the rules in the security list and the NSGs.


DBCLI


DBCLI COMMAND

=============

dbcli is a command-line interface available on bare metal and virtual machine DB systems.

The database CLI commands must be run as the root user

dbcli is in the /opt/oracle/dcs/bin/ directory.

Oracle Database maintains logs of the dbcli command output in the dcscli.log and dcs-agent.log files in the /opt/oracle/dcs/log/ directory.

The database CLI commands use the following syntax:

Example: dbcli command [parameters]

command is a verb-object combination such as create-database.


parameters include additional options for the command. Most parameter names are preceded with two dashes, for example, --help. Abbreviated parameter names are preceded with one dash, for example, -h.
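For example (a hedged sketch; run as the root user on a DB system node):

# dbcli list-databases             <-- verb-object command with no parameters
# dbcli describe-database -h       <-- -h prints help for any command
# dbcli list-jobs -j               <-- -j returns JSON output, where supported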


CLIADM

======

Use the cliadm update-dbcli command to update the database CLI with the latest new and updated commands.

Syntax: cliadm update-dbcli [-h] [-j]

-h for help

-j for JSON output

Note: On RAC DB systems, execute the cliadm update-dbcli command on each node in the cluster, as in the hedged example below.
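A hedged example for the RAC case, assuming SSH access as the opc user to both nodes:

$ ssh opc@node1 'sudo /opt/oracle/dcs/bin/cliadm update-dbcli'
$ ssh opc@node2 'sudo /opt/oracle/dcs/bin/cliadm update-dbcli'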


Agent Commands
============
The following commands are available to manage agents: 

dbcli ping-agent
dbcli list-agentConfigParameters
dbcli update-agentConfigParameters

Clean/purge logs
================
The following commands are available to manage policies for automatic cleaning (purging) of logs.
dbcli create-autoLogCleanPolicy
dbcli list-autoLogCleanPolicy

Backup with dbcli
=================
Before you can back up a database by using the dbcli create-backup command, you'll need to:

Create a backup configuration by using the dbcli create-backupconfig command.
Associate the backup configuration with the database by using the dbcli update-database command.
After a database is associated with a backup configuration, you can use the dbcli create-backup command in a cron job to run backups automatically.

Commands for Backup:
dbcli create-backup
dbcli getstatus-backup
dbcli schedule-backup
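
Putting the workflow above together, a hedged end-to-end sketch with placeholder IDs (flag names per the dbcli reference; verify each with -h before use):

# dbcli create-backupconfig -n dailyBC -d OBJECTSTORE -o <objectstoreswift_id> -c <bucket_name> -w 7
# dbcli update-database -in BKUPDB -bi <backupconfig_id>
# dbcli create-backup -in BKUPDB -bt Regular-L0

# cron entry to take a nightly incremental backup at 01:00:
0 1 * * * /opt/oracle/dcs/bin/dbcli create-backup -in BKUPDB -bt Regular-L1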


Database Commands(The dbcli create-database command is available on bare metal DB systems only)
=================
The following commands are available to manage databases:

dbcli clone-database
dbcli create-database
dbcli delete-database
dbcli describe-database
dbcli list-databases
dbcli modify-database
dbcli recover-database
dbcli register-database
dbcli update-database

Objectstoreswift Commands
=========================
You can back up a database to an existing bucket in the Oracle Cloud Infrastructure Object Storage service by using the dbcli create-backup command, but first you'll need to:
Create an object store on the DB system, which contains the endpoint and credentials to access Object Storage, by using the dbcli create-objectstoreswift command.
Create a backup configuration that refers to the object store ID and the bucket name by using the dbcli create-backupconfig command.
Associate the backup configuration with the database by using the dbcli update-database command.
The following commands are available to manage object stores.

dbcli create-objectstoreswift
dbcli describe-objectstoreswift
dbcli list-objectstoreswifts
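
A hedged sketch of the object store setup, with placeholder tenancy/user/region values (verify flags with dbcli create-objectstoreswift -h; the command prompts for the user's auth token):

# dbcli create-objectstoreswift -n myOSS -t <tenancy_name> -u <oci_user_name> \
      -e https://swiftobjectstorage.<region>.oraclecloud.com/v1
# dbcli list-objectstoreswifts        <-- note the ID to pass to create-backupconfig -o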





Rmanbackupreport Commands
=========================
The following commands are available to manage RMAN backup reports: 

dbcli create-rmanbackupreport
dbcli delete-rmanbackupreport
dbcli describe-rmanbackupreport
dbcli list-rmanbackupreports
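
For example, a hedged invocation that produces a detailed report for one database (this mirrors the "Create detailed Backup Report" job visible in the dbcli list-jobs output later in this post; verify the flags with -h):

# dbcli create-rmanbackupreport -w detailed -in BKUPDB
# dbcli list-rmanbackupreports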


Schedule Commands
=================
The following commands are available to manage schedules: 

dbcli describe-schedule
dbcli list-schedules
dbcli update-schedule
dbcli list-scheduledExecutions:Use the dbcli list-scheduledExecutions command to list scheduled executions.

Patching Commands:
==================
Use the dbcli update-server command to apply patches to the server components in the DB system. For more information about applying patches, see Patching a DB System.
dbcli update-server




 


TDE Commands
============
The following commands are available to manage TDE-related items (backup reports, keys, and wallets): 

dbcli list-tdebackupreports
dbcli update-tdekey
dbcli recover-tdewallet

Admin Commands
==============
The following commands are to perform administrative actions on the DB system:

dbadmcli manage diagcollect
dbadmcli power
dbadmcli power disk status
dbadmcli show controller
dbadmcli show disk
dbadmcli show diskgroup
dbadmcli show env_hw (environment type and hardware version)
dbadmcli show fs (file system details)
dbadmcli show storage
dbadmcli stordiag


RDS - DBA TASKS

Doc:

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.Oracle.CommonDBATasks.html

 


OCI-CPU-PATCHING

Installing Database Patch Updates on a VM DB System in Oracle Cloud Infrastructure

=============================================================================

dbcli is a command-line interface available on bare metal and virtual machine DB systems.


It applies to the following database versions (when you choose a DB System):

Oracle Database 19.0.0.0.0

Oracle Database 12.1.0.2

Oracle Database 11.2.0.4


Steps:

=====

1. Check whether a patch update is available.

2. Prepare for installation of the patch update.

3. Apply the patch.

4. Post steps.


Step1:


When a patch update becomes available, it appears in the following locations for a Single Instance VM DB System:

Object Storage Service - dbcli

Oracle Cloud Infrastructure DB Systems Console - but Oracle recommends using dbcli only

First, install the latest cloud tooling update:

cliadm update-dbcli


Run the patching precheck, then apply the server patch:

dbcli update-server --precheck

dbcli update-server

Updating the DB Home: list the DB homes to get the DB home ID:

dbcli list-dbhomes


Apply the Database Patch Update: first remove any previous datapatch output files, then patch the DB home (see the hedged sketch below):

rm -rf /tmp/datapatchoutput*
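
A hedged sketch of the DB home patch step, assuming the dbcli update-dbhome command takes the DB home ID returned by list-dbhomes (verify with dbcli update-dbhome -h):

# dbcli update-dbhome -i <dbhome_id>
# dbcli list-jobs                          <-- monitor the patch job
# dbcli describe-job --jobid <job_id>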





Note 2360215: Oracle Database 19.0.0.0.0 Release Update (RU) or Oracle Database 12.1.0.2 Bundle Patches (BP) or Oracle Database 11.2.0.4 Patch Set Updates (PSU) are automatically included when you create a new Single Instance VM DB System.




EC2 TO RDS (DATAPUMP)

 How To Import A Schema on Amazon RDS

As you know, AWS provides two types of database hosting services (EC2 & RDS). While EC2 gives you full control over the operating system (OS), including root access, RDS doesn't give you any kind of OS access. Because an RDS instance is managed by AWS, they provide you a master admin user; this user has limited admin privileges (neither SYSDBA nor DBA), making regular DBA tasks such as importing a schema a bit challenging.

Without OS access, you won't be able to use commands like exp, expdp, imp, impdp, and rman.


Below are the steps to import a schema into RDS using Oracle built-in packages. Luckily, Oracle provides many built-in packages that enable you to perform lots of tasks without the need for OS access.


Below is the Amazon document on importing a schema into RDS:

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Oracle.Procedural.Importing.html


Task Summary:

Export a schema with the name "APPS_TRX" from an 11.2.0.3 database residing on an AWS EC2 Linux instance and upload the export dump file to an S3 bucket, then import the dump file into a 12.2.0.1 AWS RDS database, changing the schema name to "APPS".


Prerequisites:

- An AWS S3 bucket must be created, and both the source EC2 and target RDS must have RW access to it through a role. [An S3 bucket is a kind of shared storage between AWS cloud systems where you can upload/download files; it will be used during this demo to transfer the export dump file between the source EC2 instance and the target RDS instance.]


Step1: Export the schema on the Source [EC2 instance]:

I already have OS access to the oracle user on the source EC2 instance, so I used the exportdata script to export the APPS_TRX schema.
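
In case you don't have a similar export script, a minimal hedged expdp sketch (assumes the DATA_PUMP_DIR directory object exists and has enough free space):

$ expdp system schemas=APPS_TRX directory=DATA_PUMP_DIR \
      dumpfile=EXPORT_APPS_TRX_STG_04-03-19.dmp logfile=EXPORT_APPS_TRX_STG_04-03-19_exp.log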


Note: In case you are importing from an Enterprise Edition DB to a Standard Edition DB, make sure to reset all tables having the COMPRESSION option enabled to NOCOMPRESS before exporting the data:

i.e.

alter table APPS_TRX.compressed_table NOCOMPRESS;


This is because Standard Edition doesn't have the COMPRESSION feature; otherwise, table creation will fail with an ORA-39083 error during the import on the Standard Edition DB.
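
A hedged helper to find those tables before the export, using the COMPRESSION column of DBA_TABLES:

-- generate a NOCOMPRESS statement for every compressed table in the schema:
SELECT 'alter table '||owner||'.'||table_name||' NOCOMPRESS;'
FROM   dba_tables
WHERE  owner = 'APPS_TRX'
AND    compression = 'ENABLED';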



Step2: Upload the export file to S3 Bucket from Source [EC2 instance]:

In case the bucket is not yet configured on the source machine, you can use the following AWS CLI command to configure it, providing the bucket's "Access Key" and "Secret Access Key":


  # aws configure

  AWS Access Key ID [None]: XXXXXXXXXXXXXXXX

  AWS Secret Access Key [None]: XXXXXXXXXXXXXXXXX

  Default region name [None]: 

  Default output format [None]: 


Note: The keys above are dummy ones; you have to put in your own bucket keys.


 Upload the export dump files to the S3 bucket:

  # cd /backup

  # aws s3 cp EXPORT_APPS_TRX_STG_04-03-19.dmp  s3://APPS-bucket


In case you are using S3 Browser from a Windows machine, configure the bucket using this flow:

Open S3 Browser -> Accounts -> Add New Account:

<you will use your bucket details here I'm just giving an example>

Account Name:   APPS-bucket

Account Type:   Amazon S3 Storage

Access Key ID:  ***********

Secret Access Key: ************

Click "Add New Account"

Accounts -> click "APPS-bucket" -> Click "Yes" to add 'External bucket' -> Bucket Name: "APPS-bucket"


Note: S3 Browser is a third-party Windows GUI tool that helps you with uploading/downloading files to/from an S3 bucket. You can download it from here:

https://s3browser.com/download.aspx


Step3: Download the export file from the S3 Bucket to the Target [RDS instance]:

Remember, there is no OS access on RDS, so we will connect to the database using a client tool such as SQL Developer with the RDS master user credentials.


Use the AWS built-in package "rdsadmin.rdsadmin_s3_tasks" to download the dump file from S3 bucket to DATA_PUMP_DIR:


Warning: The following command will download all the files in the bucket, so before running it, make sure the bucket contains nothing but the export dump files.


SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(

      p_bucket_name    =>  'APPS-bucket',       

      p_directory_name =>  'DATA_PUMP_DIR') 

   AS TASK_ID FROM DUAL; 


In case you have the export files stored under a specific directory, you can tell the download procedure to download all the files under that directory by using the p_s3_prefix parameter, like this: [don't forget the slash / after the directory name]


SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(

      p_bucket_name    =>  'APPS-bucket',      

      p_s3_prefix          =>  'export_files/', 

      p_directory_name =>  'DATA_PUMP_DIR') 

   AS TASK_ID FROM DUAL;


Or, in case you only want to download one named file under a specific directory, provide the full file path in the p_s3_prefix parameter:


SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(

      p_bucket_name    =>  'APPS-bucket',

      p_s3_prefix      =>  'export_files/EXPORT_APPS_TRX_STG_04-03-19.dmp',

      p_directory_name =>  'DATA_PUMP_DIR')

   AS TASK_ID FROM DUAL;


The above command will return a TASK_ID:


TASK_ID                                                                        

--------------------------

1866786876865468-797  


Use that TASK_ID to monitor the download progress by running this statement:

SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('BDUMP','dbtask-1866786876865468-797.log'));


In case you get this error:

ORA-00904: "RDSADMIN"."RDSADMIN_S3_TASKS"."DOWNLOAD_FROM_S3": invalid identifier


This means S3 integration is not configured with your RDS.

To configure S3 integration: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/oracle-s3-integration.html


################## STEPS TO CREATE POLICY, ROLE AND ASSIGN THE ROLE TO THE DB ##############

FROM CONSOLE

============

Open the IAM Management Console: https://console.aws.amazon.com/iam/home?#/home

In the navigation pane, choose Policies -> Create policy On the Visual editor tab, choose Choose a service, and then choose S3 -> Check All S3 actions

Choose Resources, and choose Add ARN for the bucket -> Enter the Bucket name: APPS-bucket

Click Review Policy -> Give it a name "APPS-s3-integration" -> Create Policy

Associate your IAM role with your RDS DB:

Sign in to the AWS Management Console: https://console.aws.amazon.com/rds/

Choose the Oracle DB instance name -> On the Connectivity & security tab -> Manage IAM roles section:

IAM roles to this instance: -> "APPS-s3-integration"

Feature -> S3_INTEGRATION

Click "Add role"

Make sure that your database is using an option group that includes the S3_INTEGRATION option.

FROM CLI

========


POLICY CREATION:

The following AWS CLI command creates an IAM policy named rds-s3-integration-policy with these options. It grants access to the bucket identified by your-s3-bucket-arn (replace it with your bucket's ARN).

aws iam create-policy \

   --policy-name rds-s3-integration-policy \

   --policy-document '{

     "Version": "2012-10-17",

     "Statement": [

       {

         "Sid": "s3integration",

         "Action": [

           "s3:GetObject",

           "s3:ListBucket",

           "s3:PutObject"

         ],

         "Effect": "Allow",

         "Resource": [

           "arn:aws:s3:::your-s3-bucket-arn", 

           "arn:aws:s3:::your-s3-bucket-arn/*"

         ]

       }

     ]

   }'                        

ROLE CREATION:

The following AWS CLI command creates the rds-s3-integration-role for this purpose.


aws iam create-role \

   --role-name rds-s3-integration-role \

   --assume-role-policy-document '{

     "Version": "2012-10-17",

     "Statement": [

       {

         "Effect": "Allow",

         "Principal": {

            "Service": "rds.amazonaws.com"

          },

         "Action": "sts:AssumeRole"

       }

     ]

   }'                            

ATTACH POLICY TO ROLE:

The following AWS CLI command attaches the policy to the role named rds-s3-integration-role.


aws iam attach-role-policy \

   --policy-arn your-policy-arn \

   --role-name rds-s3-integration-role                             

ADD ROLE TO DB INSTANCE:

The following AWS CLI command adds the role to an Oracle DB instance named mydbinstance.


aws rds add-role-to-db-instance \

   --db-instance-identifier mydbinstance \

   --feature-name S3_INTEGRATION \

   --role-arn your-role-arn                           

   

   

   

Once the download is complete, query the downloaded files under DATA_PUMP_DIR using this query:

select * from table(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR')) order by mtime;


Any file having the "incomplete" keyword is still being downloaded.


Now the AWS related tasks are done, let's jump to the import part which is purely Oracle's.


Step4: Create the tablespace and the target schema user on the Target [RDS instance]:

In case the target user does not yet exist on the target RDS database, you can go ahead and create it along with its tablespace.


-- Create a tablespace: [Using Oracle Managed Files OMF]

CREATE SMALLFILE TABLESPACE "TBS_APPS" DATAFILE SIZE 100M AUTOEXTEND ON NEXT 100M LOGGING EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO;


-- In case you need to create a Password Verify Function on RDS:

Note: As you cannot create objects under SYS in RDS, you have to use the following ready-made procedure provided by AWS to create the verify function:

Note: The verify function name must contain one of these keywords: "PASSWORD", "VERIFY", "COMPLEXITY", "ENFORCE", or "STRENGTH"


begin

    rdsadmin.rdsadmin_password_verify.create_verify_function(

        p_verify_function_name     => 'CUSTOM_PASSWORD_VFY_FUNCTION',

        p_min_length                      => 8,

        p_max_length                     => 256,

        p_min_letters                      => 1,

        p_min_lowercase                => 1,

        p_min_uppercase                => 1,

        p_min_digits                       => 3,

        p_min_special                     => 2,

        p_disallow_simple_strings => true,

        p_disallow_whitespace       => true,

        p_disallow_username         => true,

        p_disallow_reverse             => true,

        p_disallow_db_name          => true,

        p_disallow_at_sign             => false);

end;

/

-- In case you want to create a new profile:

create profile APP_USERS limit

LOGICAL_READS_PER_SESSION DEFAULT

PRIVATE_SGA          DEFAULT

CPU_PER_SESSION         DEFAULT

PASSWORD_REUSE_TIME      DEFAULT

COMPOSITE_LIMIT         DEFAULT

PASSWORD_VERIFY_FUNCTION CUSTOM_PASSWORD_VFY_FUNCTION

PASSWORD_GRACE_TIME      DEFAULT

PASSWORD_LIFE_TIME     90

SESSIONS_PER_USER     DEFAULT

CONNECT_TIME         DEFAULT

CPU_PER_CALL         DEFAULT

FAILED_LOGIN_ATTEMPTS     6

PASSWORD_LOCK_TIME     DEFAULT

PASSWORD_REUSE_MAX     12

LOGICAL_READS_PER_CALL     DEFAULT

IDLE_TIME         DEFAULT;

 -- Create the user: [Here, per my business requirements, the user will be different from the original user on the source DB]

CREATE USER APPS IDENTIFIED  BY "test123" DEFAULT TABLESPACE TBS_APPS TEMPORARY TABLESPACE TEMP QUOTA UNLIMITED ON TBS_APPS PROFILE APP_USERS;

GRANT CREATE SESSION TO APPS;

GRANT CREATE JOB TO APPS;

GRANT CREATE PROCEDURE TO APPS;

GRANT CREATE SEQUENCE TO APPS;

GRANT CREATE TABLE TO APPS;


Step5: Import the dump file on the Target [RDS instance]:

Open a session from SQL Developer and make sure this session does not disconnect while the import is running. As the RDS master user, execute the following block of code, which will keep running in the foreground, allowing you to monitor the import job on the fly and see any incoming errors:


DECLARE

  ind NUMBER;                      -- Loop index

  h1 NUMBER;                       -- Data Pump job handle

  percent_done NUMBER;     -- Percentage of job complete

  job_state VARCHAR2(30);  -- To keep track of job state

  le ku$_LogEntry;         -- For WIP and error messages

  js ku$_JobStatus;        -- The job status from get_status

  jd ku$_JobDesc;         -- The job description from get_status

  sts ku$_Status;            -- The status object returned by get_status

BEGIN


  h1 := DBMS_DATAPUMP.OPEN( operation => 'IMPORT', job_mode => 'SCHEMA', job_name=>null);


-- Specify the single dump file and its directory:
  DBMS_DATAPUMP.ADD_FILE(handle => h1, directory => 'DATA_PUMP_DIR', filename => 'EXPORT_APPS_TRX_STG_04-03-19.dmp');

-- Specify the logfile for the import process: [Very important to read it later after the completion of the import]
  DBMS_DATAPUMP.ADD_FILE(handle => h1, directory => 'DATA_PUMP_DIR', filename => 'import_APPS_TRX_STG_04-03-19.LOG', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);


-- Disable Archivelog for the import: [12c new feature]
  DBMS_DATAPUMP.metadata_transform ( handle => h1, name => 'DISABLE_ARCHIVE_LOGGING', value => 1);


-- REMAP SCHEMA: [needed here because the target schema name differs from the source]

  DBMS_DATAPUMP.METADATA_REMAP(h1,'REMAP_SCHEMA','APPS_TRX','APPS');

-- If a table already exists: [SKIP, REPLACE, TRUNCATE]

  DBMS_DATAPUMP.SET_PARAMETER(h1,'TABLE_EXISTS_ACTION','SKIP');


-- REMAP TABLESPACE:
  DBMS_DATAPUMP.METADATA_REMAP(h1,'REMAP_TABLESPACE','APPS','TBS_APPS');


-- Start the job. An exception is returned if something is not set up properly.
  DBMS_DATAPUMP.START_JOB(h1);


-- The following loop will monitor the job until it completes; meanwhile, the progress information will be displayed:

 percent_done := 0;

  job_state := 'UNDEFINED';

  while (job_state != 'COMPLETED') and (job_state != 'STOPPED') loop

    dbms_datapump.get_status(h1,

           dbms_datapump.ku$_status_job_error +

           dbms_datapump.ku$_status_job_status +

           dbms_datapump.ku$_status_wip,-1,job_state,sts);

    js := sts.job_status;


-- If the percentage done changed, display the new value.
    if js.percent_done != percent_done

    then

      dbms_output.put_line('*** Job percent done = ' ||

                           to_char(js.percent_done));

      percent_done := js.percent_done;

    end if;


-- If any work-in-progress (WIP) or error messages were received for the job, display them.
    if (bitand(sts.mask,dbms_datapump.ku$_status_wip) != 0)

    then

      le := sts.wip;

    else

      if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)

      then

        le := sts.error;

      else

        le := null;

      end if;

    end if;

    if le is not null

    then

      ind := le.FIRST;

      while ind is not null loop

        dbms_output.put_line(le(ind).LogText);

        ind := le.NEXT(ind);

      end loop;

    end if;

  end loop;


-- Indicate that the job finished and gracefully detach from it.
  dbms_output.put_line('Job has completed');

  dbms_output.put_line('Final job state = ' || job_state);

  dbms_datapump.detach(h1);

END;

/

In case you have used wrong parameters or a bad combination, e.g. using METADATA_FILTER instead of METADATA_REMAP when importing to a schema having a different name, you will get a bunch of errors similar to the vague ones below:


ORA-31627: API call succeeded but more information is available

ORA-06512: at "SYS.DBMS_DATAPUMP", line 7143

ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79

ORA-06512: at "SYS.DBMS_DATAPUMP", line 4932

ORA-06512: at "SYS.DBMS_DATAPUMP", line 7137


ORA-06512: at line 7


You can also monitor the execution of the import job using this query:

SQL> SELECT owner_name, job_name, operation, job_mode,DEGREE, state FROM dba_datapump_jobs where state='EXECUTING';


 In case you want to Kill the job: <Provide the '<JOB_NAME>','<OWNER>'>

 SQL> DECLARE

            h1 NUMBER;

        BEGIN

            h1:=DBMS_DATAPUMP.ATTACH('SYS_IMPORT_SCHEMA_01','APPS');

            DBMS_DATAPUMP.STOP_JOB (h1, 1, 0);

         END;

  / 


Once the job is complete, compare the number of objects between the source and target DBs:

SQL> select object_type,count(*) from dba_objects where owner='APPS' group by object_type;


Also you can view the import log on RDS using this query:

SQL> set lines 10000 pages 0

           SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('DATA_PUMP_DIR','import_APPS_TRX_STG_04-03-19.LOG'));           


Or: You can upload the log to S3 bucket and get it from there:

SQL> select * from table(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR')) order by mtime;


SQL> SELECT rdsadmin.rdsadmin_s3_tasks.upload_to_s3( p_bucket_name => '<bucket_name>', p_prefix => '<file_name>', p_s3_prefix => '', p_directory_name => 'DATA_PUMP_DIR') AS TASK_ID FROM DUAL;

   

Run the After Import script that was generated by the exportdata script at Step 1, after replacing the original exported schema name APPS_TRX with the target imported schema name APPS.


Check the invalid objects:

SQL> col object_name for a45

select object_name,object_type,status from dba_objects where owner='APPS' and status<>'VALID';


Compile invalid object: [If found]

SQL> EXEC SYS.UTL_RECOMP.recomp_parallel(4, 'APPS');


Step6: [Optional] Delete the dump file from the Target [RDS instance]:

Check the existing files under the DATA_PUMP_DIR directory:

SQL> select * from table(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR')) order by mtime;

Generate delete script for all files:

SQL> select 'exec utl_file.fremove(''DATA_PUMP_DIR'','''||filename||''');' from table(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR')) order by mtime;

  Run the output script:

  e.g. exec utl_file.fremove('DATA_PUMP_DIR','EXPORT_APPS_TRX_STG_04-03-19.dmp');


For more reading on a similar common DBA tasks on RDS:

http://dba-tips.blogspot.com/2020/02/the-dba-guide-for-managing-oracle.html


References:

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Oracle.Procedural.Importing.html

S3 Bucket creation:

https://docs.aws.amazon.com/AmazonS3/latest/gsg/CreatingABucket.html

DBMS_DATAPUMP:

https://docs.oracle.com/database/121/ARPLS/d_datpmp.htm#ARPLS356

RDS Master Admin User:

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.MasterAccounts.html



REALTIME ISSUES - ORACLE CLOUD

****DCS-10045:Validation error encountered: Backup type is invalid.****
CONSOLE ERROR:

Status: The database restore operation failed. Run 'dbcli list-jobs' on all hosts in the DB system to check for problems and fix them before rerunning the operation.

[root@testbackupdb ~]# dbcli list-jobs

ID                                       Description                                                                 Created                             Status
---------------------------------------- --------------------------------------------------------------------------- ----------------------------------- ----------
7182911b-1733-4805-b351-081f6be6605b     Authentication key update for DCS_ADMIN                                     May 5, 2019 4:09:58 PM UTC          Success
ce8f602b-84c7-4295-982a-49098dca465d     Provisioning service creation                                               May 5, 2019 4:11:41 PM UTC          Success
47876bcb-50a5-42ac-9648-52221a12a974     SSH keys update                                                             May 5, 2019 5:03:22 PM UTC          Success
9d01bc5f-0831-48ed-a1fc-f0982e54feaa     SSH key delete                                                              May 5, 2019 5:05:14 PM UTC          Success
80f9b24e-1061-4906-88d1-78f6b824b454     create object store:b9a7iawWuBfGYFaNH7MC                                    May 6, 2019 3:14:45 AM UTC          Success
25845fda-ea88-40b0-bb54-3be86ec265f6     create backup config:b9a7iawWuBfGYFaNH7MC_BC                                May 6, 2019 3:15:31 AM UTC          Success
64903937-7519-415b-b658-186cc6bebbdd     update database : BKUPDB                                                    May 6, 2019 3:16:23 AM UTC          Success
f3d0309f-3bf0-4ec2-b3d7-2d2fcde8599f     Server Patching                                                             May 6, 2019 3:19:53 AM UTC          Success
4fa3b016-8448-4535-b901-a5edd6dc2c1c     Create Regular-L0 Backup with TAG-DBTRegular-L01557111640379VSL for Db:BKUPDB in OSS:b9a7iawWuBfGYFaNH7MC May 6, 2019 3:19:56 AM UTC          Success
5e902ca8-f332-4c7f-86ca-afca1590e4b9     Delete Backup for Database name: BKUPDB_fra1k8                              May 6, 2019 3:26:48 AM UTC          Success
251ba542-234e-40f6-a7a9-f3a1ef281cb8     DB Home Prechecks                                                           May 6, 2019 5:05:43 AM UTC          Success
bc5eb7cd-b320-453f-8dea-e296e645df58     Create Longterm Backup with TAG-DBTLongterm1557125463725Zmf for Db:BKUPDB in OSS:b9a7iawWuBfGYFaNH7MC May 6, 2019 6:52:12 AM UTC          Success
43a924a2-6e16-43e0-a2e9-cc4cd1d6b53f     Delete Backup for Database name: BKUPDB_fra1k8                              May 6, 2019 7:05:18 AM UTC          Success
eefab9b7-2b32-4327-9eff-bf55d99ca7eb     Create recovery-pitr : time '05/06/2019 07:10:08' for db : BKUPDB           May 6, 2019 7:10:57 AM UTC          Failure

[root@testbackupdb ~]# dbcli describe-job  --jobid eefab9b7-2b32-4327-9eff-bf55d99ca7eb

Job details
----------------------------------------------------------------
                     ID:  eefab9b7-2b32-4327-9eff-bf55d99ca7eb
            Description:  Create recovery-pitr : time '05/06/2019 07:10:08' for db : BKUPDB
                 Status:  Failure
                Created:  May 6, 2019 7:10:57 AM UTC
                Message:  DCS-10001:Internal error encountered: Failed to run RMAN command. Please refer log at location : testbackupdb: /opt/oracle/dcs/log/testbackupdb/rman/bkup/BKUPDB_fra1k8/rman_restore_2019-05-06_07-12-23-4741329873884168661.log.Failed to do restore validati

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
task:TaskZLockWrapper_7173               May 6, 2019 7:11:07 AM UTC          May 6, 2019 7:12:37 AM UTC          Failure
task:TaskSequential_7174                 May 6, 2019 7:11:07 AM UTC          May 6, 2019 7:12:37 AM UTC          Failure
Database recovery validation             May 6, 2019 7:11:08 AM UTC          May 6, 2019 7:12:37 AM UTC          Failure

ISSUE: the RMAN restore-validation log shows that recovery must pass a certain SCN, but the backup pieces covering that SCN range (backup set key 20) could not be located:

validation succeeded for archived log
recovery will be done up to SCN 1761481
Media recovery start SCN is 1760987
Recovery must be done beyond SCN 1761012 to clear datafile fuzziness
could not locate pieces of backup set key 20
validation succeeded for backup piece
Finished restore at 2019/05/06 07:12:36

Recovery Manager complete.

We had used until time "05/06/2019 07:10:08", which maps into log sequence 20:


SQL> select sequence#,first_change#,next_change# from v$log order by 1;

 SEQUENCE# FIRST_CHANGE# NEXT_CHANGE#
---------- ------------- ------------
        18       1761147      1761155
        19       1761155      1761444
        20       1761444   2.8147E+14


FIX: so I restored using until SCN 1761444, the FIRST_CHANGE# of sequence 20 (a hedged sketch of the equivalent dbcli command follows):
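
(Flag names per the dbcli recover-database reference; verify with dbcli recover-database -h before running:)

[root@testbackupdb ~]# dbcli recover-database -in BKUPDB -t SCN -s 1761444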

[root@testbackupdb ~]# dbcli list-jobs

ID                                       Description                                                                 Created                             Status
---------------------------------------- --------------------------------------------------------------------------- ----------------------------------- ----------
7182911b-1733-4805-b351-081f6be6605b     Authentication key update for DCS_ADMIN                                     May 5, 2019 4:09:58 PM UTC          Success
ce8f602b-84c7-4295-982a-49098dca465d     Provisioning service creation                                               May 5, 2019 4:11:41 PM UTC          Success
47876bcb-50a5-42ac-9648-52221a12a974     SSH keys update                                                             May 5, 2019 5:03:22 PM UTC          Success
9d01bc5f-0831-48ed-a1fc-f0982e54feaa     SSH key delete                                                              May 5, 2019 5:05:14 PM UTC          Success
80f9b24e-1061-4906-88d1-78f6b824b454     create object store:b9a7iawWuBfGYFaNH7MC                                    May 6, 2019 3:14:45 AM UTC          Success
25845fda-ea88-40b0-bb54-3be86ec265f6     create backup config:b9a7iawWuBfGYFaNH7MC_BC                                May 6, 2019 3:15:31 AM UTC          Success
64903937-7519-415b-b658-186cc6bebbdd     update database : BKUPDB                                                    May 6, 2019 3:16:23 AM UTC          Success
f3d0309f-3bf0-4ec2-b3d7-2d2fcde8599f     Server Patching                                                             May 6, 2019 3:19:53 AM UTC          Success
4fa3b016-8448-4535-b901-a5edd6dc2c1c     Create Regular-L0 Backup with TAG-DBTRegular-L01557111640379VSL for Db:BKUPDB in OSS:b9a7iawWuBfGYFaNH7MC May 6, 2019 3:19:56 AM UTC          Success
5e902ca8-f332-4c7f-86ca-afca1590e4b9     Delete Backup for Database name: BKUPDB_fra1k8                              May 6, 2019 3:26:48 AM UTC          Success
251ba542-234e-40f6-a7a9-f3a1ef281cb8     DB Home Prechecks                                                           May 6, 2019 5:05:43 AM UTC          Success
bc5eb7cd-b320-453f-8dea-e296e645df58     Create Longterm Backup with TAG-DBTLongterm1557125463725Zmf for Db:BKUPDB in OSS:b9a7iawWuBfGYFaNH7MC May 6, 2019 6:52:12 AM UTC          Success
43a924a2-6e16-43e0-a2e9-cc4cd1d6b53f     Delete Backup for Database name: BKUPDB_fra1k8                              May 6, 2019 7:05:18 AM UTC          Success
eefab9b7-2b32-4327-9eff-bf55d99ca7eb     Create recovery-pitr : time '05/06/2019 07:10:08' for db : BKUPDB           May 6, 2019 7:10:57 AM UTC          Failure
eab18e5c-ea75-41dd-a4ed-98aa209f2132     Create detailed Backup Report                                               May 7, 2019 2:16:37 AM UTC          Success
f22341d1-d835-4828-8bf3-b1d8f0158c9b     Create recovery-latest for db : BKUPDB                                      May 7, 2019 2:45:07 AM UTC          Failure
65f2fcf5-776c-4630-ab8f-76975ab0b935     Create recovery-scn : scn 1761444 for db : BKUPDB                           May 7, 2019 2:57:46 AM UTC          Success




[root@testbackupdb ~]# dbcli describe-job --jobid 65f2fcf5-776c-4630-ab8f-76975ab0b935

Job details
----------------------------------------------------------------
                     ID:  65f2fcf5-776c-4630-ab8f-76975ab0b935
            Description:  Create recovery-scn : scn 1761444 for db : BKUPDB
                 Status:  Running
                Created:  May 7, 2019 2:57:46 AM UTC
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Database recovery validation             May 7, 2019 2:57:58 AM UTC          May 7, 2019 2:59:37 AM UTC          Success
Database recovery                        May 7, 2019 2:59:38 AM UTC          May 7, 2019 2:59:38 AM UTC          Running

[root@testbackupdb ~]# dbcli describe-job --jobid 65f2fcf5-776c-4630-ab8f-76975ab0b935

Job details
----------------------------------------------------------------
                     ID:  65f2fcf5-776c-4630-ab8f-76975ab0b935
            Description:  Create recovery-scn : scn 1761444 for db : BKUPDB
                 Status:  Success
                Created:  May 7, 2019 2:57:46 AM UTC
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Database recovery validation             May 7, 2019 2:57:58 AM UTC          May 7, 2019 2:59:37 AM UTC          Success
Database recovery                        May 7, 2019 2:59:38 AM UTC          May 7, 2019 3:02:47 AM UTC          Success
Enable block change tracking             May 7, 2019 3:02:47 AM UTC          May 7, 2019 3:02:50 AM UTC          Success
Database opening                         May 7, 2019 3:02:50 AM UTC          May 7, 2019 3:03:44 AM UTC          Success
Database restart                         May 7, 2019 3:03:44 AM UTC          May 7, 2019 3:05:06 AM UTC          Success
Recovery metadata persistance            May 7, 2019 3:05:06 AM UTC          May 7, 2019 3:05:06 AM UTC          Success

[root@testbackupdb ~]#

ADDING A BLOCK VOLUME to an OCI COMPUTE INSTANCE

login as: opc
Authenticating with public key "rsa-key-20190505"
Last login: Sun May  5 09:02:03 2019 from 157.44.132.21
[opc@testclouddb ~]$ sudo su -
Last login: Sun May  5 09:22:53 GMT 2019 on pts/2
[root@testclouddb ~]# clear
(Goal: add a new mount point /u02 on a newly attached block volume.)
[root@testclouddb ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        7.2G     0  7.2G   0% /dev
tmpfs           7.3G     0  7.3G   0% /dev/shm
tmpfs           7.3G  8.8M  7.3G   1% /run
tmpfs           7.3G     0  7.3G   0% /sys/fs/cgroup
/dev/sda3        39G  1.9G   37G   5% /
/dev/sda1       200M  9.7M  191M   5% /boot/efi
tmpfs           1.5G     0  1.5G   0% /run/user/1000
/dev/sdb       1008G   72M  957G   1% /u01
tmpfs           1.5G     0  1.5G   0% /run/user/0
(Next: add the mount for /u02 by running the iSCSI attach commands, which you can copy from the Console: instance -> Attached block volumes -> Details -> iSCSI Commands & Information.)
[root@testclouddb ~]# sudo iscsiadm -m node -o new -T iqn.2015-12.com.oracleiaas:8c23597f-f1e4-4219-a4b7-60107312a478 -p 169.254.2.3:3260
New iSCSI node [tcp:[hw=,ip=,net_if=,iscsi_if=default] 169.254.2.3,3260,-1 iqn.2015-12.com.oracleiaas:8c23597f-f1e4-4219-a4b7-60107312a478] added
[root@testclouddb ~]# sudo iscsiadm -m node -o update -T iqn.2015-12.com.oracleiaas:8c23597f-f1e4-4219-a4b7-60107312a478 -n node.startup -v automatic
[root@testclouddb ~]# sudo iscsiadm -m node -T iqn.2015-12.com.oracleiaas:8c23597f-f1e4-4219-a4b7-60107312a478 -p 169.254.2.3:3260 -l
Logging in to [iface: default, target: iqn.2015-12.com.oracleiaas:8c23597f-f1e4-4219-a4b7-60107312a478, portal: 169.254.2.3,3260] (multiple)
Login to [iface: default, target: iqn.2015-12.com.oracleiaas:8c23597f-f1e4-4219-a4b7-60107312a478, portal: 169.254.2.3,3260] successful.
[root@testclouddb ~]#
[root@testclouddb ~]# fdisk -l
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sda: 50.0 GB, 50010783744 bytes, 97677312 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 1048576 bytes
Disk label type: gpt
Disk identifier: E612AFD4-C2DB-46D5-AB2D-C7BAD48E71FA


#         Start          End    Size  Type            Name
 1         2048       411647    200M  EFI System      EFI System Partition
 2       411648     17188863      8G  Linux swap
 3     17188864     97675263   38.4G  Microsoft basic

Disk /dev/sdb: 1099.5 GB, 1099511627776 bytes, 2147483648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 1048576 bytes


Disk /dev/sdc: 1099.5 GB, 1099511627776 bytes, 2147483648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 1048576 bytes

[root@testclouddb ~]# mkdir /u02
[root@testclouddb ~]# mkfs /dev/sdc
mke2fs 1.42.9 (28-Dec-2013)
/dev/sdc is entire device, not just one partition!
Proceed anyway? (y,n) n
[root@testclouddb ~]# mount /dev/sdc /u02
mount: unknown filesystem type '(null)'
[root@testclouddb ~]# mount /dev/xvdf1 /test
mount: mount point /test does not exist
[root@testclouddb ~]# mkfs /dev/sdc
mke2fs 1.42.9 (28-Dec-2013)
/dev/sdc is entire device, not just one partition!
Proceed anyway? (y,n) y
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=256 blocks
67108864 inodes, 268435456 blocks
13421772 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
8192 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000, 214990848

Allocating group tables: done
Writing inode tables: done
Writing superblocks and filesystem accounting information:
done

[root@testclouddb ~]#
[root@testclouddb ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        7.2G     0  7.2G   0% /dev
tmpfs           7.3G     0  7.3G   0% /dev/shm
tmpfs           7.3G  8.7M  7.3G   1% /run
tmpfs           7.3G     0  7.3G   0% /sys/fs/cgroup
/dev/sda3        39G  1.9G   37G   5% /
/dev/sda1       200M  9.7M  191M   5% /boot/efi
tmpfs           1.5G     0  1.5G   0% /run/user/1000
/dev/sdb       1008G   72M  957G   1% /u01
[root@testclouddb ~]# mount /dev/sdc /u02
[root@testclouddb ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        7.2G     0  7.2G   0% /dev
tmpfs           7.3G     0  7.3G   0% /dev/shm
tmpfs           7.3G  8.7M  7.3G   1% /run
tmpfs           7.3G     0  7.3G   0% /sys/fs/cgroup
/dev/sda3        39G  1.9G   37G   5% /
/dev/sda1       200M  9.7M  191M   5% /boot/efi
tmpfs           1.5G     0  1.5G   0% /run/user/1000
/dev/sdb       1008G   72M  957G   1% /u01
/dev/sdc       1008G   72M  957G   1% /u02
[root@testclouddb ~]#
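
To make the new mount survive reboots, a hedged /etc/fstab sketch (the filesystem above was created with plain mkfs defaults, i.e. ext2; the _netdev and nofail options matter for iSCSI volumes so the OS waits for the network and still boots if the volume is absent):

[root@testclouddb ~]# echo '/dev/sdc /u02 ext2 defaults,_netdev,nofail 0 2' >> /etc/fstab
[root@testclouddb ~]# mount -a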