Monday, December 11, 2023

Creation of a standby database using RMAN DUPLICATE

As indicated in a previous post, we need the following information for setting up a standby database:
  • A running standby instance.
  • The password file of the primary database.
  • A standby version of the primary database's control file.
  • The data files of the primary database.
In addition to these items, setting up a standby database also requires some configuration, such as static listener entries and instance parameters on both the primary and standby database instances. Let's review all steps one at a time during the creation of our standby database named "berlin".

Creating the standby instance

For an RMAN DUPLICATE action we need a running auxiliary instance. For this we can create a dummy instance parameter file (in the example: /tmp/init.ora) with the following contents:
*.db_domain='mydomain.com'
*.db_name='thehague'
*.db_unique_name='berlin'
Then we'll start the auxiliary instance:
$ export ORACLE_HOME=/u01/app/oracle/product/19.20/dbhome_1
$ PATH=${ORACLE_HOME}/bin:${PATH}
$ export ORACLE_SID=berlin
$ mkdir -p /u01/app/oracle/admin/berlin/adump /u01/oradata/berlin /u01/fra/berlin
$ sqlplus "/ as sysdba"
SQL> STARTUP NOMOUNT pfile='/tmp/init.ora';
SQL> EXIT
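To double-check that the dummy instance is indeed up, a query like the following can be used; an instance in NOMOUNT mode reports the status STARTED:
$ sqlplus "/ as sysdba"
SQL> SELECT instance_name, status FROM v$instance;
SQL> EXIT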

Configuring the password file

If you remember, we already copied over the password file in the previous blog post. In case those steps weren't taken yet: when a password file doesn't already exist for the primary database, create one at the primary site using:
$ orapwd file=${ORACLE_HOME}/dbs/orapwthehague
The password file (named orapw${ORACLE_SID}, residing in ${ORACLE_HOME}/dbs) should then be copied over to the standby.mydomain.com machine for use with the standby database:
$ scp ${ORACLE_HOME}/dbs/orapwthehague standby.mydomain.com:/u01/app/oracle/product/19.20/dbhome_1/dbs/orapwberlin
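To verify the copied file on the standby side, orapwd can describe it (just a sanity check; the exact output depends on the password file format version):
$ orapwd describe file=${ORACLE_HOME}/dbs/orapwberlin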

Create the standby database

Creating the standby database using RMAN DUPLICATE involves fewer steps than a manual creation of a standby database. We simply use RMAN's ability to duplicate the datafiles, create the SPFILE, and so on. The target database on which to operate is the primary database (thehague in our case) and the auxiliary database is the database to which the target database is copied (berlin in the given example):
[oracle@standby ~]$ rman target sys/oracle@thehague_dgmgrl auxiliary sys/oracle@berlin_dgmgrl
RMAN> RUN {
2>    DUPLICATE TARGET DATABASE
3>    FOR STANDBY
4>    FROM ACTIVE DATABASE
5>    SPFILE
6>       PARAMETER_VALUE_CONVERT 'thehague', 'berlin', 'THEHAGUE', 'BERLIN'
7>       SET db_name='thehague'
8>       SET db_unique_name='berlin'
9>       SET db_create_file_dest='/u01/oradata/berlin/'
10>       SET db_create_online_log_dest_1='/u01/oradata/berlin/'
11>       SET db_create_online_log_dest_2='/u01/fra/berlin/'
12>       SET db_file_name_convert='thehague','berlin','THEHAGUE','BERLIN'
13>       SET log_file_name_convert='thehague','berlin','THEHAGUE','BERLIN'
14>       SET standby_file_management='auto';
15> }
Starting Duplicate Db at 11-DEC-23
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=37 device type=DISK

contents of Memory Script:
{
   backup as copy reuse
   passwordfile auxiliary format  '/u01/app/oracle/product/19.20/dbhome_1/dbs/orapwberlin'   ;
   restore clone from service  'thehague_dgmgrl' spfile to
 '/u01/app/oracle/product/19.20/dbhome_1/dbs/spfileberlin.ora';
   sql clone "alter system set spfile= ''/u01/app/oracle/product/19.20/dbhome_1/dbs/spfileberlin.ora''";
}
executing Memory Script

Starting backup at 11-DEC-23
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=1639 device type=DISK
Finished backup at 11-DEC-23

Starting restore at 11-DEC-23
using channel ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service thehague_dgmgrl
channel ORA_AUX_DISK_1: restoring SPFILE
output file name=/u01/app/oracle/product/19.20/dbhome_1/dbs/spfileberlin.ora
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 11-DEC-23

sql statement: alter system set spfile= ''/u01/app/oracle/product/19.20/dbhome_1/dbs/spfileberlin.ora''

contents of Memory Script:
{
   sql clone "alter system set  audit_file_dest =
 ''/u01/app/oracle/admin/berlin/adump'' comment=
 '''' scope=spfile";
   sql clone "alter system set  control_files =
 ''/u01/oradata/berlin/BERLIN/controlfile/o1_mf_lq6r2lxd_.ctl'', ''/u01/fra/BERLIN/controlfile/o1_mf_lq6r2m6n_.ctl'' comment=
 '''' scope=spfile";
   sql clone "alter system set  dispatchers =
 ''(PROTOCOL=TCP) (SERVICE=berlinXDB)'' comment=
 '''' scope=spfile";
   sql clone "alter system set  db_name =
 ''thehague'' comment=
 '''' scope=spfile";
   sql clone "alter system set  db_unique_name =
 ''berlin'' comment=
 '''' scope=spfile";
   sql clone "alter system set  db_create_file_dest =
 ''/u01/oradata/berlin/'' comment=
 '''' scope=spfile";
   sql clone "alter system set  db_create_online_log_dest_1 =
 ''/u01/oradata/berlin/'' comment=
 '''' scope=spfile";
   sql clone "alter system set  db_create_online_log_dest_2 =
 ''/u01/fra/berlin/'' comment=
 '''' scope=spfile";
   sql clone "alter system set  db_file_name_convert =
 ''thehague'', ''berlin'', ''THEHAGUE'', ''BERLIN'' comment=
 '''' scope=spfile";
   sql clone "alter system set  log_file_name_convert =
 ''thehague'', ''berlin'', ''THEHAGUE'', ''BERLIN'' comment=
 '''' scope=spfile";
   sql clone "alter system set  standby_file_management =
 ''auto'' comment=
 '''' scope=spfile";
   shutdown clone immediate;
   startup clone nomount;
}
executing Memory Script

sql statement: alter system set  audit_file_dest =  ''/u01/app/oracle/admin/berlin/adump'' comment= '''' scope=spfile

sql statement: alter system set  control_files =  ''/u01/oradata/berlin/BERLIN/controlfile/o1_mf_lq6r2lxd_.ctl'',
''/u01/fra/BERLIN/controlfile/o1_mf_lq6r2m6n_.ctl'' comment= '''' scope=spfile

sql statement: alter system set  dispatchers =  ''(PROTOCOL=TCP) (SERVICE=berlinXDB)'' comment= '''' scope=spfile

sql statement: alter system set  db_name =  ''thehague'' comment= '''' scope=spfile

sql statement: alter system set  db_unique_name =  ''berlin'' comment= '''' scope=spfile

sql statement: alter system set  db_create_file_dest =  ''/u01/oradata/berlin/'' comment= '''' scope=spfile

sql statement: alter system set  db_create_online_log_dest_1 =  ''/u01/oradata/berlin/'' comment= '''' scope=spfile

sql statement: alter system set  db_create_online_log_dest_2 =  ''/u01/fra/berlin/'' comment= '''' scope=spfile

sql statement: alter system set  db_file_name_convert =  ''thehague'', ''berlin'', ''THEHAGUE'', ''BERLIN'' comment= '''' scope=spfile

sql statement: alter system set  log_file_name_convert =  ''thehague'', ''berlin'', ''THEHAGUE'', ''BERLIN'' comment= ''''
scope=spfile

sql statement: alter system set  standby_file_management =  ''auto'' comment= '''' scope=spfile

Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area    4294963240 bytes

Fixed Size                     9175080 bytes
Variable Size                805306368 bytes
Database Buffers            3472883712 bytes
Redo Buffers                   7598080 bytes

contents of Memory Script:
{
   sql clone "alter system set  control_files =
  ''/u01/oradata/berlin/BERLIN/controlfile/o1_mf_lq6r2lxd_.ctl'', ''/u01/fra/BERLIN/controlfile/o1_mf_lq6r2m6n_.ctl'' comment=
''Set by RMAN'' scope=spfile";
   restore clone from service  'thehague_dgmgrl' standby controlfile;
}
executing Memory Script

sql statement: alter system set  control_files =   ''/u01/oradata/berlin/BERLIN/controlfile/o1_mf_lq6r2lxd_.ctl'',
''/u01/fra/BERLIN/controlfile/o1_mf_lq6r2m6n_.ctl'' comment= ''Set by RMAN'' scope=spfile

Starting restore at 11-DEC-23
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=1595 device type=DISK

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service thehague_dgmgrl
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:02
output file name=/u01/oradata/berlin/BERLIN/controlfile/o1_mf_lq6r2lxd_.ctl
output file name=/u01/fra/BERLIN/controlfile/o1_mf_lq6r2m6n_.ctl
Finished restore at 11-DEC-23

contents of Memory Script:
{
   sql clone 'alter database mount standby database';
}
executing Memory Script

sql statement: alter database mount standby database

contents of Memory Script:
{
   set newname for tempfile  1 to
 "/u01/oradata/berlin/THEHAGUE/datafile/o1_mf_temp_lq6r38qq_.tmp";
   set newname for tempfile  2 to
 "/u01/oradata/berlin/THEHAGUE/datafile/temp012023-12-08_19-44-23-203-PM.dbf";
   set newname for tempfile  3 to
 "/u01/oradata/berlin/THEHAGUE/0C052C8F8C754FD7E0630B80A8C0D630/datafile/o1_mf_temp_lq6sv86r_.dbf";
   switch clone tempfile all;
   set newname for datafile  1 to
 "/u01/oradata/berlin/THEHAGUE/datafile/o1_mf_system_lq6qzxfp_.dbf";
   set newname for datafile  3 to
 "/u01/oradata/berlin/THEHAGUE/datafile/o1_mf_sysaux_lq6r0plj_.dbf";
   set newname for datafile  4 to
 "/u01/oradata/berlin/THEHAGUE/datafile/o1_mf_undotbs1_lq6r15px_.dbf";
   set newname for datafile  5 to
 "/u01/oradata/berlin/THEHAGUE/datafile/o1_mf_system_lq6rzbhy_.dbf";
   set newname for datafile  6 to
 "/u01/oradata/berlin/THEHAGUE/datafile/o1_mf_sysaux_lq6rzbj8_.dbf";
   set newname for datafile  7 to
 "/u01/oradata/berlin/THEHAGUE/datafile/o1_mf_users_lq6r16sw_.dbf";
   set newname for datafile  8 to
 "/u01/oradata/berlin/THEHAGUE/datafile/o1_mf_undotbs1_lq6rzbjm_.dbf";
   set newname for datafile  9 to
 "/u01/oradata/berlin/THEHAGUE/0C052C8F8C754FD7E0630B80A8C0D630/datafile/o1_mf_system_lq6sv862_.dbf";
   set newname for datafile  10 to
 "/u01/oradata/berlin/THEHAGUE/0C052C8F8C754FD7E0630B80A8C0D630/datafile/o1_mf_sysaux_lq6sv86r_.dbf";
   set newname for datafile  11 to
 "/u01/oradata/berlin/THEHAGUE/0C052C8F8C754FD7E0630B80A8C0D630/datafile/o1_mf_undotbs1_lq6sv86r_.dbf";
   set newname for datafile  12 to
 "/u01/oradata/berlin/THEHAGUE/0C052C8F8C754FD7E0630B80A8C0D630/datafile/o1_mf_users_lq6svvwv_.dbf";
   restore
   from  nonsparse   from service
 'thehague_dgmgrl'   clone database
   ;
   sql 'alter system archive log current';
}
executing Memory Script

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

renamed tempfile 1 to /u01/oradata/berlin/THEHAGUE/datafile/o1_mf_temp_lq6r38qq_.tmp in control file
renamed tempfile 2 to /u01/oradata/berlin/THEHAGUE/datafile/temp012023-12-08_19-44-23-203-PM.dbf in control file
renamed tempfile 3 to /u01/oradata/berlin/THEHAGUE/0C052C8F8C754FD7E0630B80A8C0D630/datafile/o1_mf_temp_lq6sv86r_.dbf in control file

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting restore at 11-DEC-23
using channel ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service thehague_dgmgrl
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to /u01/oradata/berlin/THEHAGUE/datafile/o1_mf_system_lq6qzxfp_.dbf
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:07
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service thehague_dgmgrl
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00003 to /u01/oradata/berlin/THEHAGUE/datafile/o1_mf_sysaux_lq6r0plj_.dbf
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:15
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service thehague_dgmgrl
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00004 to /u01/oradata/berlin/THEHAGUE/datafile/o1_mf_undotbs1_lq6r15px_.dbf
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:03
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service thehague_dgmgrl
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00005 to /u01/oradata/berlin/THEHAGUE/datafile/o1_mf_system_lq6rzbhy_.dbf
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:04
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service thehague_dgmgrl
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00006 to /u01/oradata/berlin/THEHAGUE/datafile/o1_mf_sysaux_lq6rzbj8_.dbf
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:03
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service thehague_dgmgrl
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00007 to /u01/oradata/berlin/THEHAGUE/datafile/o1_mf_users_lq6r16sw_.dbf
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service thehague_dgmgrl
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00008 to /u01/oradata/berlin/THEHAGUE/datafile/o1_mf_undotbs1_lq6rzbjm_.dbf
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:03
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service thehague_dgmgrl
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00009 to
/u01/oradata/berlin/THEHAGUE/0C052C8F8C754FD7E0630B80A8C0D630/datafile/o1_mf_system_lq6sv862_.dbf
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:07
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service thehague_dgmgrl
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00010 to
/u01/oradata/berlin/THEHAGUE/0C052C8F8C754FD7E0630B80A8C0D630/datafile/o1_mf_sysaux_lq6sv86r_.dbf
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:03
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service thehague_dgmgrl
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00011 to
/u01/oradata/berlin/THEHAGUE/0C052C8F8C754FD7E0630B80A8C0D630/datafile/o1_mf_undotbs1_lq6sv86r_.dbf
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:03
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service thehague_dgmgrl
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00012 to
/u01/oradata/berlin/THEHAGUE/0C052C8F8C754FD7E0630B80A8C0D630/datafile/o1_mf_users_lq6svvwv_.dbf
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 11-DEC-23

sql statement: alter system archive log current

contents of Memory Script:
{
   switch clone datafile all;
}
executing Memory Script

datafile 1 switched to datafile copy
input datafile copy RECID=4 STAMP=1155315390 file name=/u01/oradata/berlin/BERLIN/datafile/o1_mf_system_lqgd7vpq_.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=5 STAMP=1155315390 file name=/u01/oradata/berlin/BERLIN/datafile/o1_mf_sysaux_lqgd82v5_.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=6 STAMP=1155315390 file name=/u01/oradata/berlin/BERLIN/datafile/o1_mf_undotbs1_lqgd8l1p_.dbf
datafile 5 switched to datafile copy
input datafile copy RECID=7 STAMP=1155315390 file
name=/u01/oradata/berlin/BERLIN/0C04F90FAB7B4936E0630B80A8C00EB5/datafile/o1_mf_system_lqgd8o3f_.dbf
datafile 6 switched to datafile copy
input datafile copy RECID=8 STAMP=1155315390 file
name=/u01/oradata/berlin/BERLIN/0C04F90FAB7B4936E0630B80A8C00EB5/datafile/o1_mf_sysaux_lqgd8r5z_.dbf
datafile 7 switched to datafile copy
input datafile copy RECID=9 STAMP=1155315390 file name=/u01/oradata/berlin/BERLIN/datafile/o1_mf_users_lqgd8vb2_.dbf
datafile 8 switched to datafile copy
input datafile copy RECID=10 STAMP=1155315390 file
name=/u01/oradata/berlin/BERLIN/0C04F90FAB7B4936E0630B80A8C00EB5/datafile/o1_mf_undotbs1_lqgd8wf0_.dbf
datafile 9 switched to datafile copy
input datafile copy RECID=11 STAMP=1155315390 file
name=/u01/oradata/berlin/BERLIN/0C052C8F8C754FD7E0630B80A8C0D630/datafile/o1_mf_system_lqgd8zj0_.dbf
datafile 10 switched to datafile copy
input datafile copy RECID=12 STAMP=1155315390 file
name=/u01/oradata/berlin/BERLIN/0C052C8F8C754FD7E0630B80A8C0D630/datafile/o1_mf_sysaux_lqgd96ml_.dbf
datafile 11 switched to datafile copy
input datafile copy RECID=13 STAMP=1155315390 file
name=/u01/oradata/berlin/BERLIN/0C052C8F8C754FD7E0630B80A8C0D630/datafile/o1_mf_undotbs1_lqgd99q2_.dbf
datafile 12 switched to datafile copy
input datafile copy RECID=14 STAMP=1155315390 file
name=/u01/oradata/berlin/BERLIN/0C052C8F8C754FD7E0630B80A8C0D630/datafile/o1_mf_users_lqgd9dsd_.dbf
Finished Duplicate Db at 11-DEC-23

RMAN> EXIT
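At this point the duplicate has left us with a mounted physical standby. A quick sanity check on the standby side should show the role and open mode (expected values: PHYSICAL STANDBY and MOUNTED):
$ sqlplus "/ as sysdba"
SQL> SELECT db_unique_name, database_role, open_mode FROM v$database;
SQL> EXIT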

Creating the Data Guard configuration

Make sure the Data Guard Broker process is running:
sqlplus "/ as sysdba"
SQL> ALTER SYSTEM SET dg_broker_start=true sid='*';
SQL> EXIT
Perform this action on both primary and standby sides if necessary. Now start up the Data Guard Manager on the primary side:
[oracle@primary ~]$ dgmgrl
DGMGRL for Linux: Release 19.0.0.0.0 - Production on Mon Dec 11 16:57:57 2023
Version 19.20.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> CONNECT /
Connected to "thehague"
Connected as SYSDG.
DGMGRL> SHOW CONFIGURATION

Configuration - dg_demo

  Protection Mode: MaxPerformance
  Members:
  thehague - Primary database

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 57 seconds ago)

DGMGRL> ADD DATABASE berlin AS CONNECT IDENTIFIER IS berlin_dgt;
Database "berlin" added
DGMGRL> EDIT DATABASE berlin SET PROPERTY staticconnectidentifier=berlin_dgmgrl;
Property "staticconnectidentifier" updated
DGMGRL> ENABLE DATABASE berlin
Enabled.
DGMGRL> SHOW CONFIGURATION
Configuration - dg_demo

  Protection Mode: MaxPerformance
  Members:
  thehague - Primary database
    berlin   - Physical standby database
      Error: ORA-16810: multiple errors or warnings detected for the member

Fast-Start Failover:  Disabled

Configuration Status:
ERROR   (status updated 39 seconds ago)

DGMGRL> SHOW DATABASE berlin

Database - berlin

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 14 seconds ago)
  Apply Lag:          (unknown)
  Average Apply Rate: (unknown)
  Real Time Query:    OFF
  Instance(s):
    berlin

  Database Warning(s):
    ORA-16854: apply lag could not be determined

Database Status:
WARNING

DGMGRL> /

Database - berlin

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 3 seconds ago)
  Apply Lag:          0 seconds (computed 3 seconds ago)
  Average Apply Rate: 3.97 MByte/s
  Real Time Query:    OFF
  Instance(s):
    berlin

Database Status:
SUCCESS

DGMGRL> EXIT
And, YESSSS!!! We once again succeeded in creating a standby database environment!

Saturday, December 9, 2023

Manual creation of a physical standby database using an RMAN backup

As indicated in a previous post, we need the following information for setting up a standby database:
  • A running standby instance.
  • The password file of the primary database.
  • A standby version of the primary database's control file.
  • The data files of the primary database.
In addition to these items, setting up a standby database also requires some configuration, such as static listener entries and instance parameters on both the primary and standby database instances. Let's review all steps one at a time during the creation of our standby database named "berlin".

Creating the standby instance

The first requirement can be met by e.g. creating an instance parameter file ("init.ora" or pfile) based on the server parameter file (spfile) of the primary database, altering the database unique name, and adjusting any file creation locations to match the standby database environment to be set up. Create a file with the following contents and name it /tmp/initberlin.ora:
*.audit_file_dest='/u01/app/oracle/admin/berlin/adump'
*.audit_trail='none'
*.compatible='19.0.0'
*.control_files='/u01/oradata/berlin/control01.ctl','/u01/fra/berlin/control02.ctl'
*.db_block_size=8192
*.db_create_file_dest='/u01/oradata/berlin/'
*.db_create_online_log_dest_1='/u01/oradata/berlin/'
*.db_create_online_log_dest_2='/u01/fra/berlin/'
*.db_domain='mydomain.com'
*.db_name='thehague'
*.db_unique_name='berlin'
*.db_recovery_file_dest='/u01/fra/berlin'
*.db_recovery_file_dest_size=8g
*.diagnostic_dest='/u01/app/oracle'
*.enable_pluggable_database=true
*.open_cursors=300
*.pga_aggregate_target=512m
*.processes=200
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=2g
*.undo_tablespace='UNDOTBS1'
Alternatively, create a parameter file from the primary database's spfile and make the necessary adjustments in that pfile. After having created the instance parameter file on the standby side, create any necessary directories on the standby machine and start the instance:
$ export ORACLE_HOME=/u01/app/oracle/product/19.20/dbhome_1
$ PATH=${ORACLE_HOME}/bin:${PATH}
$ export ORACLE_SID=berlin
$ mkdir -p /u01/app/oracle/admin/berlin/adump /u01/oradata/berlin /u01/fra/berlin
$ sqlplus "/ as sysdba"
SQL> CREATE spfile FROM pfile='/tmp/initberlin.ora';
SQL> STARTUP NOMOUNT;
Note that we have created a skeleton server parameter file. Some adjustments for a standby environment still have to be made:
SQL> ALTER SYSTEM SET db_file_name_convert='thehague','berlin','THEHAGUE','BERLIN' sid='*' scope=spfile;
SQL> ALTER SYSTEM SET log_file_name_convert='thehague','berlin','THEHAGUE','BERLIN' sid='*' scope=spfile;
SQL> ALTER SYSTEM SET standby_file_management=auto sid='*' scope=spfile;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP NOMOUNT;
SQL> EXIT
Remember that we performed similar actions already on the primary database side.

Configuring the password file

If you remember, we already copied over the password file in the previous blog post. In case those steps weren't taken yet: when a password file doesn't already exist for the primary database, create one at the primary site using:
$ orapwd file=${ORACLE_HOME}/dbs/orapwthehague
The password file (named orapw${ORACLE_SID}, residing in ${ORACLE_HOME}/dbs) should then be copied over to the standby.mydomain.com machine for use with the standby database:
$ scp ${ORACLE_HOME}/dbs/orapwthehague standby.mydomain.com:/u01/app/oracle/product/19.20/dbhome_1/dbs/orapwberlin

Prepare the creation of a copy of the database

To create a copy of your database, you either create the copy from a running database (I'll cover that in another blog post), or - as we'll do here - restore a full database backup on the standby side. To create a backup, go to the primary side and execute:
$ rman target /
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP on;
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
RMAN> EXIT
In my case, this left me with a backup at /u01/fra/THEHAGUE:
[oracle@primary ~]$ find /u01/fra/THEHAGUE -type f -print | egrep -vi "archivelog|onlinelog|controlfile"
/u01/fra/THEHAGUE/backupset/2023_12_08/o1_mf_annnn_TAG20231208T201102_lq6tl6ox_.bkp
/u01/fra/THEHAGUE/backupset/2023_12_08/o1_mf_nnndf_TAG20231208T201103_lq6tl81c_.bkp
/u01/fra/THEHAGUE/backupset/2023_12_08/o1_mf_annnn_TAG20231208T201209_lq6tnb0r_.bkp
/u01/fra/THEHAGUE/0C052C8F8C754FD7E0630B80A8C0D630/backupset/2023_12_08/o1_mf_nnndf_TAG20231208T201103_lq6tm13h_.bkp
/u01/fra/THEHAGUE/0C04F90FAB7B4936E0630B80A8C00EB5/backupset/2023_12_08/o1_mf_nnndf_TAG20231208T201103_lq6tmj9o_.bkp
/u01/fra/THEHAGUE/autobackup/2023_12_08/o1_mf_s_1155067931_lq6tncqg_.bkp
These backup sets need to be transferred to the standby site for further processing:
$ scp -r /u01/fra/THEHAGUE/backupset/2023_12_08 standby.mydomain.com:/u01/fra
$ scp -r /u01/fra/THEHAGUE/0C052C8F8C754FD7E0630B80A8C0D630/backupset/2023_12_08 standby.mydomain.com:/u01/fra
$ scp -r /u01/fra/THEHAGUE/0C04F90FAB7B4936E0630B80A8C00EB5/backupset/2023_12_08 standby.mydomain.com:/u01/fra
$ scp -r /u01/fra/THEHAGUE/autobackup/2023_12_08 standby.mydomain.com:/u01/fra
Note that the autobackup backup set is the one containing the control file and spfile backups.
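If you want to double-check which backup piece actually contains the control file and spfile, you can ask RMAN on the primary side (purely a verification, not a required step):
$ rman target /
RMAN> LIST BACKUP OF CONTROLFILE;
RMAN> LIST BACKUP OF SPFILE;
RMAN> EXIT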

Configuring the standby control file

A standby database requires a standby control file. This is a special kind of control file by which the database will identify its standby role. You can either restore it from a backup, or use SQL*Plus on the primary database side to create such a control file:
$ sqlplus "/ as sysdba"
SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/sbcf_berlin.cf';
SQL> EXIT
$ scp /tmp/sbcf_berlin.cf standby.mydomain.com:/tmp
followed by a manual copy of this file to the control file locations.
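Using the control_files locations from the pfile we created earlier, that manual copy would look roughly like this on the standby machine:
$ cp /tmp/sbcf_berlin.cf /u01/oradata/berlin/control01.ctl
$ cp /tmp/sbcf_berlin.cf /u01/fra/berlin/control02.ctl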

In our case we'll simply restore it from the autobackup of the control file of the primary database. Log on to the standby database instance (already running in NOMOUNT mode) from RMAN and restore the control file:
$ rman target /
RMAN> RESTORE STANDBY CONTROLFILE FROM '/u01/fra/2023_12_08/o1_mf_s_1155067931_lq6tncqg_.bkp';
RMAN> ALTER DATABASE MOUNT;

Initiating a copy of the database

Note that for ease of use I've copied all data over to the /u01/fra directory on the standby.mydomain.com VM. Of course, with any large database this is not a good idea (and it likely won't fit). Normally the data is either transferred to storage with plenty of disk space available, or placed on shared storage that both primary and standby hosts can read.

To be able to use the copied backup we need to make sure RMAN can find the backup pieces. You do this by removing the current backup information from the standby control file and cataloging the correct backup pieces on the standby side:
RMAN> CHANGE BACKUP UNCATALOG NOPROMPT;
RMAN> CATALOG START WITH '/u01/fra/2023_12_08/' NOPROMPT;
Now it's time to restore the database:
RMAN> RUN {
  SET NEWNAME FOR DATABASE TO NEW;
  RESTORE DATABASE;
  SWITCH DATAFILE ALL;
  RECOVER DATABASE;
}
Now we have a copy of the database in place. Prepare for a Data Guard setup by starting the Data Guard Broker process:
$ sqlplus "/ as sysdba"
SQL> ALTER SYSTEM SET dg_broker_start=true sid='*';
SQL> EXIT
Note: in a real environment you first set the DG_BROKER_CONFIG_FILE1 and DG_BROKER_CONFIG_FILE2 instance parameters to appropriate locations. As this is a play environment we simply let the Broker configuration files reside in the $ORACLE_HOME/dbs directory.
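In such a real environment, pointing the broker at proper locations could look like the sketch below; the file names and locations are merely an illustration, and these parameters must be set before the broker is started:
SQL> ALTER SYSTEM SET dg_broker_config_file1='/u01/oradata/berlin/dr1berlin.dat' sid='*' scope=both;
SQL> ALTER SYSTEM SET dg_broker_config_file2='/u01/fra/berlin/dr2berlin.dat' sid='*' scope=both;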

Creating the standby redo log files

For a standby database to operate properly we need to add standby redo log files on both primary and standby database sides. First, navigate to the primary side and check how large the online redo log files are:
sqlplus "/ as sysdba"
SQL> SELECT group#, bytes FROM v$log ORDER BY 1;

    GROUP#      BYTES
---------- ----------
         1  209715200
         2  209715200
         3  209715200
We need to create one more standby log group than the total number of active online log groups per thread. As we're running single-instance we deal with only one thread. As the example shows, there are 3 online log groups, each with members of 200 MB in size. So we'll create 4 standby log groups of the same size:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 101 SIZE 200M;

Database altered.

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 102 SIZE 200M;

Database altered.

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 103 SIZE 200M;

Database altered.

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 104 SIZE 200M;

Database altered.

SQL> EXIT
Similarly we need to go to the standby environment and execute the same commands. Should you receive the error
ORA-01156: recovery or flashback in progress may need access to files
then you need to first disable any managed recovery:
SQL> RECOVER MANAGED STANDBY DATABASE CANCEL;
Media recovery complete.
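Once created, the standby redo log groups can be verified on either side; the group numbers and sizes should match what we just added:
SQL> SELECT group#, bytes, status FROM v$standby_log ORDER BY 1;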
When standby redo log groups have been created on both primary and standby database sides, let's proceed to build up the Data Guard environment!

Creating the Data Guard configuration

Navigate back to the primary database side. First make sure the Data Guard Broker process is running here as well:
sqlplus "/ as sysdba"
SQL> ALTER SYSTEM SET dg_broker_start=true sid='*';
SQL> EXIT
Now start up the Data Guard Manager:
$ dgmgrl
DGMGRL> connect /
Connected to "thehague"
Connected as SYSDG.
Create a new configuration named dg_demo:
DGMGRL> CREATE CONFIGURATION dg_demo AS PRIMARY DATABASE IS thehague CONNECT IDENTIFIER IS thehague_dgt;
Configuration "dg_demo" created with primary database "thehague"
DGMGRL> EDIT DATABASE thehague SET PROPERTY staticconnectidentifier=thehague_dgmgrl;
Property "staticconnectidentifier" updated
Now it's time to add the standby database to this configuration:
DGMGRL> ADD DATABASE berlin AS CONNECT IDENTIFIER IS berlin_dgt;
Database "berlin" added
DGMGRL> EDIT DATABASE berlin SET PROPERTY staticconnectidentifier=berlin_dgmgrl;
Property "staticconnectidentifier" updated
DGMGRL> SHOW CONFIGURATION

Configuration - dg_demo

  Protection Mode: MaxPerformance
  Members:
  thehague - Primary database
    berlin   - Physical standby database

Fast-Start Failover:  Disabled

Configuration Status:
DISABLED

DGMGRL> ENABLE CONFIGURATION;
Enabled.
DGMGRL> SHOW CONFIGURATION

Configuration - dg_demo

  Protection Mode: MaxPerformance
  Members:
  thehague - Primary database
    berlin   - Physical standby database
      Warning: ORA-16854: apply lag could not be determined

Fast-Start Failover:  Disabled

Configuration Status:
WARNING   (status updated 6 seconds ago)
Should you receive any warnings related to the apply lag of the standby database, then check the standby status:
DGMGRL> SHOW DATABASE berlin

Database - berlin

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      2 minutes (computed 7 seconds ago)
  Apply Lag:          (unknown)
  Average Apply Rate: (unknown)
  Real Time Query:    OFF
  Instance(s):
    berlin

  Database Warning(s):
    ORA-16854: apply lag could not be determined
    ORA-16855: transport lag has exceeded specified threshold

Database Status:
WARNING
In this case, simply disable and re-enable the standby database to get the redo transport moving:
DGMGRL> DISABLE DATABASE berlin
Disabled.
DGMGRL> ENABLE DATABASE berlin
Enabled.
Now the Data Guard environment should be operational:
DGMGRL> SHOW DATABASE berlin

Database - berlin

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 0 seconds ago)
  Apply Lag:          0 seconds (computed 0 seconds ago)
  Average Apply Rate: 53.00 KByte/s
  Real Time Query:    OFF
  Instance(s):
    berlin

Database Status:
SUCCESS

DGMGRL> SHOW CONFIGURATION

Configuration - dg_demo

  Protection Mode: MaxPerformance
  Members:
  thehague - Primary database
    berlin   - Physical standby database

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 31 seconds ago)
And, YESSSS!!! We succeeded in creating a standby database environment!

Monday, October 23, 2023

Preparing for an Oracle standby database

In the Oracle environment the concept of a standby database is one of the major methods of providing disaster recovery. Oracle knows of 2 types of standby databases:
  • physical standby: where the standby database contains exactly the same data as the primary database, and normal recovery is performed using the archived redo stream of the primary database
  • logical standby: where the standby database may deviate from the primary database, so no physical consistency exists. The redo stream of the primary database is mined for SQL statements which are applied transactionally on the standby database, making the standby database logically look the same as the primary database.
In terms of disaster recovery the physical standby is most likely chosen, as you can switch roles between primary and standby database without any configuration steps beyond those for setting up the standby database. This is because both databases look exactly like one another: the only noticeable difference is the database's unique name. We will focus on the physical standby database for the remainder of this post; wherever I refer to "standby database", the physical variant is meant.

When setting up a standby environment we are immediately confronted with the big difference between the Standard Edition 2 (SE2) and Enterprise Edition (EE) flavors of the Oracle RDBMS. With EE we have a plethora of options, from manual to almost fully automated setups, and even the option of opening the standby database (read-write as a snapshot standby, or - in case the Active Data Guard license has been bought - read-only while redo apply continues). SE2 doesn't have all this functionality: there it comes down to feeding archived redo to the standby database to keep it up to par with the primary database. This particular post will focus on the EE version of Oracle.

Preparations

Setting up a standby database environment first of all requires a machine where we can run such a standby database. As we have already created a VM called "primary" with a full stack of Oracle software installed, it seems obvious to use that VM as the basis for the to-be-created "standby" VM. VirtualBox allows you to create a full clone of a VM with a few clicks. Follow along! The steps to clone involve:
  1. Cleanly shut down the primary database VM. This is required to get a consistent state of both VMs.
  2. Next, right-click on the primary VM and choose "Clone..." from the popup menu:
  3. A new dialog will open where we'll enter the details of the clone VM. Note that I specifically choose to change the MAC addresses for all network adapters:
  4. Specify that we wish to have a full clone so the standby VM will have a lifespan completely independent of the primary VM:
    When pressing the "Finish" button the clone operation will commence:
Now we have a clone of the primary VM called "Standby" within VirtualBox. However... the guest OS still thinks it's called "primary.mydomain.com". On top of that, because we've changed the MAC addresses, the network interfaces now fall back to DHCP. So within the VM itself there is some work to be done as well! In order to correct the TCP/IP protocol stack setup in the cloned VM, start it up first and log in as the "root" user (remember: it's a clone of the primary VM, so you can use the same password as the one used for that VM). Once logged in, start "nmtui" (the Network Manager text user interface):
Within Network Manager, first set the machine's host name:
Next, for the enp0s3 interface, adjust the IPv4 settings and disable IPv6:
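Alternatively, if you prefer the command line over nmtui, roughly equivalent commands might look as follows. Note that the connection name "enp0s3" and the standby IP address 192.168.128.12 are assumptions for this sketch; adjust them to your own naming and subnet (the gateway and DNS server addresses are the ones used throughout this blog):
# hostnamectl set-hostname standby.mydomain.com
# nmcli connection modify enp0s3 ipv4.method manual ipv4.addresses 192.168.128.12/24 ipv4.gateway 192.168.128.1 ipv4.dns 192.168.128.254 ipv6.method disabled
# nmcli connection up enp0s3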
When everything has been configured, exit Network Manager and reboot for the changes (especially the host name change) to take effect. We now have 2 environments with an installed Oracle stack. Now let's get our hands dirty!

Prerequisites for a physical standby database

So... what do we need for a physical standby database? Well, that's quite easy:
  • First we need a running instance on the standby side with of course the same database name set (governed through the DB_NAME instance parameter), but with a different unique name (DB_UNIQUE_NAME instance parameter).
  • We also need a copy of the primary password file. If one doesn't exist at the primary site, then create one and copy that one to the standby site.
  • Let's not forget the static listener entries and TNS aliases which we'll use for the databases to communicate with each other.
  • In order to create a copy of your primary database you need a full database backup of the primary database, including a standby control file.
  • We also need to set a bunch of instance parameters for proper file creation, file name conversion between primary and standby sites, etc.
Let's take the show to Europe with a primary database called "thehague" (with PDB "mypdb1") and a standby database named "berlin".

Creating the primary database

First create the primary database if it doesn't already exist. For this we'll use a response file named /u01/stage/cdb.rsp with the following contents:
responseFileVersion=/oracle/assistants/rspfmt_dbca_response_schema_v12.2.0
gdbName=thehague.mydomain.com
sid=thehague
databaseConfigType=SI
RACOneNodeServiceName=
policyManaged=false
createServerPool=false
serverPoolName=
cardinality=
force=false
pqPoolName=
pqCardinality=
createAsContainerDatabase=true
numberOfPDBs=1
pdbName=mypdb1
useLocalUndoForPDBs=true
pdbAdminPassword=Welcome1
nodelist=
templateName=/u01/app/oracle/product/19.20/dbhome_1/assistants/dbca/templates/General_Purpose.dbc
sysPassword=oracle
systemPassword=oracle
serviceUserPassword=
emConfiguration=
emExpressPort=5500
runCVUChecks=FALSE
dbsnmpPassword=oracle
omsHost=
omsPort=0
emUser=
emPassword=
dvConfiguration=false
dvUserName=
dvUserPassword=
dvAccountManagerName=
dvAccountManagerPassword=
olsConfiguration=false
datafileJarLocation=/u01/app/oracle/product/19.20/dbhome_1/assistants/dbca/templates/
datafileDestination=/u01/oradata/thehague/
recoveryAreaDestination=/u01/fra/thehague/
storageType=FS
characterSet=AL32UTF8
nationalCharacterSet=AL16UTF16
registerWithDirService=false
walletPassword=
listeners=
variablesFile=
variables=ORACLE_BASE_HOME=/u01/app/oracle/product/19.20/dbhome_1,DB_UNIQUE_NAME=thehague,ORACLE_BASE=/u01/app/oracle,PDB_NAME=mypdb1,DB_NAME=thehague,ORACLE_HOME=/u01/app/oracle/product/19.20/dbhome_1,SID=thehague
initParams=audit_trail=none,audit_file_dest=/u01/app/oracle/admin/thehague/adump,compatible=19.0.0,db_block_size=8192,db_domain=mydomain.com,db_name=thehague,db_unique_name=thehague,nls_language=American,nls_territory=America,db_create_file_dest=/u01/oradata/thehague/,db_create_online_log_dest_1=/u01/oradata/thehague/,db_create_online_log_dest_2=/u01/fra/thehague/,db_recovery_file_dest=/u01/fra/thehague/,db_recovery_file_dest_size=8G,undo_tablespace=UNDOTBS1,open_cursors=300,processes=200,sga_target=4G,pga_aggregate_target=512M,diagnostic_dest=/u01/app/oracle,dispatchers=(PROTOCOL=TCP)(SERVICE=thehagueXDB),remote_login_passwordfile=EXCLUSIVE
sampleSchema=false
databaseType=MULTIPURPOSE
automaticMemoryManagement=false
Creation of a database is done by simply executing:
$ export ORACLE_HOME=/u01/app/oracle/product/19.20/dbhome_1
$ PATH=${ORACLE_HOME}/bin:${PATH}
$ dbca -silent -createDatabase -responseFile /u01/stage/cdb.rsp
After a little while we'll have a container database called "thehague" that we can use for creating a standby. But... first we need to implement an important action that is frequently forgotten: configuring the primary database with similar parameters, in case the roles between the primary and standby databases are switched and the old primary then becomes the standby! So, on the primary side we need to perform the following actions:
$ sqlplus "/ as sysdba"
SQL> ALTER SYSTEM SET db_file_name_convert='berlin','thehague','BERLIN','THEHAGUE' sid='*' scope=spfile;
SQL> ALTER SYSTEM SET log_file_name_convert='berlin','thehague','BERLIN','THEHAGUE' sid='*' scope=spfile;
SQL> ALTER SYSTEM SET standby_file_management=auto sid='*' scope=spfile;
SQL> SHUTDOWN IMMEDIATE;
Now start the database instance again for the above parameters to take effect, and at the same time make sure that ARCHIVELOG mode is enabled:
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;
SQL> EXIT
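A quick check that archiving is indeed enabled (the output should report the database log mode as "Archive Mode"):
$ sqlplus "/ as sysdba"
SQL> ARCHIVE LOG LIST
SQL> EXIT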
Now that the database is running in ARCHIVELOG mode, make sure we prepare it for a Data Guard setup:
$ rman target /
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP on;
RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY BACKED UP 1 TIMES TO DISK;
RMAN> EXIT
Ignore the RMAN-8591 message given.

Configuring the password file

When a password file doesn't exist already for the primary database, create one at the primary site using:
$ orapwd file=${ORACLE_HOME}/dbs/orapwthehague
The password file (named orapw${ORACLE_SID}, residing in ${ORACLE_HOME}/dbs) should then be copied over to the standby.mydomain.com machine for use with the standby database:
$ scp ${ORACLE_HOME}/dbs/orapwthehague standby.mydomain.com:/u01/app/oracle/product/19.20/dbhome_1/dbs/orapwberlin

Configuring the network connectivity

For two databases to communicate with one another we have to set up TNS aliases, so we can use logical names when we wish to access a particular database address. On top of that we of course need a listener configured that services incoming database requests. Let's start with the listener on the primary database side. Create the /u01/app/oracle/product/19.20/dbhome_1/network/admin/listener.ora file with the following contents:
LISTENER =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = tcp)(HOST = primary.mydomain.com)(PORT = 1521))
    (ADDRESS = (PROTOCOL = ipc)(KEY = LSNR_PRIM))
  )

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = thehague_dgb)
      (SID_NAME = thehague)
      (ORACLE_HOME = /u01/app/oracle/product/19.20/dbhome_1)
    )
    (SID_DESC =
      (SID_NAME = thehague)
      (ORACLE_HOME = /u01/app/oracle/product/19.20/dbhome_1)
    )
  )
After creation of the listener.ora, start the listener:
$ lsnrctl start
Similarly we need a /u01/app/oracle/product/19.20/dbhome_1/network/admin/listener.ora on the standby.mydomain.com machine:
LISTENER =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = tcp)(HOST = standby.mydomain.com)(PORT = 1521))
    (ADDRESS = (PROTOCOL = ipc)(KEY = LSNR_STBY))
  )

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = berlin_dgb)
      (SID_NAME = berlin)
      (ORACLE_HOME = /u01/app/oracle/product/19.20/dbhome_1)
    )
    (SID_DESC =
      (SID_NAME = berlin)
      (ORACLE_HOME = /u01/app/oracle/product/19.20/dbhome_1)
    )
  )
On the standby we need to start the listener after creation of the listener.ora as well:
$ lsnrctl start
Next, we'll create a /u01/app/oracle/product/19.20/dbhome_1/network/admin/tnsnames.ora file (as we don't have an LDAP server at hand) with 2 aliases per database:
  • One for the redo transport. We'll call this alias <dbname>_dgt.
  • One for instance management. We'll call this alias <dbname>_dgmgrl.
The tnsnames.ora contents reads:
thehague_dgt =
  (DESCRIPTION =
    (ADDRESS =
      (PROTOCOL = tcp)
      (HOST = primary.mydomain.com)
      (PORT = 1521)
    )
    (CONNECT_DATA =
      (SERVICE_NAME = thehague)
      (SERVER = dedicated)
    )
  )

thehague_dgmgrl =
  (DESCRIPTION =
    (ADDRESS =
      (PROTOCOL = tcp)
      (HOST = primary.mydomain.com)
      (PORT = 1521)
    )
    (CONNECT_DATA =
      (SERVICE_NAME = thehague)
      (SERVER = dedicated)
      (UR = A)
    )
  )

berlin_dgt =
  (DESCRIPTION =
    (ADDRESS =
      (PROTOCOL = tcp)
      (HOST = standby.mydomain.com)
      (PORT = 1521)
    )
    (CONNECT_DATA =
      (SERVICE_NAME = berlin)
      (SERVER = dedicated)
    )
  )

berlin_dgmgrl =
  (DESCRIPTION =
    (ADDRESS =
      (PROTOCOL = tcp)
      (HOST = standby.mydomain.com)
      (PORT = 1521)
    )
    (CONNECT_DATA =
      (SERVICE_NAME = berlin)
      (SERVER = dedicated)
      (UR = A)
    )
  )
Create this file on the primary side and copy it over to the standby machine in the corresponding directory.
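Copying the file over and checking that the aliases resolve could, for example, be done as follows; note that tnsping only proves the listener is reachable, not that the database service is actually registered:
$ scp ${ORACLE_HOME}/network/admin/tnsnames.ora standby.mydomain.com:/u01/app/oracle/product/19.20/dbhome_1/network/admin/
$ tnsping thehague_dgt
$ tnsping berlin_dgmgrl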

Monday, October 16, 2023

Installing Oracle19c on Linux

Installation of Oracle19c on Linux is almost like a walk in the park. For this setup we'll choose a virtual machine setup with the following characteristics:
  • VirtualBox 7.0.10
  • For the guest OS: Oracle Enterprise Linux 8.8
  • Oracle19c version 19.20 (the 19.3 base release with the 19.20 Release Update and the latest Monthly Recommended Patches bundle and JVM patch) together with the latest OPatch release
In this post I will use my own DNS server setup. If you haven't installed or configured a DNS server yet, please do so, as you'll need that server's configuration information when setting up the guest OS within the VM. The steps I followed for my installation include:
  • configuring a VM for running Oracle19c
  • installing OEL8.8 into the VM
  • installing the Oracle19c software
  • creating a container database
  • configuring a listener for incoming database connections
The environment will have the following characteristics:
  • name: primary.mydomain.com
  • IP address: 192.168.128.11
  • DNS server: 192.168.128.254
  • container database name: v19cdb
  • PDB name: v19pdb1
The following software will be downloaded upfront:
  • the Oracle19c 19.3 base release (LINUX.X64_193000_db_home.zip)
  • the latest OPatch bundle (p6880880_190000_Linux-x86-64.zip)
  • the 19.20 Release Update (p35320081_190000_Linux-x86-64.zip)
  • the Monthly Recommended Patches bundle (p35755640_1920000DBRU_Linux-x86-64.zip)
  • the one-off patch p35336174_190000_Linux-x86-64.zip

Configure a VM for running Oracle19c

To create a VM for use with the Oracle19c software we can roughly follow the steps outlined in my DNS server post, with the following changes:
  • A 60 GB disk should be created to support:
    • a staging area with the unpacked Oracle software bundles
    • a directory containing the Oracle RDBMS software installation (the so-called Oracle Home)
    • a directory containing the database files
    • a directory containing recovery structures for the database (the so-called Fast Recovery Area, or FRA for short)
  • 2 network interfaces: one for a host-only network which can contact the DNS server, and one for access to the public network (for e.g. system updates)
  • 8 GB of memory
Simply create the VM with the above characteristics and attach the OEL8.8 ISO image.

Installing OEL8.8 into the VM

The OS installation can also be done using the same steps as outlined in my DNS server blog. Make sure that the IP address is set properly and that the DNS client configuration (i.e. the last commands from the DNS setup blog) has been executed, so the new VM will be able to interact with our own DNS server. The one action I perform differently compared to the other VM setups is that here I choose my own partitioning scheme, to maximize the disk space available for the Oracle software:
When pressing the "Done" button at the top left of the screen a new window will pop up where I can select my own partitioning scheme. First select the "create automatically" link to get a default setup:
Now it's time to adjust this default setup. We won't be using the home directories of individual users much, as we'll store all Oracle installation files in the /u01 directory structure on the / filesystem. Select the "/home" mount point on the left side of the screen and then on the right lower the size in the "Desired Capacity" field:
Next, enlarge the "/" filesystem to the maximum size. To do this, enter a size larger than the disk size in the "Desired Capacity" field and click back on e.g. the "/home" link. This will auto-fill the desired capacity to the maximum allowed size:
When finished, press the "Done" button in the top left of the screen to continue the installation. This will prompt a confirmation. Press "Accept Changes" and continue with the installation:
Make sure that in the end the DNS server is queryable and that the guest OS has been updated to the latest security fixes.

Installing the Oracle19c software

In order to run Oracle19c on Linux you first need to install an Oracle-provided RPM to set up the OS with an "oracle" account:
# dnf install -y oracle-database-preinstall-19c
Once the Oracle RPM is installed, let's provide the "oracle" account with a password (do so only if you're confident with using an open account; if not, make sure you can use "sudo" to assume the "oracle" account!) and create the necessary directory structures for use with Oracle:
# passwd oracle
# mkdir -p /u01/app/oracle/product/19.20/dbhome_1
# mkdir -p /u01/app/oraInventory
# mkdir -p /u01/oradata
# mkdir -p /u01/fra
# mkdir /u01/stage
# chown -R oracle:oinstall /u01/app/oracle /u01/app/oraInventory /u01/oradata /u01/fra /u01/stage
# chmod -R 775 /u01/app/oracle /u01/app/oraInventory /u01/oradata /u01/fra /u01/stage
Now let's prepare the OS for the upcoming database creation by reserving HugePages:
# echo vm.nr_hugepages=20000 >> /etc/sysctl.conf
# sysctl -p
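On x86-64 a huge page is typically 2 MiB, so check /proc/meminfo to see how many pages the kernel could actually reserve:
# grep -i hugepages /proc/meminfo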
It's time to move on to the installation of the Oracle software. Assume the identity of the "oracle" user:
# sudo -iu oracle
Place all Oracle-related ZIP files (downloaded from Oracle Technology Network - the 19.3 base release - and My Oracle Support - the RU and MRP bundles as well as the latest OPatch bundle) into /u01/stage so we can start preparing for the actual installation. Then extract all required files:
$ unzip /u01/stage/LINUX.X64_193000_db_home.zip -d /u01/app/oracle/product/19.20/dbhome_1
$ mv /u01/app/oracle/product/19.20/dbhome_1/OPatch /u01/app/oracle/product/19.20/dbhome_1/OPatch.old
$ unzip /u01/stage/p6880880_190000_Linux-x86-64.zip -d /u01/app/oracle/product/19.20/dbhome_1
$ mkdir /u01/stage/RU
$ mkdir /u01/stage/MRP
$ mkdir /u01/stage/OneOffs
$ unzip /u01/stage/p35320081_190000_Linux-x86-64.zip -d /u01/stage/RU
$ unzip /u01/stage/p35755640_1920000DBRU_Linux-x86-64.zip -d /u01/stage/MRP
$ unzip /u01/stage/p35336174_190000_Linux-x86-64.zip -d /u01/stage/OneOffs
We've set the stage for installation! The next step is to create a response file for runInstaller to use. For this setup we use the response file named /u01/stage/swinstall.rsp:
oracle.install.responseFileVersion=/oracle/install/rspfmt_dbinstall_response_schema_v19.0.0
oracle.install.option=INSTALL_DB_SWONLY
UNIX_GROUP_NAME=dba
INVENTORY_LOCATION=/u01/app/oraInventory
ORACLE_BASE=/u01/app/oracle
oracle.install.db.InstallEdition=EE
oracle.install.db.OSDBA_GROUP=dba
oracle.install.db.OSOPER_GROUP=oper
oracle.install.db.OSBACKUPDBA_GROUP=backupdba
oracle.install.db.OSDGDBA_GROUP=dgdba
oracle.install.db.OSKMDBA_GROUP=kmdba
oracle.install.db.OSRACDBA_GROUP=racdba
oracle.install.db.rootconfig.executeRootScript=false
Performing the installation can now be done in a few simple steps. Note that we need to set the assumed distribution to OEL7.8 so that runInstaller doesn't fail on an unsupported OS check assertion:
$ export CV_ASSUME_DISTID=OEL7.8
$ export ORACLE_HOME=/u01/app/oracle/product/19.20/dbhome_1
$ PATH=${ORACLE_HOME}/bin:${PATH}
$ ${ORACLE_HOME}/runInstaller -silent -applyRU /u01/stage/RU/35320081 -applyOneOffs /u01/stage/MRP/35755640/33974554,/u01/stage/MRP/35755640/35156218,/u01/stage/MRP/35755640/35182029,/u01/stage/MRP/35755640/35395648,/u01/stage/MRP/35755640/35499173,/u01/stage/MRP/35755640/35598272,/u01/stage/MRP/35755640/35598911,/u01/stage/MRP/35755640/35635081,/u01/stage/MRP/35755640/35646719,/u01/stage/MRP/35755640/35655714,/u01/stage/MRP/35755640/35775859,/u01/stage/MRP/35755640/35809185,/u01/stage/OneOffs/35336174 -responseFile /u01/stage/swinstall.rsp
Note that for convenience we install the base release and patch up to the latest patches at the same time, with hardly any user interaction. The only thing we need to do after installation is run a couple of scripts within the security context of the "root" account:
# /u01/app/oraInventory/orainstRoot.sh
# /u01/app/oracle/product/19.20/dbhome_1/root.sh
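Afterwards you can verify, as the "oracle" user, which patches ended up in the Oracle Home; the Release Update and the one-off patches should all be listed:
$ ${ORACLE_HOME}/OPatch/opatch lspatches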
Thanks to people like Mike Dietrich and Daniel Overby Hansen, installing and patching at the same time is a lot easier than it used to be!

Create a container database

We're up to the point where the OS is configured and the Oracle software tree has been installed and patched. Now the final step of this exercise awaits: creation of a container database. This too can be done with minimal effort using a silent database creation. Note that in a previous step we already configured Linux HugePages support, so we can configure an instance with a 4 GB SGA. It's time to create the /u01/stage/credb.rsp response file:
responseFileVersion=/oracle/assistants/rspfmt_dbca_response_schema_v12.2.0
gdbName=v19cdb.mydomain.com
sid=v19cdb
databaseConfigType=SI
RACOneNodeServiceName=
policyManaged=false
createServerPool=false
serverPoolName=
cardinality=
force=false
pqPoolName=
pqCardinality=
createAsContainerDatabase=true
numberOfPDBs=1
pdbName=v19pdb1
useLocalUndoForPDBs=true
pdbAdminPassword=Welcome1
nodelist=
templateName=/u01/app/oracle/product/19.20/dbhome_1/assistants/dbca/templates/General_Purpose.dbc
sysPassword=oracle
systemPassword=oracle
serviceUserPassword=
emConfiguration=
emExpressPort=5500
runCVUChecks=FALSE
dbsnmpPassword=oracle
omsHost=
omsPort=0
emUser=
emPassword=
dvConfiguration=false
dvUserName=
dvUserPassword=
dvAccountManagerName=
dvAccountManagerPassword=
olsConfiguration=false
datafileJarLocation=/u01/app/oracle/product/19.20/dbhome_1/assistants/dbca/templates/
datafileDestination=/u01/oradata/{DB_UNIQUE_NAME}/
recoveryAreaDestination=/u01/fra/{DB_UNIQUE_NAME}/
storageType=FS
diskGroupName=
asmsnmpPassword=
recoveryGroupName=
characterSet=AL32UTF8
nationalCharacterSet=AL16UTF16
registerWithDirService=false
dirServiceUserName=
dirServicePassword=
walletPassword=
listeners=
variablesFile=
variables=ORACLE_BASE_HOME=/u01/app/oracle/product/19.20/dbhome_1,DB_UNIQUE_NAME=v19cdb,ORACLE_BASE=/u01/app/oracle,PDB_NAME=v19pdb1,DB_NAME=v19cdb,ORACLE_HOME=/u01/app/oracle/product/19.20/dbhome_1,SID=v19cdb
initParams=audit_trail=none,audit_file_dest=/u01/app/oracle/admin/v19cdb/adump,compatible=19.0.0,db_block_size=8192,db_domain=mydomain.com,db_name=v19cdb,db_unique_name=v19cdb,nls_language=Dutch,nls_territory=The Netherlands,db_create_file_dest=/u01/oradata/v19cdb/,db_recovery_file_dest=/u01/fra/v19cdb,db_recovery_file_dest_size=8G,undo_tablespace=UNDOTBS1,open_cursors=300,processes=2000,sga_target=4G,pga_aggregate_target=512M,diagnostic_dest=/u01/app/oracle,dispatchers=(PROTOCOL=TCP)(SERVICE=v19cdbXDB),remote_login_passwordfile=EXCLUSIVE
sampleSchema=false
memoryPercentage=40
databaseType=MULTIPURPOSE
automaticMemoryManagement=false
This response file defines some default parameter values, locations and SGA size. Creating a database is as simple as:
$ export ORACLE_HOME=/u01/app/oracle/product/19.20/dbhome_1
$ PATH=${ORACLE_HOME}/bin:${PATH}
$ dbca -silent -createDatabase -responseFile /u01/stage/credb.rsp
We'll receive some warnings about the default passwords chosen, but that's expected. In the end a CDB is created with one PDB, as supplied in the response file.

Configure a listener for incoming database connections

One final step remains: configuring a listener so we can service incoming database connections. Even this can be done without any user interaction, using solely a response file supplied to a wizard. For this, create the /u01/stage/netca.rsp response file:
[GENERAL]
RESPONSEFILE_VERSION="19.0"
CREATE_TYPE="CUSTOM"
SHOW_GUI=false
LOG_FILE=""/u01/app/oracle/cfgtoollogs/netca/netca.log""

[oracle.net.ca]
INSTALLED_COMPONENTS={"server","net8","javavm"}
INSTALL_TYPE=""typical""
LISTENER_NUMBER=1
LISTENER_NAMES={"LISTENER"}
LISTENER_PROTOCOLS={"TCP;1521"}
LISTENER_START=""LISTENER""
NAMING_METHODS={"TNSNAMES","HOSTNAME"}
NSN_NUMBER=1
NSN_NAMES={"EXTPROC_CONNECTION_DATA"}
NSN_SERVICE={"PLSExtProc"}
NSN_PROTOCOLS={"TCP;primary.mydomain.com;1521"}
Calling Net Configuration Assistant allows the definition of the listener:
$ export ORACLE_HOME=/u01/app/oracle/product/19.20/dbhome_1
$ PATH=${ORACLE_HOME}/bin:${PATH}
$ netca -silent -responsefile /u01/stage/netca.rsp
This results in a listener capable of servicing our freshly created PDB.
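You can verify the result from the command line as well:
$ lsnrctl status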
Our database is now accessible through the mydomain.com network domain!

Sunday, October 15, 2023

DNS in a test environment

A playground for Oracle software most likely requires a DNS (Domain Name System) server somewhere in the network, whether it be for a simple Data Guard setup, or for checking out what Real Application Clusters (RAC for short) has to offer.

Setting up a minimal DNS server for your own private domain is relatively simple to perform using a tool such as VirtualBox. The steps I present here will use the BIND (Berkeley Internet Name Domain) implementation which is freely available on Linux, run from within a VirtualBox environment.

The setup I present here uses the following pieces of software:
  • VirtualBox 7.0.10
  • For the guest OS: Oracle Enterprise Linux 9.2

Installing VirtualBox is a trivial task and won't be discussed here. Detailed configuration of DNS also falls outside the scope of this blog. Refer to e.g. the BIND documentation on the Red Hat site for more details on the intricacies of BIND.

The steps I followed for setting up my own DNS server include:
  • configuring a host-only network in VirtualBox
  • configuring a VM for running BIND
  • installing OEL9.2 into the VM
  • configuring BIND
Using these steps a DNS server will be built with the following characteristics:
  • the domain serviced is named mydomain.com
  • the DNS server will have a fixed IPv4 address of 192.168.128.254
  • the DNS server will service IPv4 addresses in the 192.168.128.x subnet
  • the subnet gateway is located at 192.168.128.1
  • DNS will be set up with entries for several machines for the various local Oracle tests, including Data Guard and RAC

As indicated, this blog post makes use of Oracle Enterprise Linux 9.2. Red Hat Enterprise Linux 9 (or any of its derivatives) should also be usable; for other types of systems, please consult your distribution's documentation for the utilities to use, the system packages to install, and the locations of the various configuration files.

Configuring a host-only network in VirtualBox

The first step in the setup is to make sure we have a network to communicate over. As this DNS server will only be used by VMs within VirtualBox on a single host machine, setting up a host-only network is the obvious choice to implement such a network. For this, create a new host-only network and assign a specific IP address range to this network. Open the Network Manager within VirtualBox:
From here, create a new host-only network, select the "Configure Adapter Manually" radio button and enter the IP address of the subnet gateway (192.168.128.1 in our case):
Press the "Apply" button when finished. After that, for ease of use, navigate to the "DHCP Server" tab and enable the DHCP server for the range 192.168.128.100 to 192.168.128.200:
Once again, press "Apply" to commit the changes. Now we're ready to create a VM using this network.
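As an aside, the same network can also be created from the command line with VBoxManage. A minimal sketch, assuming the new adapter is named vboxnet0 (on a Windows host it will be named like "VirtualBox Host-only Ethernet Adapter #3" instead), that 192.168.128.2 is a free address for the DHCP server itself, and the option spellings of the VirtualBox 7.0 manual:
$ VBoxManage hostonlyif create
$ VBoxManage hostonlyif ipconfig vboxnet0 --ip=192.168.128.1 --netmask=255.255.255.0
$ VBoxManage dhcpserver add --interface=vboxnet0 --server-ip=192.168.128.2 --netmask=255.255.255.0 --lower-ip=192.168.128.100 --upper-ip=192.168.128.200 --enable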

Configure a VM for running BIND

To create the VM, select the "New" option in the "Machine" menu:
A wizard pops up to guide us through the definition of a VM. Enter a name for your new VM (we'll choose OL9DNS here), the location where the VM should be created, and the ISO image of the OEL9.2 distribution. Since we want to size the VM ourselves, skip the unattended install and press the "Next" button:
A VM with 2GB of memory and 1 CPU is more than enough for a DNS server to run; press "Next":
A 20 GB hard disk for storing both the OS and the DNS configuration suffices as well; press "Next":
End the definition of the VM configuration by pressing the "Finish" button at the summary page:
Now, in order to use our configured network, the VM's settings need to be adjusted. In the VirtualBox main interface, right-click on the OL9DNS VM just created and select "Settings..." from the popup menu:
Navigate to the "Network" settings and for "Adapter 1" choose "Host-only" attachment type and the virtual network adapter name of the newly created network ("VirtualBox Host-only Ethernet Adapter #3" in our case) as the network:
Next, navigate to the "Adapter 2" tab, enable the adapter, and choose a bridged network connection to allow the VM to communicate with the Internet (e.g. for accessing Oracle's YUM repository). Once this adapter is configured as well, press "OK" to continue:

Installing OEL9.2 into the VM

So far so good. Now it's time to roll up our sleeves and actually start installing something. First step: the OS which will host the DNS server. So let's start the newly created VM to initiate the installation process of OEL9.2:
The boot process starts, where we select to skip the medium check and proceed directly to the OS installation:
Select the language for the OS to install and continue the installation:
From here we'll supply the installer with our particular configuration, providing the necessary information in the following order:
  1. We need to configure the device on which to install the OEL9.2 release. Press the "Installation Destination" button on the main installation screen, which will bring up the following screen:
    If you don't want to make any adjustments to the default setup, simply press the "Done" button to navigate back to the main installation screen.
  2. Next, press the "Software Selection" button to select what to install. For a DNS server we don't need any full-blown GUI environment. Simply choose "Server" and press the "Done" button:
  3. Next, configure the network by pressing the "Network & Host Name" button:
    Here we need to perform actions in a certain order as well:
    1. First, define the proper host name at the bottom left of the page. The actual host name changes on the right when you press the "Apply" button.
    2. Then select the "enp0s3" adapter. This is the OS adapter associated with the 1st adapter defined for the VM and will hence use our host-only network setup.
    3. After having selected the "enp0s3" adapter, press the "Configure" button to start configuring it. As chosen earlier, the IPv4 address of this server will be 192.168.128.254, and the domain we'll use is "mydomain.com". This is all done from the "IPv4" tab:
      Now press the "Routes..." button to force the adapter to only be used for 192.168.128.x traffic:
      Press "OK" to continue with the configuration. We won't be using IPv6, so disable this protocol:
      Then press "Save" to save the network changes for this adapter.
    4. Now press the slider on the top right hand side of the screen to enable this adapter for use.
    5. Similarly, we need to configure the Internet facing adapter (enp0s8). Here we simply select the adapter and enable it without any additional configuration, forcing it to access the bridged network and request a DHCP configuration:
      In our case an IP address in the 192.168.178.x range is selected and the DNS servers are automatically configured through DHCP as well. This adapter will service our default gateway to the external network (e.g. for access to Oracle's YUM Repository).
    Now that the network configuration has been provided, we can press the "Done" button at the top left of the screen.
  4. With the network defined, we can set up the date and time for this VM to use NTP. We only need to select the proper time zone and enable NTP:
    Press "Done" when finished.
  5. Finally, for ease of maintenance, we'll enable the use of the root account in this VM:
    Enable the root account by entering a valid password, including confirmation of that password. When done, press the "Done" button to move back to the main installation screen.
Back on the main installation screen we can now commence with the actual installation of OEL9.2 by pressing the "Begin installation" button:
A progress window will appear. Once the installation is finished, a reboot is required to wrap up the configuration of the newly installed system:
Finish the installation by making sure that the OS is fully updated with all the latest security patches: log on to the console as the root account (which we happily enabled) and update the OS:
# dnf update -y
# reboot

Now the system is ready for the installation and configuration of a DNS server!

Configuring BIND

The last hurdle in the setup of our own DNS server is the installation and configuration of BIND. We will install a default, secure BIND server without chroot support. To install this application, log on to the console of the system as the root account and issue:
# dnf install -y bind bind-utils
Now edit the /etc/named.conf file and change the following entries in the options section of the file:
  • adjust the listening address to include our IP address:
    # sed -r -i -e '/[[:space:]]listen-on[[:space:]]/s/^.*$/ listen-on port 53 { 127.0.0.1; 192.168.128.254; };/' /etc/named.conf
    We won't adjust the IPv6 listening address, as we don't use that protocol.
  • define which IP addresses can query our DNS server (note the | delimiter for the s command, needed because the replacement text itself contains a /):
    # sed -r -i -e '/[[:space:]]allow-query[[:space:]]/s|^.*$| allow-query { localhost; 192.168.128.0/24; };|' /etc/named.conf
  • define from what IP addresses recursive queries are allowed:
    # sed -r -i -e '/[[:space:]]allow-query[[:space:]]/a \ allow-recursion { localhost; 192.168.128.0/24; };' /etc/named.conf
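To quickly inspect the result of these edits (a sanity check only; the exact whitespace may differ):
# grep -E 'listen-on port|allow-query|allow-recursion' /etc/named.conf
This should print the three adjusted option lines.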
Let's check that what we have done is syntactically correct:
# named-checkconf
Any reported error should be resolved before continuing the setup. If all is well, we can bring the BIND server online by opening the relevant firewall ports and enabling the BIND service at the OS level:
# firewall-cmd --permanent --add-service=dns
# firewall-cmd --reload
# systemctl enable --now named
Let's check if we can access our own BIND server:
# dig @localhost www.oracle.com
We should receive output similar to the following (abbreviated; actual records, TTLs, and addresses will differ):
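;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: ...
;; ANSWER SECTION:
www.oracle.com.    ...    IN    CNAME    ...
...                ...    IN    A        ...
A NOERROR status combined with a populated ANSWER SECTION means BIND resolved the name successfully.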
In case no address can be resolved, review the configuration steps carefully.

Now that we have configured the DNS listening part, it's time to define the topology of the network itself. In order to use host names, we need to be able to resolve these host names into IP addresses. For this we'll define a forward zone file called /var/named/forward.mydomain.com with the following contents:
$TTL    86400
@               IN SOA  ol9dns.mydomain.com. root.ol9dns.mydomain.com. (
                                      42        ; serial (d. adams)
                                      3H        ; refresh
                                      15M       ; retry
                                      1W        ; expiry
                                      1D )      ; minimum
                IN NS           ol9dns.mydomain.com.
ol9dns          IN A            192.168.128.254
standalone      IN A            192.168.128.10
primary         IN A            192.168.128.11
standby         IN A            192.168.128.12
clus1           IN A            192.168.128.21
clus1-vip       IN A            192.168.128.22
clus2           IN A            192.168.128.23
clus2-vip       IN A            192.168.128.24
clusscan        IN A            192.168.128.25
clusscan        IN A            192.168.128.26
clusscan        IN A            192.168.128.27
In order to have BIND use this file, we need to tell BIND that it exists. This is done by adding a so-called zone to the BIND zone configuration file:
# cat <<__ZONE__ >> /etc/named.rfc1912.zones

zone "mydomain.com" {
    type master;
    file "forward.mydomain.com";
    allow-query { any; };
    allow-transfer { none; };
};
__ZONE__
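The zone file itself can also be validated on its own; named-checkzone ships with the bind package and should report the loaded serial (42 here) followed by "OK":
# named-checkzone mydomain.com /var/named/forward.mydomain.com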
To check whether BIND accepts the new configuration, let's verify the syntax is OK and, if so, restart BIND to pick up the new configuration:
# named-checkconf
# systemctl restart named
Now let's check whether we can resolve any of our local addresses:
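For example (the expected answers follow directly from the forward zone file above; the order of the clusscan addresses may vary):
# dig @localhost primary.mydomain.com +short
192.168.128.11
# dig @localhost clusscan.mydomain.com +short
192.168.128.25
192.168.128.26
192.168.128.27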
We have lift-off! But wait, this is only one direction: from host name to IP address. When we do anything network-related we should also be able to resolve an IP address back to a host name. This is where the reverse zone comes into the picture. To define the reverse zone, we create a configuration file /var/named/reverse.mydomain.com with the following contents:
$TTL    86400
@       IN      SOA     ol9dns.mydomain.com. root.ol9dns.mydomain.com.   (
                                      1997022700 ; Serial
                                      28800      ; Refresh
                                      14400      ; Retry
                                      3600000    ; Expire
                                      86400 )    ; Minimum
        IN      NS      ol9dns.mydomain.com.
254     IN      PTR     ol9dns.mydomain.com.
10      IN      PTR     standalone.mydomain.com.
11      IN      PTR     primary.mydomain.com.
12      IN      PTR     standby.mydomain.com.
21      IN      PTR     clus1.mydomain.com.
22      IN      PTR     clus1-vip.mydomain.com.
23      IN      PTR     clus2.mydomain.com.
24      IN      PTR     clus2-vip.mydomain.com.
25      IN      PTR     clusscan.mydomain.com.
26      IN      PTR     clusscan.mydomain.com.
27      IN      PTR     clusscan.mydomain.com.
Again we need to make BIND aware of the existence of this configuration file by adding a reverse zone:
# cat <<__REVZONE__ >> /etc/named.rfc1912.zones

zone "128.168.192.in-addr.arpa" {
    type master;
    file "reverse.mydomain.com";
    allow-query { any; };
    allow-transfer { none; };
};
__REVZONE__
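As before, the reverse zone file can be validated on its own first:
# named-checkzone 128.168.192.in-addr.arpa /var/named/reverse.mydomain.com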
Again, check the syntax and, if everything is in order, restart the BIND server to pick up the configuration changes:
# named-checkconf
# systemctl restart named
Then check whether an IP address can be resolved back into a host name:
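For example (the answer follows directly from the reverse zone file above):
# dig @localhost -x 192.168.128.12 +short
standby.mydomain.com.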
Finally, we need to adjust the DNS resolution order so host names resolve properly. To do this, make sure our own DNS server is the first to be queried:
# nmcli connection modify enp0s3 +ipv4.dns 192.168.128.254
# nmcli connection modify enp0s3 +ipv4.dns 84.116.46.21
# nmcli connection modify enp0s3 +ipv4.dns 84.116.46.20
# nmcli connection modify enp0s3 ipv4.dns-priority 1
# nmcli connection modify enp0s8 ipv4.dns-priority 2
# nmcli networking off
# nmcli networking on
Note: the last two DNS servers are supplied by my ISP. Use the appropriate DNS servers of your ISP, or use e.g. Google's 8.8.8.8 and 8.8.4.4 servers.
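To double-check the resulting resolver order (on OEL9, NetworkManager maintains /etc/resolv.conf):
# cat /etc/resolv.conf
The nameserver entries should list 192.168.128.254 first; if not, review the dns-priority settings above.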

Hooray! This leaves us with a functional DNS server on our 192.168.128.x network!