Friday, December 18, 2015

Installing and creating a Linux-based Oracle12c Release 1 12.1.0.2 Real Application Clusters database environment

Table of contents

Introduction
The installation of Grid Infrastructure
      Setting up the Linux environment for Oracle use and creating user accounts
      Configuring Linux & the storage subsystem
      Installing Grid Infrastructure
The installation of Real Application Clusters and creation of a multitenant cluster database
      Installing the Real Application Clusters software
      Creating a multitenant cluster database

Introduction

This blog will introduce you to the installation of the Oracle12c Release 1 (12.1.0.2) Grid Infrastructure (GI) environment, as well as the Real Application Clusters (RAC) software and the creation of a multitenant cluster database.

For a complete screen walk-through of this blog, please navigate to my RAC installation page.

The whole installation of Grid Infrastructure and RAC and the creation of a multitenant cluster database is also conveniently available as a single document, which can be obtained from my website.

I would appreciate any feedback and/or questions you have on this. Any questions or feedback can also be sent to mvweb@ziggo.nl.

For the installation and configuration of a Real Application Clusters database the following assumptions are made:
  • The cluster will be comprised of 2 nodes, named clus1 and clus2
  • Oracle Enterprise Linux 7 Update 1 is used on both cluster nodes
  • The ol7_latest and ol7_UEKR3 channels are configured for use with YUM
  • A default Server with GUI installation of Oracle Enterprise Linux is performed, with the restriction that the Secure Linux (SELinux) environment is disabled. No additional packages are installed
  • Each cluster node has 2 network adapters: one for the public network and one for the RAC interconnect
  • The nodes are either registered with the Oracle Unbreakable Linux Network (ULN) or at least with the Oracle public YUM repository (public-yum.oracle.com). A default installation of Oracle Enterprise Linux 7 has the public YUM repository enabled. In case the repository is not configured, issue:
    # cd /etc/yum.repos.d
    # wget http://public-yum.oracle.com/public-yum-ol7.repo

    Make sure this is configured on both nodes.
  • The following IP address information will be used for the various RAC interfaces:
    IP address        Host name           Alias
    192.168.128.101   clus1.ztm.nl        clus1
    192.168.128.111   clus1-vip.ztm.nl    clus1-vip
    192.168.128.102   clus2.ztm.nl        clus2
    192.168.128.112   clus2-vip.ztm.nl    clus2-vip
    192.168.128.121   clus-scan.ztm.nl    clus-scan
    192.168.128.122   clus-scan.ztm.nl    clus-scan
    192.168.128.123   clus-scan.ztm.nl    clus-scan
    10.1.1.101        clus1-priv.ztm.nl   clus1-priv
    10.1.1.102        clus2-priv.ztm.nl   clus2-priv

    The last 2 IP addresses should be configured in the /etc/hosts file on both nodes and not in the DNS server, to prevent hostname/IP address resolution for the interconnect from being impacted when DNS lookups are delayed:
    # cat /etc/hosts
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    10.1.1.101  clus1-priv.ztm.nl clus1-priv
    10.1.1.102  clus2-priv.ztm.nl clus2-priv

    The RAC interconnect interfaces have to be fast and reliable.
  • Another 2 network interfaces (one on each node) are used for connection to a different network (on the 192.168.178.x subnet). They won't be used for the setup of GI
  • The DNS server being used is set up using the Do-It-Yourself DNS server setup blog
  • Role separation between the Grid Infrastructure and Oracle RDBMS environments will be implemented, in that the GI account will only be used to manage the GI environment, and the Oracle RDBMS account only the RDBMS environment
  • The GI environment will use the /u01/app/12.1.0/grid and /u01/app/grid directories and the RDBMS installation will use the /u01/app/oracle directory
  • Grid Naming Service (GNS) will not be configured, leaving name resolution to the DNS server
  • Automatic Storage Management (ASM) will be used in the Oracle environment to store CRS and database related files
  • Linux' Network Time Protocol (NTP) daemon will not be used. Instead, Oracle's Cluster Time Synchronization Service will be used
  • The Oracle software staging area (containing Oracle12c 12.1.0.2 RDBMS and GI software) is to be found on node clus1 in the /stage directory. This directory is accessible by the grid and oracle accounts:
    [grid@clus1 ~]$ cd /stage
    [grid@clus1 stage]$ ls -l
    total 4
    drwxr-xr-x. 7 oracle oinstall  110 Jul  7  2014 database
    drwxr-xr-x. 7 oracle oinstall 4096 Jul  7  2014 grid
    [grid@clus1 stage]$ cd grid
    [grid@clus1 grid]$ ls
    install   rpm           runInstaller  stage
    response  runcluvfy.sh  sshsetup      welcome.html
  • A total of 3 shared disks will be used for the cluster:
    OS device    Disk size  Purpose
    /dev/xvdb5   30 GB      Shared disk 1 (for use by both GI and RDBMS)
    /dev/xvdb6   30 GB      Shared disk 2 (for use by both GI and RDBMS)
    /dev/xvdb7   30 GB      Shared disk 3 (for use by both GI and RDBMS)
    For this a single shared disk of 90 GB is used, which is partitioned to provide the above partition sizes; one way to create these partitions is sketched right after this list.
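
The partitioning of the shared disk itself is not covered by the installer. As a rough sketch, assuming the 90 GB shared disk is visible as /dev/xvdb on both nodes and an MBR partition table with an extended partition is used (so the logical partitions get the numbers 5, 6 and 7), the partitions could be created on clus1 with fdisk:

# fdisk /dev/xvdb
  (create one extended partition spanning the disk, then three logical
   partitions of 30 GB each; these become /dev/xvdb5, /dev/xvdb6 and /dev/xvdb7)
# partprobe /dev/xvdb

Afterwards run partprobe /dev/xvdb on clus2 as well, so that the new partition table is visible on both nodes.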

The installation of Grid Infrastructure


Grid Infrastructure is Oracle's software for managing a cluster and its resources. Grid Infrastructure monitors how the individual cluster members behave and, when problems are detected, moves the services provided on an affected member to other members of the cluster, providing continuity of business access to those services.

Installation of Grid Infrastructure is only possible when the OS has been properly configured. The following sections show the various steps in preparing the OS for installation of Grid Infrastructure as well as the actual installation and configuration of the Grid Infrastructure software itself.

Setting up the Linux environment for Oracle use and creating user accounts


The first step in the Grid Infrastructure installation is to make sure that the various user accounts for an Oracle installation are created on each of the cluster nodes and that the Linux environment is properly configured for supporting Oracle. The latter includes setting appropriate kernel parameters, granting proper privileges and quotas to user accounts, and so on.

Note: as no explicit user and group IDs are provided, the whole setup assumes that the same user and group IDs will be created on all nodes in the cluster (which should be the case when all nodes have been freshly installed). To make use of explicit user and group IDs, use the "-u" flag with useradd and the "-g" flag with groupadd to specify the necessary IDs.


Oracle provides a RPM for setting up the oracle account. This RPM greatly simplifies the setup of user accounts for use with an Oracle RAC and RDBMS environment. The RPM can be installed with:
# yum install oracle-rdbms-server-12cR1-preinstall

This creates the oracle account as well as the dba and oinstall groups. The RPM also sets the appropriate kernel settings (in /etc/sysctl.conf) and the PAM user limits (in /etc/security/limits.conf). For a typical Oracle environment, installing the oracle-rdbms-server-12cR1-preinstall package is sufficient; for more complex environments role separation is needed. In this blog role separation will be implemented by using the grid user for the GI environment and the oracle user for the RDBMS environment. In addition to separating the user accounts, different OS groups are implemented to separate privileges in the Oracle environments. The following OS groups will be set up:
Oracle privilege group   OS group created
Installation group       oinstall
OSDBA group              dba
OSOPER group             oper
OSASM group              asmadmin
OSDBA for ASM group      asmdba
OSOPER for ASM group     asmoper
OSBACKUPDBA group        backupdba
OSDGDBA group            dgdba
OSKMDBA group            kmdba

The oracle-rdbms-server-12cR1-preinstall RPM has already created the oracle account and the dba and oinstall groups, so the following actions create the remaining users and groups:
# groupadd oper
# groupadd asmadmin
# groupadd asmdba
# groupadd asmoper
# groupadd backupdba
# groupadd dgdba
# groupadd kmdba
# usermod -g oinstall -G dba,asmdba,backupdba,dgdba,kmdba oracle
# passwd oracle
# useradd -g oinstall -G asmadmin,asmdba,asmoper,dba grid
# passwd grid
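
Because no explicit user and group IDs are specified, it is worth verifying afterwards that the numeric IDs match on both cluster nodes, for instance with:

# id oracle
# id grid

The uid and gid values reported on clus1 and clus2 should be identical.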

The next step is to create the necessary directories for use with the GI and RDBMS installations:
# mkdir -p /u01/app/12.1.0/grid
# mkdir -p /u01/app/grid
# chown -R grid:oinstall /u01
# mkdir -p /u01/app/oracle
# chown oracle:oinstall /u01/app/oracle
# chmod -R 775 /u01

Note that the default directories, as outlined in the Grid Infrastructure Installation Guide, are used.

As indicated, run these commands on each of the cluster nodes.
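
A quick check of the resulting ownership and permissions on each node could look like:

# ls -ld /u01 /u01/app/12.1.0/grid /u01/app/grid /u01/app/oracle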

Configuring Linux & the storage subsystem


The following actions have to be performed on all nodes that will be part of the cluster. Failure to do so will prevent GI from being installed.


After having created the proper accounts and directories, the next step is to prepare for the GI installation itself. Oracle needs some OS packages installed and configured for use with a shared storage infrastructure. Also, as we'll be using Oracle's CTSS, the NTP daemon needs to be disabled. To disable NTP, issue:
# timedatectl set-ntp false
# systemctl stop ntpdate.service
# systemctl disable ntpdate.service
# systemctl stop ntpd.service
# systemctl disable ntpd.service
# systemctl stop chronyd.service
# systemctl disable chronyd.service
# systemctl daemon-reload
# mv /etc/ntp.conf /etc/ntp.conf.org
# rm -f /var/run/ntpd.pid
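
To confirm that no OS time synchronization service is left active (so that Oracle's Cluster Time Synchronization Service will run in active mode), something like the following can be used on each node; the exact output varies per release:

# systemctl status ntpd chronyd
# timedatectl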

Unlike with the 11g GI installation, the cvuqdisk package doesn't have to be installed upfront; the GI installation will take care of this at the appropriate time.

After this, the ASMLib packages will be installed for ease of disk maintenance in the ASM environment. The ASM kernel driver is supplied with the Unbreakable Enterprise Kernel in use, so only the oracleasm-support package (from the public YUM repository) and the oracleasmlib package (downloaded from http://www.oracle.com/technetwork/server-storage/linux/asmlib/ol7-2352094.html) need to be installed:
# yum install oracleasm-support
# rpm -iv oracleasmlib-2.0.12-1.el7.x86_64.rpm

Please note that the version numbers of these packages can change, as they are regularly updated.

For best Oracle ASM performance, the deadline I/O scheduler should be used for the devices that ASM will use:
# echo deadline > /sys/block/xvdb/queue/scheduler
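
Note that this echo command only changes the scheduler until the next reboot, and only for one device on one node. One way to make the setting persistent on each node is a udev rule; the rule file name below is arbitrary and this is merely a sketch:

# cat > /etc/udev/rules.d/60-oracle-asm-scheduler.rules <<EOF
ACTION=="add|change", KERNEL=="xvdb", ATTR{queue/scheduler}="deadline"
EOF
# udevadm control --reload-rules
# udevadm trigger --action=change --subsystem-match=block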

After installation of the required RPMs ASMLib needs to be configured:
# oracleasm configure -i

Provide the following answers:
Default user to own the driver interface []: grid
Default group to own the driver interface []: oinstall
Scan for Oracle ASM disks on boot (y/n) [y]: y

The configuration of ASMLib should be performed on all nodes in the cluster.

Make sure that logical block sizes of 512 bytes are used with ASM on each node to prevent the INS-20802 error (as reported in bugs 20033480 and 16870214) from being reported when the Grid Management Repository is being created:
# oracleasm configure -b
# oracleasm exit
# oracleasm init

After this, configure the disk partitions for use with ASM on the node clus1:
# oracleasm createdisk CLUS1 /dev/xvdb5
# oracleasm createdisk CLUS2 /dev/xvdb6
# oracleasm createdisk CLUS3 /dev/xvdb7

Then, on the other nodes of the cluster, issue:
# oracleasm scandisks
# oracleasm listdisks
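
Once the disks have been stamped on clus1 and scanned on the other node, listdisks should show the three ASM disks on every node, along the lines of:

# oracleasm listdisks
CLUS1
CLUS2
CLUS3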

As we don't use NAS or NFS devices, the Name Service Cache Daemon (nscd) does not need to be installed.

Now disable the firewall to prevent the following errors from being raised during the installation:
PRVG-11850 : The system call "connect" failed with error "113" while executing exectask on node "clus2"
No route to host
PRVG-11850 : The system call "connect" failed with error "113" while executing exectask on node "clus1"
No route to host

The firewall can be disabled with:
# systemctl mask firewalld
# systemctl daemon-reload
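
Note that masking only prevents firewalld from being started in the future; if the firewall is already running on a node, stop it as well:

# systemctl stop firewalld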

Furthermore, the avahi daemon should be disabled:
# systemctl stop avahi-daemon
# systemctl disable avahi-daemon
# systemctl daemon-reload

Now we're ready for the Grid Infrastructure software installation. This is done by logging on to the system as the grid user, navigating to the grid directory of the staging area and starting the Universal Installer:
$ cd /stage/grid
$ ./runInstaller


Installing Grid Infrastructure


We will configure Grid Infrastructure for use with a cluster environment, so select the "Install and Configure Oracle Grid Infrastructure for a Cluster" option in the Select Installation Option screen.

A so-called standard cluster will be created, so select the "Configure a Standard cluster" option in the Select Cluster Type screen. For a description of the functionality related to a Flex Cluster please consult the Oracle documentation.

To have the most control over how GI is configured, select the "Advanced installation" method in the Select Installation Type screen.

For the sake of simplicity only English will be selected as the default language for the GI environment. For this, the defaults as shown in the Select Product Languages screen can be used.

Now the cluster identity has to be supplied in the Grid Plug and Play Information screen. The cluster will be called clus and the SCAN name to be provided must match the one configured in DNS (which is clus-scan.ztm.nl as outlined above). For the SCAN port, leave the default value (1521) so that no client TNS configuration files have to be adjusted other than for the SCAN name. As indicated, no GNS will be configured, so deselect the Configure GNS checkbox.
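
Before continuing it can be worthwhile to verify that the SCAN name indeed resolves to the three addresses configured in DNS (192.168.128.121 through 192.168.128.123), for example with:

$ nslookup clus-scan.ztm.nl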

On the Cluster Node Information page the cluster member names need to be provided. Press the Add... button and supply the node information related to the clus2 cluster member (both the public hostname - clus2.ztm.nl as well as the VIP hostname - clus2-vip.ztm.nl have to be provided). After that, press the SSH connectivity... button to set up user equivalence for the grid user. Provide the password for the user and press the Setup button. When finished, press Next.

On the Specify Network Interface Usage page, for the network interfaces to be used select the 192.168.128.x network as Public cluster network (so all nodes on this ethernet segment can access the cluster) and the 10.1.1.x network as Private network for use of the cluster interconnect. For the third network (192.168.178.x) leave the default of Do Not Use.

Then on the Storage Option Information page select that the cluster will use a Standard ASM setup for storing its Cluster Registry.

Next, on the Create ASM Disk Group page create an ASM disk group which is large enough to store both the GI information (OCR and management repository) and the cluster database. For this, select the disks created earlier with the ASMLib configuration (the CLUS1, CLUS2, and CLUS3 disks, each 30 GB in size), provide CLUS as the disk group name, and select Normal for the ASM redundancy.

When asked on the Specify ASM Password page, provide the account details for both the SYS and ASMSNMP accounts of the ASM instance. Here you can choose to have separate passwords or to use the same password for both accounts.

We're not implementing IPMI, so on the Failure Isolation Support page select Do not use Intelligent Platform Management Interface (IPMI).

As no Cloud Control has been configured (yet), the Enterprise Manager configuration will be skipped: make sure that on the Specify Management Options page the Register with Enterprise Manager (EM) Cloud Control option is NOT selected.

Then the privileged OS groups need to be defined in the Privileged Operating System Groups screen. As all groups are already configured properly there is nothing to be selected.

Then the GI software location and Oracle base have to be provided on the Specify Installation Location page. Make sure that the software location points to the already created directory /u01/app/12.1.0/grid.

Next, the Create Inventory page asks for the central inventory location of the Universal Installer. Simply use the proposed default location.

During the installation one can choose to have the privileged scripts (to be run under the root account) run automatically under a privileged account, as prompted on the Root script execution configuration page. For the sake of simplicity I've chosen to run the scripts manually when prompted, so make sure the Automatically run configuration scripts option is deselected.

The Universal Installer then performs checks on whether the system meets the requirements as visible on the Perform Prerequisite Checks screen.

After the checks the actual installation can proceed. Press the Install button on the Summary page to commence the actual software installation and configuration.

The Universal Installer then starts to install the software on both cluster nodes as well as performs the configuration of the GI and shows the installation & configuration output in the Install Product screen.

After installation of the software, you're prompted for the run of privileged scripts by means of the Execute Configuration scripts popup window. Run the scripts in the given order first on node clus1 and then on node clus2 and make sure that you're logged in as the root account for this.

The privileged scripts have to be run from within a terminal window as the root account. When everything goes well, the output of the /u01/app/12.1.0/grid/root.sh script should report the message CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded on both nodes.
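
Assuming a fresh installation that uses the default central inventory location of /u01/app/oraInventory, the sequence on each node looks roughly as follows (first on clus1, then on clus2):

# /u01/app/oraInventory/orainstRoot.sh
# /u01/app/12.1.0/grid/root.sh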

After having run the scripts on both nodes, press the OK button which allows the installation to finish configuring the environment. This leads to the Finish page where the installation then can be closed after successful completion.

Voila! The Grid Infrastructure 12.1.0.2 has been installed on a 2-node cluster! Now we'll get to phase 2: the installation of the Real Application Clusters software and the actual database creation.
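
Before moving on, the health of the freshly configured cluster can be checked as the grid user, for instance with:

$ /u01/app/12.1.0/grid/bin/crsctl check cluster -all
$ /u01/app/12.1.0/grid/bin/crsctl stat res -t

Both nodes should report their Cluster Ready Services, Cluster Synchronization Services and Event Manager as online.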

The installation of Real Application Clusters and creation of a multitenant cluster database

Now that Grid Infrastructure is up and running the next step in the creation of a cluster database can be performed: the installation of the Oracle12c Release 1 (12.1.0.2) Real Application Clusters (RAC) environment.

For this installation the following assumptions are made:
  • The cluster is comprised of 2 nodes, named clus1 and clus2
  • The Grid Infrastructure 12.1.0.2 has been installed on the cluster using the steps described in the previous chapter of this blog
  • The RAC environment will use the /u01/app/oracle/product/12.1.0/dbhome_1 directory
  • The Oracle software staging area (containing Oracle12c 12.1.0.2 RDBMS software) is to be found on node clus1 in the /stage directory. This directory is accessible by the oracle account:
    [oracle@clus1 ~]$ cd /stage
    [oracle@clus1 stage]$ ls -l
    total 4
    drwxr-xr-x. 7 oracle oinstall  110 Jul  7  2014 database
    drwxr-xr-x. 7 oracle oinstall 4096 Jul  7  2014 grid
    [oracle@clus1 stage]$ cd database
    [oracle@clus1 database]$ ls
    install  response  rpm  runInstaller  sshsetup  stage  welcome.html
  • The database will be created on an ASM storage facility in a disk group named CLUS
  • A multitenant container database named cluscdb will be created, as well as a pluggable database cluspdb within this container
  • First the Real Application Clusters software will be installed, and in a second phase the Database Configuration Assistant will be used to create the cluster database

Installing the Real Application Clusters software


Installing a Real Application Clusters (or RAC) cluster database consists of 2 steps, of which the Grid Infrastructure (GI) installation is the most tedious one. The second step, the RAC software installation and the cluster database creation, is a relatively straightforward process.

As the GI has already been installed (see my previous blog for this) we're ready for the RAC software installation. This is done by logging on to the system as the oracle user, navigating to the database directory of the staging area and starting the Universal Installer:
$ cd /stage/database
$ ./runInstaller

We are currently not interested in any security update notifications for this software setup, so the checkbox needs to be deselected on the Configure Security Options screen.

This prompts for a confirmation that this is indeed what is to be done with the My Oracle Support Username/Email Address Not Specified popup window. Press the Yes button to continue.

Then a selection is made to only install the RAC software (as the cluster database will be created separately) on the Select Installation Option screen. Choose the Install database software only option for this.

The next step is to indicate that a RAC setup needs to be performed. This is done by selecting the Oracle Real Application Clusters database installation option on the Grid Installation Options page. Please consult the Oracle Real Application Clusters documentation on the differences between RAC and RAC One Node.

On the Select List of Nodes page the cluster member names need to be provided. In case clus2 is not selected, select the node. After that, press the SSH connectivity... button to set up user equivalence for the oracle user. Provide the password for the user and press the Setup button. When finished, press Next.

For the sake of simplicity only English will be selected as the default language for the RAC environment on the Select Product Languages page.

Then - in case the staging area allows this - a selection for the type of Oracle RDBMS software installation will be offered on the Select Database Edition screen. As we only have the Enterprise Edition available, this is the only selection possible.

Choose where the RAC software is to be installed on the Specify Installation Location page. For the Oracle Base the same default directory as with the GI installation is provided, giving access to the central inventory of the Universal Installer as well as the Automatic Diagnostic Repository (ADR). The default Software location can be used as well (as this is the directory where we decided to install the RAC software).

Then the privileged OS groups need to be defined on the Privileged Operating System groups screen. As all groups are already configured properly there is nothing to be selected.

The Universal Installer then presents the Perform Prerequisite Checks page and performs checks on whether the system meets the requirements.

After the checks the actual installation can proceed. The Universal Installer presents the Summary page with a summary of the selected installation options. Press the Install button to commence the installation and configuration.

The Universal Installer then presents the Install Product screen and starts to install the software on both cluster nodes as well as performs the configuration of the RAC software.

After installation of the software, you're prompted for the run of privileged scripts by means of the Execute Configuration Scripts popup window. Run the scripts in the given order first on node clus1 and then on node clus2 and make sure that you're logged in as the root account for this.

After having run the scripts on both nodes, press the OK button, which allows the installation to finish configuring the environment and present the Finish screen.

Voila! The Real Application Clusters 12.1.0.2 software has been installed on a 2-node cluster! Next stop: the creation of a cluster database.

Creating a multitenant cluster database


The creation of a cluster database is best performed with the Database Configuration Assistant (DBCA). DBCA takes care of creating the database as well as registering the database instances in the Oracle Cluster Registry. A database can be created manually as well, but that is more error-prone due to the various creation and registration commands needed to perform the same actions. For the sake of simplicity the database will be created with DBCA:

[oracle@clus1 product]$ cd /u01/app/oracle/product/12.1.0/dbhome_1
[oracle@clus1 dbhome_1]$ export ORACLE_HOME=`pwd`
[oracle@clus1 dbhome_1]$ PATH=$ORACLE_HOME/bin:$PATH
[oracle@clus1 dbhome_1]$ dbca

On the first DBCA screen - named Database Operation - select that a new database has to be created.

To have the most control over how the database is configured, select an advanced creation method on the Creation Mode screen.

The third screen (Database Template) prompts for the kind of database to be created. Select Oracle Real Application Clusters (RAC) database when this is not already selected. For the configuration type select Admin-Managed. For the difference between a policy-managed environment and an admin-managed environment please consult the Real Application Clusters documentation. Also select the General Purpose or Transaction Processing database type.

On the Database Identification screen provide the cluster database name (which is cluscdb in our case), specify that it is to be a multitenant database by selecting the Create As Container Database checkbox, and create a pluggable database (with the name cluspdb) as well.

On the Database Placement page select both nodes in the cluster and press Next.

Let the Cluster Verification Utility run regularly to check on the health of the cluster by selecting this option on the Management Options page. As no Enterprise Manager Cloud Control has been configured (yet) and no Enterprise Manager Database Express is requested, make sure that both options are deselected.

When asked on the Database Credentials page, provide the account details for the SYS, SYSTEM, PDBADMIN, and DBSNMP accounts of the multitenant database. Here you can choose to have separate passwords or to use the same password for all accounts (which is done for this installation attempt).

On the Storage Location page use the already created CLUS ASM disk group (which was created during the GI installation) for storing the database files. To save space in this demo setup the Recovery Area is disabled.

In the Database Options screen select whether you wish to install the sample schemas and Database Vault/Label Security (for which you need a separate license). For the sake of simplicity nothing is selected.

At the Initialization Parameters screen select how large the instance is allowed to become. In this case we select Automatic Memory Management with a MEMORY_TARGET of 1GB.

For this example database I select the AL32UTF8 database character set on the Character Sets tab of the same screen.

At the Create Options screen select the option to actually create the database.

DBCA then shows the Prerequisite Checks screen and checks whether the system meets the requirements for creating a database.

A Summary is given on what type of database is to be created. The actual creation commences when pressing the Finish button.

DBCA then starts to create the database and registers it in the Oracle Cluster Registry. The progress of these operations can be monitored on the Progress page.

If everything went fine, DBCA finishes by displaying the Finish screen with a summary of the actions that have been taken to create the database.

To verify that the instances of the container database are running, query GV$INSTANCE from within e.g. SQL*Plus:

[oracle@clus1 database]$ . oraenv
ORACLE_SID = [oracle] ? cluscdb
The Oracle base has been set to /u01/app/oracle
[oracle@clus1 database]$ ORACLE_SID=cluscdb1
[oracle@clus1 database]$ sqlplus "/ as sysdba"

SQL*Plus: Release 12.1.0.2.0 Production on Mon Dec 14 15:39:56 2015

Copyright (c) 1982, 2014, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64-bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management,
OLAP,
Advanced Analytics and Real Application Testing options

SQL> set pages 9999
SQL> select inst_id, instance_number, instance_name,
  2         host_name, version, startup_time, thread#,
  3         logins, database_status, instance_role,
  4         active_state, edition
  5  from gv$instance;

   INST_ID INSTANCE_NUMBER INSTANCE_NAME
---------- --------------- ---------------
HOST_NAME
----------------------------------------------------------------
VERSION           STARTUP_T    THREAD# LOGINS     DATABASE_STATUS
----------------- --------- ---------- ---------- -----------------
INSTANCE_ROLE      ACTIVE_ST EDITION
------------------ --------- -------
         1               1 cluscdb1
clus1.ztm.nl
12.1.0.2.0        14-DEC-15          1 ALLOWED    ACTIVE
PRIMARY_INSTANCE   NORMAL    EE

         2               2 cluscdb2
clus2.ztm.nl
12.1.0.2.0        14-DEC-15          2 ALLOWED    ACTIVE
PRIMARY_INSTANCE   NORMAL    EE


At this step we have a fully operational cluster database!
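
As a final check, the cluster registration of the database can be queried with srvctl from either node, for example:

[oracle@clus1 database]$ srvctl status database -d cluscdb
[oracle@clus1 database]$ srvctl config database -d cluscdb

This should report one instance of cluscdb running on each of the two cluster nodes.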