Saturday 30 July 2016

Important srvctl and crsctl commands

Displays the configuration for the SCAN VIP.

[root@rac2 Desktop]# srvctl config scan -i 1
SCAN name: rac-scan, Network: 1/192.192.2.0/255.255.255.0/eth4
SCAN VIP name: scan1, IP: /rac-scan/192.192.2.51


Displays the configuration information for the VIP(s) configured on a node.

[root@rac2 Desktop]# srvctl config vip  -n rac2
VIP exists: /rac2-vip/192.192.2.202/192.192.2.0/255.255.255.0/eth4, hosting node rac2

Displays configuration information for the SCAN listener.

[root@rac2 Desktop]# srvctl config scan_listener -i 1
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521

[root@rac2 Desktop]# srvctl config nodeapps
Network exists: 1/192.192.2.0/255.255.255.0/eth4, type static
VIP exists: /rac1-vip/192.192.2.201/192.192.2.0/255.255.255.0/eth4, hosting node rac1
VIP exists: /rac2-vip/192.192.2.202/192.192.2.0/255.255.255.0/eth4, hosting node rac2
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016

Temporarily relocates a service from one node of the cluster to another.
The mandatory options are either -i and -t, or -c and -n, but not both pairs.

[root@rac2 Desktop]# srvctl relocate service -d orcl -s taf_service -i orcl1 -t orcl2


[root@rac2 Desktop]# crsctl config crs
     Display the OHAS autostart configuration on this server
[root@rac2 Desktop]# crsctl disable crs
     Disable OHAS autostart on this server

  crsctl query crs activeversion
     Lists the Oracle Clusterware operating version
  crsctl query crs releaseversion
     Lists the Oracle Clusterware release version
  crsctl query crs softwareversion [<nodename>| -all]
     Lists the version of Oracle Clusterware software installed

Find the public and private network IP addresses

[root@rac2 Desktop]# oifcfg getif
eth5  10.10.2.0  global  cluster_interconnect
eth4  192.192.2.0  global  public

Check the $CRS_HOME/cdata/<cluster_name> directory (default location of the automatic OCR backups).

Friday 29 July 2016

Dba_jobs view

DESC DBA_JOBS
----------------------
JOB
SCHEMA_USER
LAST_DATE
THIS_DATE
NEXT_DATE
WHAT     --> Body of the anonymous PL/SQL block that the job executes
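For example, a quick query against this view to see what each job runs and when it is due next:

SQL> SELECT job, schema_user, last_date, next_date, broken, what
       FROM dba_jobs
      ORDER BY job;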
 

Bringing Down a DBMS_JOB
1. Find the Job You Want to Bring Down
In order to do anything, you first need to find the job that is giving you a headache. Go ahead and run running_jobs.sql. This will give you the key information (job, sid, serial#, and spid) needed for the following actions in bringing down the job.
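The running_jobs.sql script itself is not reproduced here; a minimal sketch of such a query (the script name is the author's, the query below is just an illustration) could be:

SELECT jr.job, s.sid, s.serial#, p.spid
  FROM dba_jobs_running jr, v$session s, v$process p
 WHERE jr.sid = s.sid
   AND s.paddr = p.addr;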
2. Mark the DBMS_JOB as Broken
Use the following command for the job that you have to deal with.
SQL> EXEC DBMS_JOB.BROKEN(job#,TRUE);
All this command does is mark the job so that if we get it to stop, it won't start again. Let's make one thing perfectly clear, after executing this command the job is still running.
As a side note, if you are trying to shut down a database with jobs that run throughout the day, they may hinder your attempts to bring down the database cleanly. This is a wonderful command to make sure no jobs are executing during the shutdown process. Just be aware that you will need to mark the jobs as unbroken when the database comes back up, more on that later.
3. Kill the Oracle Session
Since the job is still running and it isn't going to end soon, you will need to kill the Oracle session that is executing the job. Use the following command to kill the session running the job.
ALTER SYSTEM KILL SESSION 'sid,serial#';
4. Kill the O/S Process
More often than not the previous step will still leave the job attached to the database and still running. When this happens you will need to go out to the operating system level and get rid of the process that has spawned from the running job. In order to do this you must login to the database box and issue the following command, depending on the type of operating system you have.
For Windows, at the DOS prompt: orakill sid spid
For UNIX, at the command line: kill -9 spid
orakill is an Oracle-supplied utility, while kill is a UNIX command.
5. Check if the Job is Still Running
Re-run the session_jobs.sql script to see if you have gotten rid of the job. If you have, there is no reason to go further. Usually steps 1 through 4 will be sufficient to get rid of a job, but when the job is running wild you will have to continue with steps 6 through 11, which describe a process for bouncing the job queue processes.
6. Determine the Current Number of Job Queue Processes
SQL> col value for a10
SQL> select name,value from v$parameter where name = 'job_queue_processes';
7. Alter the Job Queue to Zero
SQL> ALTER SYSTEM SET job_queue_processes = 0;
This will bring down the entire job queue processes.
8. Validate that No Processes are Using the Job Queue
Re-run the session_jobs.sql script to see if any jobs are still running. Since we have given a hard stop to the job queue and issued the kill commands, you can now wait until no more jobs are running. After all the jobs have quit running, you can do whatever maintenance or tuning you need to do before proceeding.
9. Mark the DBMS_JOB as Not Broken
You can now reset the broken jobs to not broken so they can run again. Just issue the command:
SQL> EXEC DBMS_JOB.BROKEN(job#,FALSE);
10. Alter the Job Queue to Original Value
Set the job queue to its original value so that the jobs can run again.
ALTER SYSTEM SET job_queue_processes = original_value;
11. Validate that DBMS_JOB Is Running
To make sure everything is back to normal, re-run the above scripts to validate that jobs are scheduled, not broken, and are executing with the next and last dates columns changing.
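For instance, something like the following (an illustrative check, not the original script) shows the broken flag and the date columns moving:

SQL> SELECT job, broken, failures, last_date, next_date FROM dba_jobs;
SQL> SELECT * FROM dba_jobs_running;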
 
 
Check
dba_scheduler_jobs
dba_scheduler_running_jobs
DBA_jobs
DBA_jobs_running
v$session_longops

 
You need to identify the session of the job in v$session.
In v$session there is much more information.
Among other things, it contains the sql_id of the statement the session is executing.
V$sql_plan will show the plan of that sql.
V$session_wait will show what it is waiting for
v$sess_io will show whether any IO is going on
v$session_longops will show what long operations are going on and how far they are.
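As a rough illustration, assuming the SID of the job's session is already known:

SQL> SELECT sid, sql_id, event, seconds_in_wait
       FROM v$session
      WHERE sid = &job_sid;

SQL> SELECT opname, target, sofar, totalwork, time_remaining
       FROM v$session_longops
      WHERE sid = &job_sid
        AND sofar <> totalwork;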
 
 
1- the alert log for this instance;
2- select * from dba_objects;
3- on either Windows or UNIX, check what other jobs are scheduled;
4- you will need to use Statspack for this situation,
   so take a Statspack snapshot before and while running this job;
5- show your stored procedure code;
6- an explain plan for the select statement;
7- invoke the job and check for locks.

Thursday 21 July 2016

Adding a Service to cluster using SRVCTL

Services in Oracle Database 10g:

In Real Application Cluster (RAC) environments it is sometimes desirable to run applications on a subset of RAC nodes, or have preferred nodes for specific applications. In Oracle 10g this is accomplished using services.
•Cluster Configuration
•Service Creation
•Jobs and Services
•Connections and Services

Cluster Configuration :
Before using services, you must check that the cluster configuration is correct. The following command and output show the expected configuration for a three-node database called ORCL.

# srvctl config database -d ORCL
server01 ORCL1 /u01/app/oracle/product/10.1.0/db_1
server02 ORCL2 /u01/app/oracle/product/10.1.0/db_1
server03 ORCL3 /u01/app/oracle/product/10.1.0/db_1

Service Creation :
The srvctl utility, dbca utility and DBMS_SERVICES package can all be used to create and modify services, but for this article we will restrict ourselves to looking at the srvctl utility. Let's assume we have two applications that should run in the following way.
•OLTP - Should run on nodes 1 and 2 of the RAC, but is able to run on node 3 if nodes 1 and 2 are not available.
•BATCH - Should run on node 3, but is able to run on nodes 1 and 2 if node 3 is not available.

To meet this requirement we might create the following services.
Set environment :
# export ORACLE_HOME=/u01/app/oracle/product/10.1.0/db_1
# export PATH=$ORACLE_HOME/bin:$PATH

Create services :
# srvctl add service -d ORCL -s OLTP_SERVICE -r ORCL1,ORCL2 -a ORCL3
# srvctl add service -d ORCL -s BATCH_SERVICE -r ORCL3 -a ORCL1,ORCL2

The OLTP_SERVICE is able to run on all RAC nodes because ORCL3 is present in the available list, but will run in preference on nodes 1 and 2 (indicated by the -r option).

The BATCH_SERVICE is able to run on all RAC nodes because ORCL1 and ORCL2 are in the available list, but will run in preference on node 3 (indicated by the -r option).

The services can be started and stopped using the following commands.
# srvctl start service -d ORCL -s OLTP_SERVICE
# srvctl start service -d ORCL -s BATCH_SERVICE
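You can then confirm where each service is currently running:
# srvctl status service -d ORCL -s OLTP_SERVICE
# srvctl status service -d ORCL -s BATCH_SERVICE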

Jobs and Services

The Oracle 10g scheduler allows jobs to be linked with job classes, which in turn can be linked to services, allowing jobs to run on specific nodes in a RAC environment. To support our requirements we might create two job classes as follows.

-- Create OLTP and BATCH job classes.
BEGIN
  DBMS_SCHEDULER.create_job_class(
    job_class_name => 'OLTP_JOB_CLASS',
    service        => 'OLTP_SERVICE');
  DBMS_SCHEDULER.create_job_class(
    job_class_name => 'BATCH_JOB_CLASS',
    service        => 'BATCH_SERVICE');
END;
/
-- Make sure the relevant users have access to the job classes.
GRANT EXECUTE ON sys.oltp_job_class TO my_user;
GRANT EXECUTE ON sys.batch_job_class TO my_user;
These job classes can then be assigned to existing jobs or during job creation.
-- Create a job associated with a job class.
BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'my_user.oltp_job_test',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN NULL; END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY;',
    job_class       => 'SYS.OLTP_JOB_CLASS',
    end_date        => NULL,
    enabled         => TRUE,
    comments        => 'Job linked to the OLTP_JOB_CLASS.');
END;
/
-- Assign a job class to an existing job.
EXEC DBMS_SCHEDULER.set_attribute('MY_BATCH_JOB', 'JOB_CLASS', 'BATCH_JOB_CLASS');

Connections and Services :

The use of services is not restricted to scheduled jobs. These services can be used in the tnsnames.ora file to influence which nodes are used for each application. Examples of the tnsnames.ora file entries are displayed below.

OLTP =
  (DESCRIPTION =
    (LOAD_BALANCE = ON)
    (FAILOVER = ON)
    (ADDRESS = (PROTOCOL = TCP)(HOST = server01)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = server02)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = server03)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = OLTP_SERVICE)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 20)
        (DELAY = 1)
      )
    )
  )

BATCH =
  (DESCRIPTION =
    (LOAD_BALANCE = ON)
    (FAILOVER = ON)
    (ADDRESS = (PROTOCOL = TCP)(HOST = server01)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = server02)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = server03)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = BATCH_SERVICE)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 20)
        (DELAY = 1)
      )
    )
  )

Provided applications use the appropriate connection identifier, they should only connect to the nodes associated with the service.

Try:

$ sqlplus  system@BATCH_SERVICE

$ sqlplus system@OLTP_SERVICE

Monday 18 July 2016

crsctl stat res -t and crs_stat -t Results


[root@rac1 ~]# crsctl stat res -t
--------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
---------------------------------------------------------------------
Local Resources
---------------------------------------------------------------------
ora.BACKUP.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.DATA.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.VOTE.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.asm
               ONLINE  ONLINE       rac1       Started
               ONLINE  ONLINE       rac2       Started
ora.gsd
               OFFLINE OFFLINE      rac1
               OFFLINE OFFLINE      rac2
ora.net1.network
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.ons
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
-----------------------------------------------------------------------
Cluster Resources
----------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac2
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac1
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac1
ora.cvu
      1        ONLINE  ONLINE       rac1
ora.oc4j
      1        ONLINE  ONLINE       rac1
ora.orcl.db
      1        OFFLINE OFFLINE
      2        ONLINE  ONLINE       rac1        Open
ora.rac1.vip
      1        ONLINE  ONLINE       rac1
ora.rac2.vip
      1        ONLINE  ONLINE       rac2
ora.scan1.vip
      1        ONLINE  ONLINE       rac2
ora.scan2.vip
      1        ONLINE  ONLINE       rac1
ora.scan3.vip
      1        ONLINE  ONLINE       rac1
[root@rac1 ~]#



[root@rac2 ~]# crsctl stat res -t
---------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
---------------------------------------------------------------------
Local Resources
---------------------------------------------------------------------
ora.BACKUP.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.DATA.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.VOTE.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.asm
               ONLINE  ONLINE       rac1       Started
               ONLINE  ONLINE       rac2       Started
ora.gsd
               OFFLINE OFFLINE      rac1
               OFFLINE OFFLINE      rac2
ora.net1.network
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.ons
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
---------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac2
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac1
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac1
ora.cvu
      1        ONLINE  ONLINE       rac1
ora.oc4j
      1        ONLINE  ONLINE       rac1
ora.orcl.db
      1        OFFLINE OFFLINE
      2        ONLINE  ONLINE       rac1        Open
ora.rac1.vip
      1        ONLINE  ONLINE       rac1
ora.rac2.vip
      1        ONLINE  ONLINE       rac2
ora.scan1.vip
      1        ONLINE  ONLINE       rac2
ora.scan2.vip
      1        ONLINE  ONLINE       rac1
ora.scan3.vip
      1        ONLINE  ONLINE       rac1
[root@rac2 ~]#


[root@rac1 ~]# crsctl stat res -t
----------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
----------------------------------------------------------------------
Local Resources
----------------------------------------------------------------------
ora.BACKUP.dg
               ONLINE  ONLINE       rac1
ora.DATA.dg
               ONLINE  ONLINE       rac1
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1
ora.VOTE.dg
               ONLINE  ONLINE       rac1
ora.asm
               ONLINE  ONLINE       rac1      Started
ora.gsd
               OFFLINE OFFLINE      rac1
ora.net1.network
               ONLINE  ONLINE       rac1
ora.ons
               ONLINE  ONLINE       rac1
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac1
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac1
ora.cvu
      1        ONLINE  ONLINE       rac1
ora.oc4j
      1        ONLINE  ONLINE       rac1
ora.orcl.db
      1        OFFLINE OFFLINE
      2        ONLINE  ONLINE       rac1         Open
ora.rac1.vip
      1        ONLINE  ONLINE       rac1
ora.rac2.vip
      1        ONLINE  INTERMEDIATE rac1                                                                   FAILED OVER
ora.scan1.vip
      1        ONLINE  ONLINE       rac1
ora.scan2.vip
      1        ONLINE  ONLINE       rac1
ora.scan3.vip
      1        ONLINE  ONLINE       rac1
[root@rac1 ~]#



[root@rac1 ~]# crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[root@rac1 ~]# crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.BACKUP.dg  ora....up.type ONLINE    ONLINE    rac1
ora.DATA.dg    ora....up.type ONLINE    ONLINE    rac1
ora....ER.lsnr ora....er.type ONLINE    ONLINE    rac1
ora....N1.lsnr ora....er.type ONLINE    ONLINE    rac1
ora....N2.lsnr ora....er.type ONLINE    ONLINE    rac1
ora....N3.lsnr ora....er.type ONLINE    ONLINE    rac1
ora.VOTE.dg    ora....up.type ONLINE    ONLINE    rac1
ora.asm        ora.asm.type   ONLINE    ONLINE    rac1
ora.cvu        ora.cvu.type   ONLINE    ONLINE    rac1
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE
ora....network ora....rk.type ONLINE    ONLINE    rac1
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    rac1
ora.ons        ora.ons.type   ONLINE    ONLINE    rac1
ora.orcl.db    ora....se.type ONLINE    ONLINE    rac1
ora....SM1.asm application    ONLINE    ONLINE    rac1
ora....C1.lsnr application    ONLINE    ONLINE    rac1
ora.rac1.gsd   application    OFFLINE   OFFLINE
ora.rac1.ons   application    ONLINE    ONLINE    rac1
ora.rac1.vip   ora....t1.type ONLINE    ONLINE    rac1
ora.rac2.vip   ora....t1.type ONLINE    ONLINE    rac1
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    rac1
ora.scan2.vip  ora....ip.type ONLINE    ONLINE    rac1
ora.scan3.vip  ora....ip.type ONLINE    ONLINE    rac1
[root@rac1 ~]#
[root@rac1 ~]# crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.BACKUP.dg  ora....up.type ONLINE    ONLINE    rac1
ora.DATA.dg    ora....up.type ONLINE    ONLINE    rac1
ora....ER.lsnr ora....er.type ONLINE    ONLINE    rac1
ora....N1.lsnr ora....er.type ONLINE    OFFLINE
ora....N2.lsnr ora....er.type ONLINE    ONLINE    rac1
ora....N3.lsnr ora....er.type ONLINE    ONLINE    rac1
ora.VOTE.dg    ora....up.type ONLINE    ONLINE    rac1
ora.asm        ora.asm.type   ONLINE    ONLINE    rac1
ora.cvu        ora.cvu.type   ONLINE    ONLINE    rac1
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE
ora....network ora....rk.type ONLINE    ONLINE    rac1
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    rac1
ora.ons        ora.ons.type   ONLINE    ONLINE    rac1
ora.orcl.db    ora....se.type ONLINE    ONLINE    rac1
ora....SM1.asm application    ONLINE    ONLINE    rac1
ora....C1.lsnr application    ONLINE    ONLINE    rac1
ora.rac1.gsd   application    OFFLINE   OFFLINE
ora.rac1.ons   application    ONLINE    ONLINE    rac1
ora.rac1.vip   ora....t1.type ONLINE    ONLINE    rac1
ora....SM2.asm application    ONLINE    ONLINE    rac2
ora....C2.lsnr application    ONLINE    OFFLINE
ora.rac2.gsd   application    OFFLINE   OFFLINE
ora.rac2.ons   application    ONLINE    ONLINE    rac2
ora.rac2.vip   ora....t1.type ONLINE    OFFLINE
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    rac1
ora.scan2.vip  ora....ip.type ONLINE    ONLINE    rac1
ora.scan3.vip  ora....ip.type ONLINE    ONLINE    rac1
[root@rac1 ~]#



[root@rac2 ~]# crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4529: Cluster Synchronization Services is online
CRS-4534: Cannot communicate with Event Manager
[root@rac2 ~]# crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.BACKUP.dg  ora....up.type ONLINE    ONLINE    rac1
ora.DATA.dg    ora....up.type ONLINE    ONLINE    rac1
ora....ER.lsnr ora....er.type ONLINE    ONLINE    rac1
ora....N1.lsnr ora....er.type ONLINE    ONLINE    rac1
ora....N2.lsnr ora....er.type ONLINE    ONLINE    rac1
ora....N3.lsnr ora....er.type ONLINE    ONLINE    rac1
ora.VOTE.dg    ora....up.type ONLINE    ONLINE    rac1
ora.asm        ora.asm.type   ONLINE    ONLINE    rac1
ora.cvu        ora.cvu.type   ONLINE    ONLINE    rac1
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE
ora....network ora....rk.type ONLINE    ONLINE    rac1
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    rac1
ora.ons        ora.ons.type   ONLINE    ONLINE    rac1
ora.orcl.db    ora....se.type ONLINE    ONLINE    rac1
ora....SM1.asm application    ONLINE    ONLINE    rac1
ora....C1.lsnr application    ONLINE    ONLINE    rac1
ora.rac1.gsd   application    OFFLINE   OFFLINE
ora.rac1.ons   application    ONLINE    ONLINE    rac1
ora.rac1.vip   ora....t1.type ONLINE    ONLINE    rac1
ora....SM2.asm application    ONLINE    ONLINE    rac2
ora....C2.lsnr application    ONLINE    OFFLINE
ora.rac2.gsd   application    OFFLINE   OFFLINE
ora.rac2.ons   application    ONLINE    OFFLINE
ora.rac2.vip   ora....t1.type ONLINE    ONLINE    rac1
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    rac1
ora.scan2.vip  ora....ip.type ONLINE    ONLINE    rac1
ora.scan3.vip  ora....ip.type ONLINE    ONLINE    rac1
[root@rac2 ~]#

Sunday 17 July 2016

Clusterware startup process:

Clusterware Startup:
- When a node of an Oracle Clusterware cluster restarts, OHASD is started by platform-specific means. OHASD accesses the OLR (Oracle Local Registry), stored on the local file system, to get the data needed to complete OHASD initialization.
- OHASD brings up GPNPD and CSSD. CSSD accesses the GPnP profile stored on the local file system, which contains the following vital bootstrap data:
a.  ASM_DISKSTRING parameter (if specified), used to locate the devices on which ASM disks are configured.
b.  ASM SPFILE location : name of the diskgroup containing the ASM spfile
c.  Voting disk files location : ASM
– CSSD scans the headers of all ASM disks ( as indicated in ASM_DISKSTRING in GPnP profile) to identify the disk containing the voting file.  Using the pointers in ASM disk headers, the Voting Files locations on ASM Disks are accessed by CSSD and CSSD is able to complete initialization and start or join an existing cluster.
–To read the ASM spfile during the ASM instance startup, it is not necessary to open the disk group. All information necessary to access the data is stored in the device’s header. OHASD reads the header of the ASM disk containing ASM SPfile (as read from GPnP profile) and using the pointers in disk header, contents of ASM spfile are read. Thereafter, ASM instance is started.
–  With an ASM instance operating and its Diskgroups mounted, access to Clusterware’s OCR is available to CRSD.
–  OHASD starts CRSD with access to the OCR in an ASM Diskgroup.
–  Clusterware completes initialization and brings up other services under its control.
Demonstration :
In my environment, the ASM disk group DATA created with EXTERNAL  redundancy is used exclusively for ASM spfile, voting and OCR files:
- Let us read  gpnp profile to find out the location of ASM SPfile
[grid@host01 peer]$ cd /u01/app/11.2.0/grid/gpnp/host01/profiles/peer
[grid@host01 peer]$ gpnptool getpval -asm_spf
Warning: some command line parameters were defaulted. Resulting command line:
         /u01/app/11.2.0/grid/bin/gpnptool.bin getpval -asm_spf -p=profile.xml -o-
+DATA/cluster01/asmparameterfile/registry.253.793721441
– Let us find out the disks in DATA diskgroup
[grid@host01 peer]$ asmcmd lsdsk -G DATA
Path
ORCL:ASMDISK01
ORCL:ASMDISK010
ORCL:ASMDISK02
ORCL:ASMDISK03
ORCL:ASMDISK04
ORCL:ASMDISK09
– Let us find out which ASM disk maps to which partition
   Note down major/minor device numbers of the disks in DATA diskgroup
[root@host01 ~]# ls -lr /dev/oracleasm/disks/*
brw-rw---- 1 grid asmadmin 8, 26 Nov  8 09:35 /dev/oracleasm/disks/ASMDISK09
brw-rw---- 1 grid asmadmin 8, 25 Nov  8 09:35 /dev/oracleasm/disks/ASMDISK08
brw-rw---- 1 grid asmadmin 8, 24 Nov  8 09:35 /dev/oracleasm/disks/ASMDISK07
brw-rw---- 1 grid asmadmin 8, 23 Nov  8 09:35 /dev/oracleasm/disks/ASMDISK06
brw-rw---- 1 grid asmadmin 8, 22 Nov  8 09:35 /dev/oracleasm/disks/ASMDISK05
brw-rw---- 1 grid asmadmin 8, 21 Nov  8 09:35 /dev/oracleasm/disks/ASMDISK04
brw-rw---- 1 grid asmadmin 8, 19 Nov  8 09:35 /dev/oracleasm/disks/ASMDISK03
brw-rw---- 1 grid asmadmin 8, 18 Nov  8 09:35 /dev/oracleasm/disks/ASMDISK02
brw-rw---- 1 grid asmadmin 8, 31 Nov  8 09:35 /dev/oracleasm/disks/ASMDISK014
brw-rw---- 1 grid asmadmin 8, 30 Nov  8 09:35 /dev/oracleasm/disks/ASMDISK013
brw-rw---- 1 grid asmadmin 8, 29 Nov  8 09:35 /dev/oracleasm/disks/ASMDISK012
brw-rw---- 1 grid asmadmin 8, 28 Nov  8 09:35 /dev/oracleasm/disks/ASMDISK011
brw-rw---- 1 grid asmadmin 8, 27 Nov  8 09:35 /dev/oracleasm/disks/ASMDISK010
brw-rw---- 1 grid asmadmin 8, 17 Nov  8 09:35 /dev/oracleasm/disks/ASMDISK01
- Let us find out the major/minor device numbers of various disk partitions
[root@host01 ~]# ls -lr /dev/sdb*
brw-r----- 1 root disk 8, 25 Nov  8 09:35 /dev/sdb9
brw-r----- 1 root disk 8, 24 Nov  8 09:35 /dev/sdb8
brw-r----- 1 root disk 8, 23 Nov  8 09:35 /dev/sdb7
brw-r----- 1 root disk 8, 22 Nov  8 09:35 /dev/sdb6
brw-r----- 1 root disk 8, 21 Nov  8 09:35 /dev/sdb5
brw-r----- 1 root disk 8, 20 Nov  8 09:35 /dev/sdb4
brw-r----- 1 root disk 8, 19 Nov  8 09:35 /dev/sdb3
brw-r----- 1 root disk 8, 18 Nov  8 09:35 /dev/sdb2
brw-r----- 1 root disk 8, 31 Nov  8 09:35 /dev/sdb15
brw-r----- 1 root disk 8, 30 Nov  8 09:35 /dev/sdb14
brw-r----- 1 root disk 8, 29 Nov  8 09:35 /dev/sdb13
brw-r----- 1 root disk 8, 28 Nov  8 09:35 /dev/sdb12
brw-r----- 1 root disk 8, 27 Nov  8 09:35 /dev/sdb11
brw-r----- 1 root disk 8, 26 Nov  8 09:35 /dev/sdb10
brw-r----- 1 root disk 8, 17 Nov  8 09:35 /dev/sdb1
brw-r----- 1 root disk 8, 16 Nov  8 09:35 /dev/sdb
– Now we can find out the partitions mapping to various ASM disks by matching their
   major/minor device numbers
 ASMDISK01    8,17     /dev/sdb1
 ASMDISK02    8,18     /dev/sdb2
 ASMDISK03    8,19     /dev/sdb3
 ASMDISK04    8,21     /dev/sdb5
 ASMDISK09    8,26     /dev/sdb10
 ASMDISK10    8,27     /dev/sdb11
– Let’s scan the headers of those devices:
[root@host01 ~]# kfed read /dev/sdb1 | grep -E 'spf|ausize'
kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000
kfdhdb.spfile:                        0 ; 0x0f4: 0x00000000
kfdhdb.spfflg:                        0 ; 0x0f8: 0x00000000
[root@host01 ~]# kfed read /dev/sdb2 | grep -E 'spf|ausize'
kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000
kfdhdb.spfile:                        0 ; 0x0f4: 0x00000000
kfdhdb.spfflg:                        0 ; 0x0f8: 0x00000000
[root@host01 ~]# kfed read /dev/sdb3 | grep -E 'spf|ausize'
kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000
kfdhdb.spfile:                       16 ; 0x0f4: 0x00000010
kfdhdb.spfflg:                        1 ; 0x0f8: 0x00000001
In the output above, we see that
     the device /dev/sdb3 contains a copy of the ASM spfile (spfflg=1).
     The ASM spfile location starts at the disk offset of 16 (spfile=16)
Considering the allocation unit size (kfdhdb.ausize = 1M), let’s dump the ASM spfile from the device:
[root@host01 ~]#  dd if=/dev/sdb3 of=spfileASM_Copy2.ora skip=16  bs=1M count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.170611 seconds, 6.1 MB/s
[root@host01 ~]# strings spfileASM_Copy2.ora
+ASM1.__oracle_base='/u01/app/grid'#ORACLE_BASE set from in memory value
+ASM2.__oracle_base='/u01/app/grid'#ORACLE_BASE set from in memory value
+ASM3.__oracle_base='/u01/app/grid'#ORACLE_BASE set from in memory value
+ASM3.asm_diskgroups='FRA'#Manual Mount
+ASM2.asm_diskgroups='FRA'#Manual Mount
+ASM1.asm_diskgroups='FRA'#Manual Mount
*.asm_power_limit=1
*.diagnostic_dest='/u01/app/grid'
*.instance_type='asm'
*.large_pool_size=12M
*.remote_login_passwordfile='EXCLUSIVE'
a:O/
 The same technique is used to access the Clusterware voting files which are also stored in an ASM disk group. In this case, Clusterware does not need a running ASM instance to access the cluster voting files:
Let’s check the location of voting disk :
[grid@host01 peer]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   243ec3b2a3cf4fbbbfed6f20a1ef4319 (ORCL:ASMDISK01) [DATA]
Located 1 voting disk(s).
– Since above query shows that voting disk is stored on ASMDISK01 which maps to /dev/sdb1,
   we will scan the header of /dev/sdb1
[root@host01 ~]#  kfed read /dev/sdb1 | grep vf
kfdhdb.vfstart:                      96 ; 0x0ec: 0x00000060
kfdhdb.vfend:                       128 ; 0x0f0: 0x00000080
Here we can see that voting disk resides on /dev/sdb1 .
Once the voting disk is accessible and ASM is started using the spfile read as above, the rest of the resources on the node can be started after reading the Oracle Local Registry (OLR) on the node.
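To see the local (OLR-defined) resources that OHASD manages during this bootstrap, and to verify the OLR itself, the following can be used:

[root@host01 ~]# crsctl stat res -init -t
[root@host01 ~]# ocrcheck -local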

Convert NON RAC Database to RAC Database using RCONFIG

Oracle RAC 11G :

Oracle supports the following methods to convert a single-instance database to an RAC database as long as the RAC and the standalone environments are running on the same operating system and using the same oracle release:
• DBCA
• Oracle Enterprise Manager (grid control)
• RCONFIG
• Manual method
  
During the conversion, rconfig performs the following steps automatically:
• Migrating the database to ASM, if specified
• Creating RAC database instances on all specified nodes in the cluster
• Configuring the Listener and NetService entries
• Registering services with CRS
• Starting up the instances and listener on all nodes
 
In Oracle 11g R2, a single-instance database can be converted to either an administrator-managed or a policy-managed cluster database.

When you navigate through the $ORACLE_HOME/assistants/rconfig/sampleXMLS,
you will find two sample XML input files.
- ConvertToRAC_AdminManaged.xml
- ConvertToRAC_PolicyManaged.xml
 
While converting a single-instance database with filesystem storage to an RAC database with Automatic Storage Management (ASM), rconfig invokes RMAN internally to back up the database in order to convert the non-ASM storage to ASM.
Therefore, configuring the parallelism options so that RMAN uses multiple channels on the local node may make the backup run faster, which eventually reduces the conversion duration.
 
For example, you may configure the following in the RMAN  settings of orcl database on the local node.

RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 6;
CURRENT SCENARIO:
- 3 node RAC setup
- Names of nodes : Host01, Host02, Host03
- Name of single instance database with filesystem storage : orcl
- Source Oracle home : /u01/app/oracle/product/11.2.0/dbhome_1
- Target Oracle home : /u01/app/oracle/product/11.2.0/dbhome_1

OBJECTIVE
- convert orcl to a Admin managed RAC database running on two nodes host01 and host02.
- change storage to ASM with
   . Datafiles on +DATA diskgroup
   . Flash recovery area on +FRA diskgroup
 
IMPLEMENTATION:
– copy ConvertToRAC_AdminManaged.xml to another file my.xml
host01$ cd $ORACLE_HOME/assistants/rconfig/sampleXMLs
host01$cp ConvertToRAC_AdminManaged.xml my.xml
 
– Edit my.xml and make the following changes (a rough sketch of the edited file follows this list):
   . Specify current OracleHome of non-rac database for SourceDBHome
   . Specify OracleHome where the rac database should be configured. It can be same as SourceDBHome
   . Specify SID of non-rac database and credential. User with sysdba role is required to perform conversion
   . Specify the list of nodes that should have rac instances running for the Admin Managed Cluster Database. LocalNode should be the first node in this nodelist.
   . Instance Prefix tag is optional starting with 11.2. If left empty, it is derived from db_unique_name
   . Specify the type of storage to be used by rac database. Allowable values are CFS|ASM
   . Specify Database Area Location to be configured for rac database.
   . Specify Flash Recovery Area to be configured for rac database.
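For reference, the edited portion of the file looks roughly like the sketch below (the values are from this demo setup; the snippet is illustrative and abbreviated, so compare it against the sample file shipped with your release):

<n:Convert verify="YES">
  <n:SourceDBHome>/u01/app/oracle/product/11.2.0/dbhome_1</n:SourceDBHome>
  <n:TargetDBHome>/u01/app/oracle/product/11.2.0/dbhome_1</n:TargetDBHome>
  <n:SourceDBInfo SID="orcl">
    <n:Credentials>
      <n:User>sys</n:User>
      <n:Password>oracle</n:Password>
      <n:Role>sysdba</n:Role>
    </n:Credentials>
  </n:SourceDBInfo>
  <n:NodeList>
    <n:Node name="host01"/>
    <n:Node name="host02"/>
  </n:NodeList>
  <n:InstancePrefix>orcl</n:InstancePrefix>
  <n:SharedStorage type="ASM">
    <n:TargetDatabaseArea>+DATA</n:TargetDatabaseArea>
    <n:TargetFlashRecoveryArea>+FRA</n:TargetFlashRecoveryArea>
  </n:SharedStorage>
</n:Convert>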


Note:
The Convert element's verify attribute takes one of three options - YES/NO/ONLY (ONLY runs just the prerequisite checks without converting).
The source database must be running during the conversion.

[oracle@rac1 sampleXMLs]$ rconfig ctora.xml
<?xml version="1.0" ?>
<RConfig version="1.1" >
<ConvertToRAC>
    <Convert>
      <Response>
        <Result code="0" >
          Operation Succeeded
        </Result>
      </Response>
      <ReturnValue type="object">
There is no return value for this step     </ReturnValue>
    </Convert>
  </ConvertToRAC></RConfig>

– Run rconfig to convert orcl from single instance database to 2 instance RAC database
host01$rconfig my.xml
 
– Check the log file for rconfig while conversion is going on
oracle@host01$ls -lrt  $ORACLE_BASE/cfgtoollogs/rconfig/*.log
$tail -f
 
– check that the database has been converted successfully
host01$srvctl status database -d orcl
Instance orcl1 is running on node host01
Instance orcl2 is running on node host02
 
– Note that rconfig adds the password file to all the nodes, but the tnsnames.ora entry needs to be modified (to use the SCAN name instead of the host IP) on the local node and added to the rest of the nodes.
 
– For all other nodes, copy the entry for the database (orcl) from    tnsnames.ora on local node to tnsnames.ora on remote node(s).
 
– Following is the entry I modified on the local node and copied to rest of the nodes :
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = cluster01-scan.cluster01.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl)
    )
  )
 
– check that database can be connected remotely from remote node.
host02$sqlplus system/oracle@orcl
 
– check that datafiles have converted to ASM
SQL>select name from v$datafile;
NAME
——————————————————————————–
+DATA/orcl/datafile/system.326.794838279
+DATA/orcl/datafile/sysaux.325.794838349
+DATA/orcl/datafile/undotbs1.305.794838405
+DATA/orcl/datafile/users.342.794838413
+DATA/orcl/datafile/undotbs2.348.794838
——————————————————————————




Saturday 16 July 2016

Friday 15 July 2016

Instance eviction

Today one of the instances rebooted and the logs extract is below:

Thu May 07 12:12:51 2009
LMON (ospid: 761896) waits for event 'control file sequential read' for 83 secs.
Thu May 07 12:13:21 2009
LMON (ospid: 761896) waits for event 'control file sequential read' for 113 secs.
ERROR: LMON is not healthy and has no heartbeat.
ERROR: LMD0 (ospid: 405742) is terminating the instance.
LMD0 (ospid: 405742): terminating the instance due to error 482

I don't understand why it rebooted, though the logs above seem to suggest the instance was unable to read the controlfile.

My question is why didn't the whole node reboot?

So, what happened is: the DB checks ("pings") the control files and expects an answer from the control files within a certain amount of time. If the control file cannot be accessed within this time, then the instance assumes a problem in the communication with the storage / control file and evicts the instance in order to prevent data corruption.

Restore OCR and Voting Disk in case of corruption

Automatic backups: Oracle Clusterware automatically creates OCR backups every four hours. At any one time, Oracle Database always retains the last three backup copies of OCR. The CRSD process that creates the backups also creates and retains an OCR backup for each full day and at the end of each week. You cannot customize the backup frequencies or the number of files that Oracle Database retains.
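You can list the available automatic backups (the "Listing Backup Files" step referred to in the procedure below) with:

# ocrconfig -showbackup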

Procedure to restore OCR on Linux or UNIX systems:

1.  List the nodes in your cluster by running the following command on one node:
$ olsnodes 

2.  Stop Oracle Clusterware by running the following command as root on all of the nodes:
# crsctl stop crs 

3.  If the preceding command returns any error due to OCR corruption, stop Oracle Clusterware by running the following command as root on all of the nodes:
# crsctl stop crs -f 

4.  If you are restoring OCR to a cluster file system or network file system, then run the following command as root to restore OCR with an OCR backup that you can identify in "Listing Backup Files".
# cd $ORACLE_HOME/cdata/rac-scan
# ocrconfig -restore file_name 

5.  (If you restored OCR in step 4, skip ahead to step 8.) Otherwise, start the Oracle Clusterware stack on one node in exclusive mode by running the following command as root:
# crsctl start crs -excl 

Ignore any errors that display.

6.  Check whether crsd is running. If it is, stop it by running the following command as root:
# crsctl stop resource ora.crsd -init 

Caution:
Do not use the -init flag with any other command.

7.  Restore OCR with an OCR backup that you can identify in "Listing Backup Files" by running the following command as root:
# ocrconfig -restore file_name 

Notes:
If the original OCR location does not exist, then you must create an empty (0 byte) OCR location before you run the ocrconfig -restore command.

8.  Ensure that the OCR devices that you specify in the OCR configuration exist and that these OCR devices are valid.
If you configured OCR in an Oracle ASM disk group, then ensure that the Oracle ASM disk group exists and is mounted.

Verify the integrity of OCR:
# ocrcheck 

9.  Stop Oracle Clusterware on the node where it is running in exclusive mode:
# crsctl stop crs -f 

10.  Begin to start Oracle Clusterware by running the following command as root on all of the nodes:
# crsctl start crs 

11.  Verify the OCR integrity of all of the cluster nodes that are configured as part of your cluster by running the following CVU command:
$ cluvfy comp ocr -n all -verbose

Restore voting disk in case all voting disks are corrupted.
Voting disk will be automatically recovered using latest available backup of OCR.

Current scenario:
The only copy of the voting disk is present in the test diskgroup on disk ASMDISK010.
We will corrupt ASMDISK010 so that we lose the only copy of the voting disk.
We will restore the voting disk to another diskgroup using the OCR.

Currently, we have 1 voting disk. Let us corrupt it and check if clusterware still continues to run.
FIND OUT LOCATION OF VOTEDISK
[grid@host01 cssd]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   00ce3c95c6534f44bfffa645a3430bc3 (ORCL:ASMDISK010) [TEST]

FIND OUT THE NO. OF DISKS IN test DG (CONTAINING VOTEDISK)
ASMCMD> lsdsk -G test
Path
ORCL:ASMDISK010

Let us corrupt ASMDISK010
— bs = blocksize = 4096
— count = # of blocks overwritten = 1000000 (~1M)
– total no. of bytes corrupted = 4096 * 1000000
                                 (~4096M = size of one partition)

#dd if=/dev/zero of=/dev/oracleasm/disks/ASMDISK010 bs=4096 count=1000000

Here, I was expecting clusterware to stop as the only voting disk was not available, but surprisingly clusterware kept running.
Finally, I stopped clusterware and tried to restart it. It was not able to restart.
Reboot all the nodes and note that clusterware does not start as the voting disk is not accessible.
#crsctl stat res -t

– Now since voting disk can’t be restored back to test diskgroup as disk in test has been corrupted,
   we will create another diskgroup votedg where we will restore voting disk.

RECOVER VOTING DISK
– To move voting disk to votedg diskgroup, ASM instance should be up and for ASM
   instance to be up, CRS should be up. Hence we will
  1.      stop crs on all the nodes
  2.      start crs in exclusive mode on one of the nodes (host01)
  3.      start asm instance on host01 using pfile (since spfile of ASM instance is on ASM)
  4.      create a new diskgroup votedg
  5.      move voting disk to votedg  diskgroup
  6.      stop crs on host01(was running in exclusive mode)
  7.      restart crs on host01
  8.      start crs on rest of the nodes
  9.      start cluster on all the nodes

– IMPLEMENTATION –

stop crs on all the nodes(if it does not stop, kill ohasd process and retry)
root@hostn# crsctl stop crs -f

start crs in exclusive mode on one of the nodes (host01)
root@host01# crsctl start crs -excl

start asm instance on host01 using pfile
root@host01# ps -ef | grep +ASM1
if any ASM instance is running, kill it or shut it down:
grid@host01$ sqlplus / as sysasm
sql> shut abort

grid@host01$ vi  /u01/app/oracle/init+ASM1.ora
INSTANCE_TYPE=ASM
asm_diskstring=/dev/oracleasm/disks/*

grid@host01$ chown grid:oinstall /u01/app/oracle/init+ASM1.ora

grid@host01$ sqlplus / as sysasm
 SQL>startup pfile='/u01/app/oracle/init+ASM1.ora';

-- create a new diskgroup votedg
SQL> create diskgroup votedg normal redundancy
     failgroup fg1 disk '/dev/oracleasm/disks/VOTE1'
     failgroup fg2 disk '/dev/oracleasm/disks/VOTE2'
     failgroup fg3 disk '/dev/oracleasm/disks/VOTE3'
     attribute 'compatible.asm'='11.2';

Make sure the compatibility parameter is set to the version of Grid software you’re using. You can change it using the following command:
SQL> alter diskgroup votedg set attribute 'compatible.asm'='11.2';

Restore voting disk to the votedg diskgroup.
  The voting disk is automatically recovered using the latest available backup of OCR.
root@host01#crsctl replace votedisk +votedg

root@host01# ocrcheck
root@host01# crsctl stat res -t

stop crs on host01(was running in exclusive mode)
root@host01#crsctl stop crs -f

restart crs on host01
root@host01#crsctl start crs

start crs on rest of the nodes (if it does not start, kill ohasd process and retry)
root@host02#crsctl start crs
root@host03#crsctl start crs

start cluster on all the nodes and check that it is running
root@host01#crsctl start cluster -all
            crsctl stat res -t

Recover voting disk in case we lose 2 out of 3 copies of voting disk.
In this case, voting disk will be recovered using surviving copy of voting disk.

Current scenario:
3 copies of the voting disk are present in the test diskgroup on disks ASMDISK010, ASMDISK011, ASMDISK012.
We will corrupt two disks, ASMDISK010 and ASMDISK011, so that ASMDISK012 still has a copy of the voting disk. We will restore the voting disk to another diskgroup using the only valid copy we have.

Currently, we have 3 voting disks. At least 2 should be accessible for the clusterware to work. Let us corrupt one of the voting disks and check if clusterware still continues to run.

FIND OUT LOCATION OF VOTEDISK
[grid@host01 cssd]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   00ce3c95c6534f44bfffa645a3430bc3 (ORCL:ASMDISK012) [TEST]
 2. ONLINE   a3751063aec14f8ebfe8fb89fccf45ff (ORCL:ASMDISK010) [TEST]
 3. ONLINE   0fce89ac35834f99bff7b04ccaaa8006 (ORCL:ASMDISK011) [TEST]
Located 3 voting disk(s).

FIND OUT THE NO. OF DISKS IN test DG (CONTAINING VOTEDISK)
ASMCMD> lsdsk -G test
Path
ORCL:ASMDISK010
ORCL:ASMDISK011
ORCL:ASMDISK012

-- Let us corrupt ASMDISK010
– bs = blocksize = 4096
– count = # of blocks overwritten = 1000000 (~1M)
– total no. of bytes corrupted = 4096 * 1000000
                                 (~4096M = size of one partition)

#dd if=/dev/zero of=/dev/oracleasm/disks/ASMDISK010 bs=4096 count=1000000

CHECK THAT C/W KEEPS RUNNING AS 2 VOTING DISKS (MORE THAN HALF OF
  VOTING DISKS) STILL AVAILABLE
#crsctl stat res -t

-- Now let us corrupt ASMDISK011
– bs = blocksize = 4096
– count = # of blocks overwritten = 1000000 (~1M)
– total no. of bytes corrupted = 4096 * 1000000
                                 (~4096M = size of one partition)
#dd if=/dev/zero of=/dev/oracleasm/disks/ASMDISK011 bs=4096 count=1000000

Here, I was expecting clusterware to stop as only 1 voting disk (less than half of the total of 3) was available, but surprisingly clusterware kept running. I even waited for quite some time but to no avail. I would be glad if someone can give more input on this.
Finally, I stopped clusterware and tried to restart it. It was not able to restart.

CHECK THAT C/W IS NOT RUNNING
#crsctl stat res -t

Now that we have one copy of the voting disk on one of the disks in the test diskgroup, we can use that copy to get the voting disk back. Since the voting disk can't be restored back to the test diskgroup, as the disks in test have been corrupted, we will restore the voting disk to the data diskgroup.

RECOVER VOTING DISK –
– To move voting disk to data diskgroup, ASM instance should be up and for ASM instance to be up, CRS should be up. Hence we will
  1. stop crs on all the nodes
  2. start crs in exclusive mode on one of the nodes (host01)
  3. start asm instance on host01 using pfile (since spfile of ASM instance is on ASM)
  4. move voting disk to data diskgroup.
  5. drop test diskgroup (it will allow as it does not have voting disk any more)
  6. stop crs on host01(was running in exclusive mode)
  7. restart crs on host01
  8. start crs on rest of the nodes
  9. start cluster on all the nodes

-- IMPLEMENTATION –

stop crs on all the nodes(if it does not stop, kill ohasd process and retry)
root@hostn# crsctl stop crs -f

start crs in exclusive mode on one of the nodes (host01)
root@host01# crsctl start crs -excl

start asm instance on host01 using pfile
root@host01# ps -ef | grep +ASM1
if any kill or shut it down
grid@host01$ sqlplus / as sysasm
sql> shut abort

grid@host01$ vi  /u01/app/oracle/init+ASM1.ora
INSTANCE_TYPE=ASM
asm_diskstring=/dev/oracleasm/disks/*


grid@host01$ chown grid:oinstall /u01/app/oracle/init+ASM1.ora

grid@host01$ sqlplus / as sysasm
 SQL>startup pfile='/u01/app/oracle/init+ASM1.ora';

-- we will restore the voting disk to the existing data diskgroup

Make sure the compatibility parameter is set to the version of Grid software you’re using. You can change it using the following command:
SQL> alter diskgroup data set attribute 'compatible.asm'='11.2';

Check that the data diskgroup is mounted on host01. If not, mount it.
ASMCMD> lsdg
ASMCMD> mount data

Move the voting disk to the data diskgroup. The voting disk will be automatically recovered using the surviving copy of the voting disk.
root@host01#crsctl replace votedisk +data

root@host01# ocrcheck
root@host01# crsctl stat res -t

drop test diskgroup (it will allow as it does not have voting disk)
SQL>drop diskgroup test force including contents;

stop crs on host01(was running in exclusive mode)
root@host01#crsctl stop crs -f

restart crs on host01
root@host01#crsctl start crs

start crs on rest of the nodes (if it does not start, kill ohasd process and retry)
root@host02#crsctl start crs
root@host03#crsctl start crs

start cluster on all the nodes and check that it is running
root@host01#crsctl start cluster -all                                       
root@host01#crsctl stat res -t

Tuesday 12 July 2016

Patching and Upgrades, RAC One Node, and Clusterware


For Release 2 only: Learn how to enable a single name for the cluster, enable High Availability for a single instance database, place OCR and voting disks on ASM, and explore some miscellaneous HA-related improvements.

In Oracle Database 11g Release 2, there are three major changes in the installation process. First, Oracle Universal Installer’s familiar screens have been replaced by a new interface.
[Screenshot: the new Oracle Universal Installer interface]
Note the list on the left hand side that shows the steps. The right side shows the details of the setup. This is a major shift from earlier releases that showed only a progress bar. The left side not only shows you which sections have been completed but also allows you to go back to a specific section at will to correct something or make a modification.
The second important difference you will notice is in the area of database patchsets. Traditionally, Oracle Database patchsets were just for bugfixes; at best, only minor new functionality was introduced. With this release of Oracle, that tradition is broken: The first patchset of Oracle Database 11g Release 2, 11.2.0.2, introduced some significant new functionality.

Finally, the third change also involves patchsets. Traditionally, the patchsets (e.g. Oracle 10.2.0.4 or 9.2.0.8) were released as patches; you had to apply the patches, typically a file named like p8202632_10205_LINUX.zip, to an existing installation. With this release, however, patchsets are now standalone: for example, the 11.2.0.2 patchset is a completely new installation -- it contains seven files, approximately 4GB in all, and they are all you need to create a 11.2 database.
This begs a logical question: what will be the case when you upgrade from 11.2.0.1 to 11.2.0.2? You would install the 11.2.0.2 software in a completely different home, shut down the main database, change the ORACLE_HOME variable to the new Oracle Home, and start the upgrade process there.
This approach is very useful when patching Clusterware. You would install the Clusterware in a different Oracle Home when the old one is still up and running. Then you would just shut down the cluster services on the old home and bring it up on the new home. In a two-node RAC cluster, this can allow you to do a zero or near-zero downtime upgrade.
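In outline, the database-home switch described above looks something like this (the new home path is just an example):

$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0.2/db_1
$ export PATH=$ORACLE_HOME/bin:$PATH
$ dbua        # or run the manual upgrade (catupgrd.sql) from the new home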

Upgrade Options

Speaking of upgrades, you can upgrade to Oracle Database 11g Release 2 from the following releases only. The source database must be at least in the following patchset levels.
  • If in Oracle 9i Release 2, you should be at patchset 9.2.0.8 or higher
  • If in Oracle Database 10g Release 1, you should be at patchset 10.1.0.5 or higher
  • If in Oracle Database 10g Release 2, you should be at patchset 10.2.0.2 or higher
  • If in Oracle Database 11g Release 1, you should be at patchset 11.1.0.6 or higher
If you are on an older database version or patchset, you should first upgrade to bring it up to the required level before upgrading. For instance, if your database is at 10.1.0.4, you have to apply the 10.1.0.5 patchset before attempting the 11.2 upgrade. If you have an older version, e.g. Oracle8i, then you have to perform a two-step upgrade: upgrade to Oracle 10.2.0.4 and then to 11.2.

Upgrade Tips

There are several issues that you should be aware of before upgrading to make your upgrade experience better. They are described clearly in the Upgrade Guide. Here are some tips. (Please note, this is by no means a comprehensive list; I just described the ones that stand out.)

Dictionary Statistics

Oracle Database 11.2 requires optimizer statistics on dictionary tables. If the source database didn't have such stats, which would most likely be the case, the stats will be collected during the upgrade. Needless to say, this may make the upgrade process extremely lengthy. Since the database will be down for that duration, it may be highly undesirable.
To reduce that time you should compute statistics on dictionary objects prior to the upgrade. As SYS, issue the following SQL:
begin
   dbms_stats.gather_dictionary_stats;
end;
/

The above does not require downtime. In Oracle9i, this procedure does not exist, so you should execute dbms_stats.gather_schema_stats instead.

Invalid Objects

This tip applies to any upgrade, not just to 11.2. During the upgrade, the invalid objects will be compiled. Depending on the number of such objects, the time needed could be lengthy. Therefore it is much better to compile all invalid objects prior to the upgrade. For the objects that can't be compiled due to dependency failures, you should drop them. You can take a backup of those objects using Data Pump Export.
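For example, to list the invalid objects and recompile them beforehand (utlrp.sql is the standard recompilation script shipped under $ORACLE_HOME/rdbms/admin):

SQL> SELECT owner, object_name, object_type
       FROM dba_objects
      WHERE status = 'INVALID';

SQL> @?/rdbms/admin/utlrp.sql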

CONNECT Role

Remember the role CONNECT? In earlier releases the role had privileges in addition to just connecting; it also had create table, create synonym, etc. In Oracle Database 11.2, this role has only one privilege: CREATE SESSION. Therefore, users who have only this role might not be able to do some of what they used to, due to the missing privileges; e.g. if a user needs to create a table, that will no longer be possible.
Therefore, you should identify the users with the CONNECT role and grant them the needed privileges. It can be done after the upgrade, but preferably should be done prior to the upgrade.
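A quick way to identify those users:

SQL> SELECT grantee
       FROM dba_role_privs
      WHERE granted_role = 'CONNECT';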

Raw Device Support

This is not an issue but I think I should mention it here to clear some confusion. Oracle Database 11.2 completely desupports raw devices. Read: desupported; not just deprecated. So does that mean you have to convert the datafiles on raw to either filesystem or ASM prior to the upgrade?
No; not at all. The desupport of raw devices in 11.2 means you will not be able to create new databases on raw; you must use ASM or filesystem. However when you upgrade an existing database on raw, it will be possible to continue on the raw devices. You will also be able to create new datafiles on the raw devices as well.
In the next major release, Oracle may completely desupport raw for upgrades as well as new databases. So, you should move away from raw as soon as possible.

Grid Infrastructure Home

One of the most important changes in this release of Oracle is the decoupling of ASM and RDBMS codes, as you learned in the installment on ASM. To recap, in the earlier releases of Oracle Database, ASM was not separate software but a part of the RDBMS code itself. You didn’t have to install ASM separately (but as a best practice, you would have created a separate home for ASM). In this release, ASM is no longer part of the RDBMS code. It is packaged into a new component called Grid Infrastructure (GI), which includes Clusterware as well. Even if you don’t use RAC (and therefore no cluster), you have to install GI if you plan to use ASM.
So, you need to perform two installations in two different homes: GI Home and RDBMS Home.

On the downloads page, note the different software downloads available:
  • Oracle Database 11g Release 2 Grid Infrastructure (11.2.0.1.0) for Linux x86
  • Oracle Database 11g Release 2 (11.2.0.1.0) for Linux x86

You should download both of these items (if you plan on using ASM and even if you do not plan to use RAC). First, you need to install the Grid Infrastructure, which configures ASM and starts it. Next you install the Database software for the RDBMS, which will use the ASM diskgroups created earlier.

Providing ASM services is not the only function of Grid Infrastructure. The next section shows another important use.
For better security, you should use different userids for Database and GI. For instance, you can use oragrid for GI and oradb for the Database software. Doing so allows you to give control of the Grid Infrastructure to a group different from the DBAs, such as System Admins. You don’t have to separate the duties like that but setting up under different userids like that allows you to do so if desired in the future.

Oracle Restart

What happens when the PMON process of Oracle is killed? The instance dies, of course. The same effect applies to the death of any other critical process – SMON, LGWR and so on. Sometimes the issues are transient in nature and may be resolved by themselves. In such cases, if you restart the instance, it will come up well; but you must handle the process of detecting the failure and starting the process yourself.
Previously, in the case of Oracle RAC, the CRS took care of the detection and restarts. If you didn’t use RAC, then this was not an option for you. However, in this version of Oracle, you do have that ability even if you do not use RAC. The functionality – known as Oracle Restart – is available in Grid Infrastructure. An agent checks the availability of important components such as database, listener, ASM, etc. and brings them up automatically if they are down. The functionality is available out of the box and does not need additional programming beyond basic configuration. The component that checks the availability and restarts the failed components is called HAS (High Availability Service).
Here is how you check the availability of HAS itself (from the Grid Infrastructure home):
$ crsctl check has
CRS-4638: Oracle High Availability Services is online

The service has to be online to be effective. You can check the versions by the following commands:
$ crsctl query has releaseversion
Oracle High Availability Services release version on the local node is [11.2.0.1.0]
$ crsctl query has softwareversion
Oracle High Availability Services version on the local node is [11.2.0.1.0]

The service can be set to start automatically. To check if it has been configured that way:
$ crsctl config has
CRS-4622: Oracle High Availability Services autostart is enabled.

If it is not set to autostart, it can be made so by:
$ crsctl enable has
CRS-4622: Oracle High Availability Services autostart is enabled.

Replacing "enable" by "disable" will remove the autostart property of HAS. If not started, it can be started by:
$ crsctl start has

Similarly "stop" clause stops the HAS. The Grid Infrastructure installation places an executable called ohasd in the autostart directory of the server, e.g. in /etc/init.d in case of Linux. This executable provides the HAS service, i.e. it is the HAS daemon. In Linux, you can confirm it by checking for the processes:
$ ps -aef|grep has
root      5185     1  0 Aug10 ?        00:00:00 /bin/sh /etc/init.d/init.ohasd run
oracle    5297     1  0 Aug10 ?        00:05:24 /opt/oracle/product/11.2.0/grid/bin/ohasd.bin reboot

Once HAS is in place, you can register the components that should be monitored by HAS and brought up when needed. This is done with the srvctl command under the ORACLE_HOME/bin of the Grid Infrastructure. You may be familiar with the command from RAC; in Oracle Database 11.2 it is used for single-instance databases as well.
The first component you may want to add is the Oracle Database itself. Here is how you register the database:
$ srvctl add database -d D112D2 -o /opt/oracle/product/11.2.0/db1 -y AUTOMATIC -r PRIMARY -s open 

The above command registers a database named D112D2 running from the Oracle Home /opt/oracle/product/11.2.0/db1. The "-y" option tells HAS to restart the database automatically in case of failure. The "-r" option specifies the role, which is PRIMARY in this case (assuming a Data Guard configuration). The "-s" option tells HAS to bring the database all the way to the open state when restarting.
Once configured, you can start the database using the command shown below:
$ srvctl start database -d D112D2

The command starts the components in the proper order, i.e. the ASM instance must start and the diskgroups must be mounted before the database can start. This dependency handling occurs only with SRVCTL; if you start the database with SQL*Plus, these dependencies are not checked.
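Oracle Restart knows about this ordering because the diskgroup dependency is recorded in the database configuration. If you ever need to set that dependency explicitly, a hedged sketch (assuming the database uses only the DATA diskgroup) is:
$ srvctl modify database -d D112D2 -a "DATA"
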
To check the status of the database:
$ srvctl status database -d D112D2
Database is running.

An option "-v" shows the services running in the database:
$ srvctl status database -d D112D2 -v
Database D112D2 is running with online services APP

To get details on the database, use the "config" option:
$ srvctl config database -d D112D2 -a
Database unique name: D112D2
Database name: D112D2
Oracle home: /opt/oracle/product/11.2.0/db1
Oracle user: oracle
Spfile: +DATA/D112D2/spfileD112D2.ora
Domain: 
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Disk Groups: DATA
Services: APP
Database is enabled
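
Similarly, a listener can be placed under the control of Oracle Restart. Here is a hedged sketch of registering one (the listener name and port are illustrative, matching the configuration shown next):
$ srvctl add listener -l LISTENER1 -p 1522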

Once registered, you can check the listener's configuration:
$ srvctl config listener -l LISTENER1
Name: LISTENER1
Home: /opt/oracle/product/11.2.0/grid
End points: TCP:1522

Registering the listener creates it but does not start it. Check the status:
$ srvctl status listener -l LISTENER1
Listener LISTENER1 is enabled
Listener LISTENER1 is not running

As expected, the listener is not running. To start it:
$ srvctl start listener -l LISTENER1

You can check the configuration of ASM in the same manner.
$ srvctl config asm -a
ASM home: /opt/oracle/product/11.2.0/grid
ASM listener: LISTENER
Spfile: +DATA/asm/asmparameterfile/registry.253.720043719
ASM diskgroup discovery string: /dev/sdb*
ASM is enabled.

To check the status of ASM:
$ srvctl status asm -a
ASM is running on oradba2
ASM is enabled.

The configuration can be done for diskgroups and services as well. Here is how we can create a service called APP (remember, we are doing this on a single instance, not RAC, where these commands have long been used).
$ srvctl add service -d D112D2 -s APP
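
The command above creates the service with default attributes. As a hedged sketch, TAF-related attributes could also be supplied at creation time (the option values here are illustrative, not taken from the original setup):
$ srvctl add service -d D112D2 -s APP -P BASIC -e SELECT -m BASIC -w 5 -z 12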

After the creation, you can check the various properties of the service:
$ srvctl config service -d D112D2 -a
Service name: APP
Service is enabled
Cardinality: SINGLETON
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE

The service has been created; but not started yet. You can check the status of the service and start it:
$ srvctl status service -d D112D2 -s APP
Service APP is not running.
$ srvctl start service -d D112D2 -s APP
$ srvctl status service -d D112D2 -s APP
Service APP is running

You can use the same commands for ASM diskgroups, which are also considered resources.
$ srvctl start diskgroup -g <dg_name>
$ srvctl stop diskgroup -g <dg_name> [-f] 

(The "-f" option is to force stop the diskgroup, even if some database is using that diskgroup and is currently up)
$ srvctl status diskgroup -g <dg_name> [-a] 

(The "-a" option is for showing all status)
$ srvctl enable diskgroup -g <dg_name>
$ srvctl disable diskgroup -g <dg_name>

To modify the configuration, use the "modify" clause:
$ srvctl modify database -d D112D2 -z

In this example, the "-z" option removes the database's dependency on its disk groups.
To remove the database from Oracle Restart, you can use
$ srvctl remove database -d D112D2 -v 
Remove the database D112D2? (y/[n]) y
Successfully removed database and its dependent services.

This merely removes the database from HAS services. The database itself is not dropped.
You can get a lot of information on the resources managed by HAS when they are running. Here is the command to do that; the "-v" option produces verbose output.
$ crsctl status resource -v
NAME=ora.DATA.dg
TYPE=ora.diskgroup.type
LAST_SERVER=oradba2
STATE=ONLINE on oradba2
TARGET=ONLINE
CARDINALITY_ID=ONLINE
CREATION_SEED=6
CURRENT_RCOUNT=0
FAILURE_COUNT=0
FAILURE_HISTORY=
ID=ora.DATA.dg oradba2 1
INCARNATION=1
LAST_RESTART=1281465040
STATE_DETAILS=
… output truncated …
NAME=ora.diskmon
TYPE=ora.diskmon.type
LAST_SERVER=oradba2
STATE=ONLINE on oradba2
TARGET=ONLINE
CARDINALITY_ID=1
CREATION_SEED=1
CURRENT_RCOUNT=0
FAILURE_COUNT=0
FAILURE_HISTORY=
ID=ora.diskmon 1 1
INCARNATION=1
LAST_RESTART=1281462735
STATE_DETAILS=


While there is some detailed information, it’s a little difficult to read. You can get a much more succinct report by the following command: 

$ crsctl status resource -t
--------------------------------------------------------------------------------
NAME              TARGET  STATE        SERVER        STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg       ONLINE  ONLINE       oradba2                                      
ora.DG1.dg        ONLINE  ONLINE       oradba2                                      
ora.DG2.dg        OFFLINE OFFLINE      oradba2                                      
ora.LISTENER.lsnr ONLINE  ONLINE       oradba2                                      
ora.asm           ONLINE  ONLINE       oradba2       Started             
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd 1        ONLINE  ONLINE       oradba2                                      
ora.d112d2.db     ONLINE  ONLINE       oradba2       Open                
ora.diskmon       ONLINE  ONLINE       oradba2             

What if you want to get a detailed output on a specific resource only? You can do that by naming the resource as a parameter. (Please note: you have to give the complete resource name as it appears in the crsctl output. For instance, the resource APP is not the full name. The full name is ora.d112d2.app.svc.)
$ crsctl status resource ora.d112d2.app.svc
NAME=ora.d112d2.app.svc
TYPE=ora.service.type
TARGET=ONLINE
STATE=ONLINE on oradba2

The "-v" option makes the output more verbose.
$ crsctl status resource ora.d112d2.app.svc -v
NAME=ora.d112d2.app.svc
TYPE=ora.service.type
LAST_SERVER=oradba2
STATE=ONLINE on oradba2
TARGET=ONLINE
CARDINALITY_ID=1
CREATION_SEED=17
CURRENT_RCOUNT=0
FAILURE_COUNT=0
FAILURE_HISTORY=
ID=ora.d112d2.app.svc 1 1
INCARNATION=1
LAST_RESTART=1293305825
STATE_DETAILS=

You can start all the resources with the following command:
$ crsctl start resource -all
CRS-5702: Resource 'ora.DG1.dg' is already running on 'oradba2'
CRS-5702: Resource 'ora.LISTENER.lsnr' is already running on 'oradba2'
CRS-5702: Resource 'ora.asm' is already running on 'oradba2'
CRS-5702: Resource 'ora.cssd' is already running on 'oradba2'
CRS-5702: Resource 'ora.d112d2.app.svc' is already running on 'oradba2'
CRS-5702: Resource 'ora.d112d2.db' is already running on 'oradba2'
CRS-5702: Resource 'ora.diskmon' is already running on 'oradba2'
CRS-2672: Attempting to start 'ora.DG2.dg' on 'oradba2'
CRS-2676: Start of 'ora.DG2.dg' on 'oradba2' succeeded
CRS-4000: Command Start failed, or completed with errors.

This is a useful command to start all the resources at once, but if one of them fails, it shows the "completed with errors" output at the end. If autostart is enabled for all the components, this is not really necessary. In some cases, particularly when you are trying to diagnose issues in the environment, you may want to disable autostart and start the resources manually.
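A hedged sketch of commands useful for that kind of manual control (resource and database names as used earlier):
$ srvctl modify database -d D112D2 -y MANUAL
(changes the management policy so HAS no longer autostarts the database)
$ crsctl stop resource ora.d112d2.app.svc
$ crsctl start resource ora.d112d2.app.svc
(stops and restarts a single named resource while you investigate)
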
To stop all resources running from a specific Oracle Home, you can issue a single command:
$ srvctl stop home -o /opt/oracle/product/11.2.0/db1 -s mystate.txt

The file mystate.txt is called a "state file", which records the state of the HAS components as they were at the time of the shutdown. Here is what the file looks like:
$ cat mystate.txt
svc-APP
db-d112d2

Similarly, to start all resources that were running from a specific Oracle Home, you can issue a single command:
$ srvctl start home -o /opt/oracle/product/11.2.0/db1 -s mystate.txt

The state file is needed to start the resources running at the time of the shutdown.

RAC One Node

Are you familiar with the concept of Active/Passive Clusters such as Veritas Storage Foundation, HP ServiceGuard and IBM HACMP? If not, in summary, they watch for failures on the "active" server where the database runs. When that server fails, the cluster mounts the disks on the passive server and starts the database instance. It also moves the IP address (known as Virtual IP) to the passive server, so the clients do not actually have to change anything. The cluster experiences a brief hiccup but continues performing. (Despite the name, these technologies are not "clusters" strictly speaking, at least not as Oracle defines them.)
RAC One Node is Oracle’s answer to this category. In this configuration, the Oracle Clusterware creates a cluster that makes storage visible across multiple nodes via ASM. The concept can be extended to create a special type of RAC database that has only one instance running on one node. If the node fails, another instance can be brought up on a different node in the cluster to continue processing. In some ways, RAC One Node is a full RAC database but since only one instance is up, there is no cross-instance communication and therefore you may not suffer from the resultant performance issues.
But doesn't that mean that the other "node", being passive, is a wasted resource? Sure it is – for that database. But you can create the instance for a different database on that passive server, which makes it the active server for that database. By using a number of databases in that manner, you can create a semblance of load balancing while making all of them more available. The group of servers used for the active/passive nodes for the databases is known as a "server pool".
Oracle Clusterware has specific configurations for creating RAC One Node databases. Here is an example that creates a database known as RAC1.
$ srvctl add database -c RACONENODE -e "dantstpdb01,dantstpdb02" -i RAC12 -d RAC1 -o /Proltst/oradb/db1

The "-c" option specifies the database type, which in this case is a RAC One Node. The server pool comprises two servers - dantstpdb01,dantstpdb02 – specified by "-e" option. The instance name is denoted by the "-I" option and the Oracle Home is specified by "-o".
When you want to move the database to a different node, you would issue:
$ srvctl relocate database -d RAC1 -n dantstpdb01

If you want to convert a regular database to RAC One Node database, you should first convert it to a RAC database on only one node. Then use the convert command:
$ srvctl convert database -d RAC1 -c RACONENODE

If you change your mind, you can convert it back to a regular RAC database (which will initially still run on a single node):
$ srvctl convert database -d RAC1 -c RAC

If you are scratching your head about the utility of RAC One Node, and wondering why regular Oracle RAC wouldn't provide the same benefits, you should know that there are some differences between them.
  • RAC One Node has a lower price point
  • RAC One Node does not have the full complexity of RAC, and can be supported by less experienced DBAs
  • RAC One Node has its own set of commands, which may insulate some of the nuances of RAC administration from DBAs
  • Being on one node alone, it does not incur the performance overhead of high-frequency internode communication
However, RAC One Node is hardly a panacea. It is definitely not a DR solution, for which Data Guard is a much better alternative. Since only one node of the cluster is used, load balancing will not be possible for the same database.

SCAN  Listeners

Before starting on this feature, let me touch upon a potential issue with RAC configuration. Suppose you have two nodes in your RAC cluster, with VIP host names as node1-vip and node2-vip. The clients will need to load balance between the two nodes. To do so, you may have created a TNSNAMES.ORA file entry as shown below:
APP=
  (DESCRIPTION=
    (FAILOVER=on)
    (LOAD_BALANCE=on)
    (ADDRESS_LIST=
      (ADDRESS=
        (PROTOCOL=TCP)
        (HOST=node1-vip)
        (PORT=1521)
      )
      (ADDRESS=
        (PROTOCOL=TCP)
        (HOST=node2-vip)
        (PORT=1521)
      )
    )
    (CONNECT_DATA=
      (FAILOVER_MODE=
        (TYPE=select)
        (METHOD=basic)
        (RETRIES=120)
      )
      (SERVER=dedicated)
      (SERVICE_NAME=APP)
    )
  )

Suppose you decide to add in a third node into the cluster, node3-vip. Even if you define the service APP on that third node, the client will not be able to connect to the node simply because the node3-vip does not actually get mentioned in the TNS entry. What are your options at this stage?
Traditionally, the only option was to update the TNS entry at each client to add the third node's information. With a handful of clients this may not be a huge deal, but consider a scenario where you have thousands of clients.
To centralize the entries, you might introduce Oracle Internet Directory or something similar; some organizations go to the extent of storing the TNSNAMES.ORA at a single location on an NFS mountpoint. All these options are difficult, if not impossible, to execute, especially in large organizations.
To address this issue, 11g Release 2 introduces a concept called Single Client Access Name (SCAN). It's just one host name, say, dantstp-scan, and the listeners on the cluster nodes listen for it. This hostname is mapped to three IP addresses (it could be just one, but preferably three to avoid a single point of failure). After you define it in your DNS, the name resolves to three IP addresses as shown below:
C:\>nslookup dantstp-scan
Server:  stcadcpdc04.proligence.com
Address:  10.20.223.108
Name:    dantstp-scan.proligence.com
Addresses:  192.168.76.62, 192.168.76.63, 192.168.76.64

When you ping the name, it should resolve to one of the three IPs in a round robin manner. Each node then runs a special type of listener called SCAN Listener. Here are the processes:
# ps -aef|grep lsnr
oragrid  9437224        1   0   Dec 23      -  0:04 /Proltst/oragrid/grid1/bin/tnslsnr LISTENER_SCAN1 -inherit 
oragrid 32374934 55902452   0 21:15:04  pts/1  0:00 grep lsnr 
oragrid 64356462        1   0   Dec 23      -  0:05 /Proltst/oragrid/grid1/bin/tnslsnr LISTENER -inherit

To check the listener status, you can use the well known lsnrctl utility:


# lsnrctl status LISTENER_SCAN1

LSNRCTL for IBM/AIX RISC System/6000: Version 11.2.0.2.0 - Production on 26-DEC-2010 21:15:42

Copyright (c) 1991, 2010, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_SCAN1
Version                   TNSLSNR for IBM/AIX RISC System/6000: Version 11.2.0.2.0 - Production
Start Date                23-DEC-2010 14:37:10
Uptime                    3 days 6 hr. 38 min. 31 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      ON
Listener Parameter File   /Proltst/oragrid/grid1/network/admin/listener.ora
Listener Log File         /Proltst/oragrid/grid1/log/diag/tnslsnr/dantstpdb01/listener_scan1/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.76.62)(PORT=1521)))
Services Summary...
Service "PROLADMIN" has 1 instance(s).
  Instance "PROLTST11", status READY, has 1 handler(s) for this service...
Service "PROLTST1" has 1 instance(s).
  Instance "PROLTST11", status READY, has 1 handler(s) for this service...
Service "PROLTST1XDB" has 1 instance(s).
  Instance "PROLTST11", status READY, has 1 handler(s) for this service...
Service "Proltst_t1_preconnect" has 1 instance(s).
  Instance "PROLTST11", status READY, has 1 handler(s) for this service...
Service "Proltst_t2_preconnect" has 1 instance(s).
  Instance "PROLTST11", status READY, has 1 handler(s) for this service...
The command completed successfully

With this in place, check for the SCAN Listener:
# srvctl config scan   
SCAN name: dantstp-scan, Network: 1/192.168.76.0/255.255.255.0/en0
SCAN VIP name: scan1, IP: /dantstp-scan/192.168.76.62
SCAN VIP name: scan2, IP: /dantstp-scan/192.168.76.63
SCAN VIP name: scan3, IP: /dantstp-scan/192.168.76.64 

The three SCAN VIPs run across the two nodes: two on one and the third on the other. You can confirm that:
dantstpdb01.oragrid:/home/oragrid # srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node dantstpdb01
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node dantstpdb02
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node dantstpdb02
After that, the TNSNAMES.ORA entry will look like this:
APP=
  (DESCRIPTION=
    (FAILOVER=on)
    (LOAD_BALANCE=on)
    (ADDRESS_LIST=
      (ADDRESS=
        (PROTOCOL=TCP)
        (HOST=dantstp-scan)
        (PORT=1521)
      )
    )
    (CONNECT_DATA=
      (FAILOVER_MODE=
        (TYPE=select)
        (METHOD=basic)
        (RETRIES=120)
      )
      (SERVER=dedicated)
      (SERVICE_NAME=APP)
    )
  )

Note that there is now only one hostname, the SCAN name, instead of the two VIP hostnames used before. When clients call it, it resolves, in a round-robin fashion, to one of the three IP addresses defined in the DNS for that entry. The corresponding listener receives the request and forwards it to one of the database instances. When you add a new node to the cluster, there is no need to change the TNSNAMES.ORA file since the SCAN does not change; one of the three listeners will pick up the request and direct it to the least loaded node. Similarly, you can remove a node from the cluster without making a single change to the TNSNAMES.ORA file. SCAN allows you to do that.
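For example, a client could connect through the SCAN using the easy connect syntax instead of a TNS entry; a hedged sketch (the username and password are illustrative):
$ sqlplus app_user/app_password@//dantstp-scan:1521/APP
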
The SCAN is configured when you install the Grid Infrastructure. If you haven't done that, don't despair; you can add it later using this command:
$ srvctl add scan -n dantstp-scan 

To add the SCAN listener, you can use
$ srvctl add scan_listener -l PROL -s -p 1521

The "-s" option tells it not to restrict any ports. The "-l" option is the prefix. The prefix is applied to SCAN listener name.

OCR and Voting on ASM

In the prior releases of Oracle, the Oracle Cluster Registry (OCR) and Voting Disks were placed on raw devices. Since raw devices have been deprecated, the choice now is between a cluster filesystem and an ASM diskgroup. The OCR and voting disks must be on shared storage, so a local filesystem will not work. Clustered filesystems may not be an option due to high cost, and other "free" options such as network filesystems are usually slow and unreliable. So ASM remains the best choice. The OCR and voting disks can be placed on any available ASM diskgroup; they do not need diskgroups created exclusively for them.
But remember, in a clustered system the cluster must come up before the ASM instance can start in clustered mode on all nodes. For the cluster to come up, it needs the OCR and voting disks. It's a classic chicken-and-egg riddle: which one should come up first?
In Oracle Database 11g Release 2, the solution lies in a special arrangement with ASM. When the OCR and voting disks are placed in a diskgroup, the Clusterware places them at a special fixed location on the underlying disks. When the cluster starts, it checks that location to read the cluster registry and voting files; there is no need to bring up the entire ASM stack just to read them. The OCR is also replicated across the underlying disks of the diskgroup, so the failure of a single disk does not mean the loss of the OCR.
Let’s see how it works. First check the voting devices. The following command shows the voting disks.
# crsctl query css votedisk
##  STATE    File Universal Id                File Name            Disk group
--   -----      -----------------                          ---------                ---------
1.  ONLINE   50afef59821b4f5ebfd5b7bd17321d8b (/dev/emcdiskpath21) [PROLTSTCRS1]
2.  ONLINE   59216f51595d4fa9bf3aa71c502843b9 (/dev/emcdiskpath22) [PROLTSTCRS1]
3.  ONLINE   60356b15ab2b4fd7bfad1b5c79f1444c (/dev/emcdiskpath23) [PROLTSTCRS1]
4.  ONLINE   8fbd5815460b4fc4bf23fd96e01c3a72 (/dev/emcdiskpath24) [PROLTSTCRS1]
5.  ONLINE   95392036391f4fb7bf90c8cfa09b203e (/dev/emcdiskpath25) [PROLTSTCRS1]
Located 5 voting disk(s).

Notice that there are five voting disks, all in the diskgroup PROLTSTCRS1. The reason for five votedisks is that there are five disks in the diskgroup. Each disk gets a voting disk. In case of a failure, the nodes try to grab the voting disks and the node with the highest number of disks wins and becomes the master.
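You can cross-check the disks of that diskgroup from the ASM side; a hedged sketch using asmcmd (assuming the 11.2 lsdsk syntax with the -G diskgroup filter), which should list the same five /dev/emcdiskpath2* devices shown above:
$ asmcmd lsdsk -G PROLTSTCRS1
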
Checking for the OCR files:
# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3444
         Available space (kbytes) :     258676
         ID                       :  984202808
         Device/File Name         : +PROLTSTCRS1
                                    Device/File integrity check succeeded
         Device/File Name         : +PROLTSTCRS2
                                    Device/File integrity check succeeded

                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check bypassed due to non-privileged user

Note that the OCR file is on the diskgroup PROLTSTCRS1 and its mirror is in the PROLTSTCRS2 diskgroup. Oracle writes the OCR and voting devices to the underlying disks at pre-designated locations, so it is not difficult to get the contents of these files when the cluster starts up.
Let's see that with an actual example, using the logs from CSS and CRS. They are located at $ORACLE_HOME/log/<Hostname>/cssd and $ORACLE_HOME/log/<Hostname>/crsd respectively. Here is an excerpt from one of the logs. The line says that it found a "potential" voting file on one of the disks, /dev/emcdiskpath25.
2010-12-19 17:46:36.517: [    CSSD][1286]clssnmvDiskVerify: discovered a potential voting file
2010-12-19 17:46:36.519: [   SKGFD][1286]Handle 1115de4f0 from lib :UFS:: for disk :/dev/emcdiskpath25:
2010-12-19 17:46:36.519: [    CSSD][1286]clssnmvDiskVerify: Successful discovery for disk /dev/emcdiskpath25,
                                 UID 95392036-391f4fb7-bf90c8cf-a09b203e, Pending CIN 0:0:1, Committed CIN 0:0:1
2010-12-19 17:46:36.520: [   SKGFD][1286]Lib :UFS:: closing handle 1115de4f0 for disk :/dev/emcdiskpath25:
2010-12-19 17:46:36.520: [    CSSD][1286]clssnmvDiskVerify: discovered a potential voting file
2010-12-19 17:46:36.522: [   SKGFD][1286]Handle 1115de4f0 from lib :UFS:: for disk :/dev/emcdiskpath24:
2010-12-19 17:46:36.524: [    CSSD][1286]clssnmvDiskVerify: Successful discovery for disk /dev/emcdiskpath24,
                                UID 8fbd5815-460b4fc4-bf23fd96-e01c3a72, Pending CIN 0:0:1, Committed CIN 0:0:1
2010-12-19 17:46:36.525: [   SKGFD][1286]Lib :UFS:: closing handle 1115de4f0 for disk :/dev/emcdiskpath24:
2010-12-19 17:46:36.525: [    CSSD][1286]clssnmvDiskVerify: discovered a potential voting file
2010-12-19 17:46:36.526: [   SKGFD][1286]Handle 1115de4f0 from lib :UFS:: for disk :/dev/emcdiskpath23:
2010-12-19 17:46:36.528: [    CSSD][1286]clssnmvDiskVerify: Successful discovery for disk /dev/emcdiskpath23,
                                 UID 60356b15-ab2b4fd7-bfad1b5c-79f1444c, Pending CIN 0:0:1, Committed CIN 0:0:1
2010-12-19 17:46:36.529: [   SKGFD][1286]Lib :UFS:: closing handle 1115de4f0 for disk :/dev/emcdiskpath23:
2010-12-19 17:46:36.529: [    CSSD][1286]clssnmvDiskVerify: discovered a potential voting file
2010-12-19 17:46:36.530: [   SKGFD][1286]Handle 1115de4f0 from lib :UFS:: for disk :/dev/emcdiskpath22:
2010-12-19 17:46:36.532: [    CSSD][1286]clssnmvDiskVerify: Successful discovery for disk /dev/emcdiskpath22, 
                                UID 59216f51-595d4fa9-bf3aa71c-502843b9, Pending CIN 0:0:1, Committed CIN 0:0:1
2010-12-19 17:46:36.533: [   SKGFD][1286]Lib :UFS:: closing handle 1115de4f0 for disk :/dev/emcdiskpath22:
2010-12-19 17:46:36.533: [    CSSD][1286]clssnmvDiskVerify: discovered a potential voting file
2010-12-19 17:46:36.534: [   SKGFD][1286]Handle 1115de4f0 from lib :UFS:: for disk :/dev/emcdiskpath21:
2010-12-19 17:46:36.536: [    CSSD][1286]clssnmvDiskVerify: Successful discovery for disk /dev/emcdiskpath21, 
                                 UID 50afef59-821b4f5e-bfd5b7bd-17321d8b, Pending CIN 0:0:1, Committed CIN 0:0:1
2010-12-19 17:46:36.537: [   SKGFD][1286]Lib :UFS:: closing handle 1115de4f0 for disk :/dev/emcdiskpath21:
2010-12-19 17:46:36.537: [    CSSD][1286]clssnmvDiskVerify: Successful discovery of 5 disks
2010-12-19 17:46:36.537: [    CSSD][1286]clssnmCompleteRmtDiscoveryReq: Completing voting file discovery requested by node dantstpdb01, number 1
2010-12-19 17:46:36.537: [    CSSD][1286]clssnmSendDiscoverAck: Discovery complete, notifying requestor node dantstpdb01

Note that the voting files were properly discovered and enumerated with labels, known as File Universal IDs (FUIDs). For instance, the FUID of voting file 5 is 95392036-391f4fb7-bf90c8cf-a09b203e. If you look carefully, that is what the crsctl query css votedisk command also displays.

The placement of voting disks on an ASM diskgroup is not cast in stone. To change the location of the voting disks from one diskgroup to another, use the following command:
# crsctl replace votedisk +NEWVOTEDISKDG

Another option is to add new voting files and delete the old ones. First get the FUID:
# crsctl query css votedisk
##  STATE    File Universal Id                File Name            Disk group
--   -----      -----------------                        ---------                ---------
1.  ONLINE   50afef59821b4f5ebfd5b7bd17321d8b (/dev/emcdiskpath21) [PROLTSTCRS1]
2.  ONLINE   59216f51595d4fa9bf3aa71c502843b9 (/dev/emcdiskpath22) [PROLTSTCRS1]
3.  ONLINE   60356b15ab2b4fd7bfad1b5c79f1444c (/dev/emcdiskpath23) [PROLTSTCRS1]
4.  ONLINE   8fbd5815460b4fc4bf23fd96e01c3a72 (/dev/emcdiskpath24) [PROLTSTCRS1]
5.  ONLINE   95392036391f4fb7bf90c8cfa09b203e (/dev/emcdiskpath25) [PROLTSTCRS1]
Located 5 voting disk(s).

Delete a voting file using its FUID:
$ crsctl delete css votedisk 95392036391f4fb7bf90c8cfa09b203e

Add a new voting file on a different diskgroup.
$ crsctl add css votedisk +NEWVOTEDISKDG

The same technique can be applied to add a new OCR file:
# ocrconfig -add +PROLTSTDATA1

And you can remove the old one.
# ocrconfig -delete +PROLTSTCRS1

This also works when you upgrade Clusterware from a pre-11.2 release to 11.2 where the OCR and voting devices were on raw devices. After the upgrade, you can migrate them to ASM using the commands mentioned above.
Incidentally, did you notice that there are places for five copies of the OCR file? In this release you can create up to five copies on five different diskgroups to make them redundant. This is a shift from earlier releases, which allowed only two copies (one primary and one mirror).

DST Patch

Daylight Saving Time patches are mandatory but seldom pleasant to apply. In this release of Oracle, the DST patches are applied completely online; an instance recycle is not required.

Multiple Interconnect

This is an exciting reinstatement of a very popular feature: if you worked with Oracle Parallel Server, there used to be a facility (since de-supported) to specify more than one interface for the cluster interconnect. In the prior versions of Clusterware, you could not specify more than one interface for the private interconnect. If you actually had multiple physical interfaces, such as eth1, eth2 and so on, your only options were to use just one of them, or to bind them into a single logical interface.
The first option is not a good one; it creates a single point of failure and limits bandwidth. The second option is generally preferred, but it requires OS-specific tools to provide the binding, such as APA on HP-UX, trunking on Solaris, NIC bonding on Linux, and so on.
In Oracle Database 11g Release 2 it is possible to specify multiple NICs for the interconnect. The Clusterware will use at most four of the interfaces at a time and will load balance across them. It still makes sense to define more than four: when one of the active interfaces fails, the Clusterware moves to one of the available (and not yet used) interfaces.
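Registering an additional private interface is done with oifcfg; a hedged sketch (the interface name and subnet here are illustrative):
$ oifcfg setif -global eth2/10.10.3.0:cluster_interconnect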

Cluster Time Service

Timekeeping, it may seem, is the least of worries in a Clusterware. It definitely is not. A cluster is composed of multiple machines each with its own timekeeping mechanism. It is possible that the clock runs faster or slower (or altogether absent) on one server. This discrepancy in time may lead to all types of issues – starting with the relatively benign difficulty in syncing up cluster logs during troubleshooting, to the very damaging issue of cluster node eviction.
To avoid this issue, most systems have a time synchronization service configured, typically the Network Time Protocol (NTP). If NTP is not configured, the issues mentioned above can occur. Therefore, this release of Oracle Clusterware installs its own time syncing mechanism, the Cluster Time Synchronization Service (CTSS). CTSS always runs, but in an "Observer" mode. Here is how you check the presence of the service:
# crsctl check ctss 
   CRS-4700: The Cluster Time Synchronization Service is in Observer mode.

If NTP is not configured (or stops running), CTSS takes over as the default time synchronization service and switches to "Active" mode. You can check for the transition using the command above. The event is also recorded in the alert log.
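You can also verify clock synchronization across the nodes with the Cluster Verification Utility (more on it in the next section); a hedged sketch:
$ cluvfy comp clocksync -n all -verbose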

Enhanced Cluster Verification Utility

In this release, the Cluster Verification Utility (CVU) has been considerably enhanced with additional checks and is integrated with the Oracle installer. It is also run automatically and periodically by the Clusterware. To check the configured run interval, use the following command:
# srvctl config cvu
   CVU is configured to run once every 360 minutes

If you want to check whether it is running and where:
# srvctl status cvu             
   CVU is enabled and running on node dantstpdb02

You can modify the interval. To make it run every 10 mins instead of 6 hours:
# srvctl modify cvu -t 10

There are several new options for CVU. Here is an example of the ACFS check:
$ cluvfy stage -pre acfscfg -n dantstpdb01,dantstpdb02 -asmdev /dev/rhdiskpower*
Performing pre-checks for ACFS Configuration 
Checking node reachability...
     … output truncated …
Task ACFS Drivers check started...
   Task ACFS Drivers check passed
Pre-check for ACFS Configuration was successful.

Here is another example, checking the SCAN component setup (described earlier in this installment):
# cluvfy comp scan
Verifying scan 
Checking Single Client Access Name (SCAN)...
Checking TCP connectivity to SCAN Listeners...
   TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for "dantstp-scan"...
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful.

Rather than explaining every possible CVU command, which would fill several pages, I will direct you to the relevant documentation. It is worth investing a little time in understanding the various options.
