Saturday, July 27, 2013

How to clean up failed 11g installations

Posted by Mahalingesh On Saturday, July 27, 2013

CRS-1013:The OCR Location in an ASM Disk Group is Inaccessible on Clusterware Startup [ID 1153244.1]
CRS Does not Start after Node Reboot in 11gR2 Grid Infrastructure [ID 1215893.1]
11.2.0.1 Grid Infrastructure Installation Failed at Second Nodes While Running root.sh Due To ASM Crash Caused by lmon Timeout [ID 1239123.1]
11gR1
The Oracle-provided scripts rootdelete.sh and rootdeinstall.sh remove Oracle Clusterware from the system. After running these scripts, run Oracle Universal Installer to remove the Oracle Clusterware home.
Run rootdelete.sh from the Oracle Clusterware home on each node. It:
- Stops the Oracle Clusterware stack
- Removes the inittab entries
- Deletes some of the Oracle Clusterware files
Run rootdeinstall.sh on the local node only, after rootdelete.sh has been run on all nodes of the cluster.
- Use this command either to remove the Oracle Clusterware OCR file or to downgrade an existing installation.
- Check inittab entries before reinstall
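The inittab check above is easy to script. A minimal sketch (the helper name is mine; init.cssd, init.crsd and init.evmd are the classic Clusterware inittab entries):

```shell
# clusterware_inittab_clean FILE
# Succeeds (exit 0) only when FILE contains no leftover Oracle Clusterware
# init entries (init.cssd / init.crsd / init.evmd).
clusterware_inittab_clean() {
    ! grep -Eq 'init\.(cssd|crsd|evmd)' "$1"
}

# Usage before a reinstall:
if clusterware_inittab_clean /etc/inittab; then
    echo "inittab is clean"
else
    echo "stale Clusterware entries remain in /etc/inittab"
fi
```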
11gR2
Oracle 11gR2 provides the rootcrs.pl tool, which allows de-configuration without a full deinstall. After Oracle Clusterware is de-configured, we can fix the problem and re-run root.sh to restart the Oracle Clusterware configuration.
• To find the problem when root.sh fails, check:
- $GRID_HOME/cfgtoollogs/crsconfig/rootcrs_.log
- The ASM log files
• As the root user, run:
perl $GRID_HOME/crs/install/rootcrs.pl
perl $GRID_HOME/crs/install/rootcrs.pl -deconfig -force
perl $GRID_HOME/crs/install/rootcrs.pl -verbose -deconfig -force
perl $GRID_HOME/crs/install/rootcrs.pl -deconfig -force -lastnode (on the last node of the cluster)
Deinstall utility
The deinstall utility removes Oracle Clusterware and ASM from the server.
• $GRID_HOME/deinstall
• $GRID_HOME/deinstall -checkonly
Generate a deinstall parameter file by running the deinstall command with the -checkonly flag before you run the actual deinstall, or use the response file template and edit it manually to create the parameter file to use with the deinstall command.
• Check the inittab entries before reinstalling for standalone GRID
• Check the upstart scripts before reinstalling for RAC GRID
CASE 1:
CRS-2500: Cannot stop resource 'ora.crsd' as it is not running
CRS-4000: Command Stop failed, or completed with errors.
Command return code of 1 (256) from command: /oragrid/product/11.2/bin/crsctl stop resource ora.crsd -init
Stop of resource "ora.crsd -init" failed
Failed to stop CRSD
CASE 2:
Start of resource "ora.asm" failed
CRS-2672: Attempting to start 'ora.asm' on 'LINUXb'
CRS-5017: The resource action "ora.asm start" encountered the following error:
ORA-03113: end-of-file on communication channel
Process ID: 0
Session ID: 0 Serial number: 0
. For details refer to "(:CLSN00107:)" in "/u01/app/grid/11.2.0.3/log/LINUXb/agent/ohasd/oraagent_oracle/oraagent_oracle.log".
CRS-2674: Start of 'ora.asm' on 'LINUXb' failed
CRS-2679: Attempting to clean 'ora.asm' on 'LINUXb'
CRS-2681: Clean of 'ora.asm' on 'LINUXb' succeeded
CRS-4000: Command Start failed, or completed with errors.
Failed to start Oracle Grid Infrastructure stack
Failed to start ASM at /u01/app/grid/11.2.0.3/crs/install/crsconfig_lib.pm line 1272.
/u01/app/grid/11.2.0.3/perl/bin/perl -I/u01/app/grid/11.2.0.3/perl/lib -I/u01/app/grid/11.2.0.3/crs/install /u01/app/grid/11.2.0.3/crs/install/rootcrs.pl execution failed
[root@LINUXb:~]#
CASE 3:
CRS-2800: Cannot start resource 'ora.asm' as it is already in the INTERMEDIATE state on server 'LINUXb'
CRS-4000: Command Start failed, or completed with errors.
Failed to start Oracle Grid Infrastructure stack
Failed to start Cluster Ready Services at /u01/app/grid/11.2.0.3/crs/install/crsconfig_lib.pm line 1286.
/u01/app/grid/11.2.0.3/perl/bin/perl -I/u01/app/grid/11.2.0.3/perl/lib -I/u01/app/grid/11.2.0.3/crs/install /u01/app/grid/11.2.0.3/crs/install/rootcrs.pl execution failed
alertLINUXb.log

Friday, July 19, 2013

Oracle Restart

Posted by Mahalingesh On Friday, July 19, 2013

Grid Infrastructure is also available for installation on a stand-alone server and consists of the following:
1. It is managed by the Oracle High Availability Services daemon (OHASD)
2. During the installation steps, choose "Configure Oracle Grid Infrastructure for a Standalone Server"
3. This creates an OLR, as there is no OCR in a non-clustered environment
4. OLR location is
5. Databases and database listeners need to be added to the OLR for Oracle Restart using srvctl
6. Grid Infrastructure for a standalone server is a combination of a Clusterware subset known as Oracle Restart, and ASM
7. It is started by init using the /etc/init.d/ohasd script, which starts the ohasd.bin executable as root
8. Linux /etc/inittab entry:
    h1:35:respawn:/etc/init.d/init.ohasd run >/dev/null 2>&1
Benefits of using Oracle Restart:
1. Automatic resource startup at boot time, without shell scripts or the Oracle-supplied dbstart and dbshut scripts in the Oracle home
2. Resources are started in the correct sequence, based on the dependencies recorded in the OLR
3. Resources are also monitored by ohasd for availability and may be restarted in place if they fail
4. Role-managed services for Data Guard
5. Consistent command-line tools (crsctl and srvctl), as with clusters
6. Adding a standby database to Oracle Restart:
srvctl add database -d PS -o /u01/app/oracle/ps/11.2.0.3 -r PHYSICAL_STANDBY -s MOUNT -p '+DATA_PS/PS/spfilePS.ora' -t IMMEDIATE -a 'DATA_PS','FRA_PS'
[oracle@node1:~]$ srvctl config database -d PS
Database unique name: PS
Database name:
Oracle home: /u01/app/oracle/ps/11.2.0.3
Oracle user: oracle
Spfile: /u01/app/oracle/ps/11.2.0.3/dbs/spfilePS.ora
Domain:
Start options: mount
Stop options: immediate
Database role: PHYSICAL_STANDBY
Management policy: AUTOMATIC
Database instance: PS
Disk Groups: DATA_PS,FRA_PS
Services:
[oracle@node1:~]$
7. Listing the disk groups with -a in the command above creates a dependency between the database and its disk groups. This is particularly important because, without it, Oracle Restart may attempt to start the database before confirming that the disk groups are mounted.
8. Adding the listener when it has the same name as the ASM listener but runs from a different Oracle home:
srvctl add listener -o /u01/app/oracle/ps/11.2.0.3
Managing Oracle Restart
1. crsctl start has – to manually start the Oracle Restart stack when it is disabled or after manually stopping it
2. crsctl stop has [-f] – to manually stop the Oracle Restart stack; the -f option forces the stop
3. crsctl enable has – to enable automatic startup of the stack at server reboot
4. crsctl disable has – to disable automatic startup of the stack at server reboot
5. crsctl config has – to display the configuration of Oracle Restart
6. crsctl check has – to check the current status of Oracle Restart
Some of the crsctl commands used for clusters may also be used with Oracle Restart:
crsctl stat res -t – to check the status of the resources
crsctl start res – to start an individual resource
crsctl stop res – to stop an individual resource
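For quick checks I sometimes filter that status output. A sketch, assuming the NAME=/STATE= block format that `crsctl stat res` (without -t) prints; the function name and the sample data are mine:

```shell
# List resources that are not ONLINE, from `crsctl stat res` style output
# (NAME=/TYPE=/TARGET=/STATE= blocks). Pipe real output in place of the here-doc.
list_offline() {
    awk -F= '/^NAME=/ {name=$2} /^STATE=/ && $2 !~ /ONLINE/ {print name}'
}

list_offline <<'EOF'
NAME=ora.DATA.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE
STATE=ONLINE on node1
NAME=ora.LISTENER.lsnr
TYPE=ora.listener.type
TARGET=ONLINE
STATE=OFFLINE
EOF
# prints: ora.LISTENER.lsnr
```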

Thursday, June 6, 2013

dbms_metadata()

Posted by Mahalingesh On Thursday, June 06, 2013

Users:
set pages 0 echo off termout off long 9999999 verify off feedback off
exec DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'SQLTERMINATOR',true);
SELECT DBMS_METADATA.GET_DDL('USER', USERNAME)
FROM DBA_USERS u
WHERE u.username NOT IN ('SYS','SYSTEM','OUTLN','DBSNMP')
UNION ALL
SELECT DBMS_METADATA.GET_DDL('ROLE', ROLE)
FROM DBA_ROLES
UNION ALL
SELECT DBMS_METADATA.GET_GRANTED_DDL('ROLE_GRANT', USERNAME)
FROM DBA_USERS u
WHERE u.username NOT IN ('SYS','SYSTEM','OUTLN','DBSNMP')
AND EXISTS (SELECT 'x' FROM dba_role_privs drp WHERE drp.grantee = u.username)
UNION ALL
SELECT DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT', USERNAME)
FROM DBA_USERS u
WHERE u.username NOT IN ('SYS','SYSTEM','OUTLN','DBSNMP')
AND EXISTS (SELECT 'x' FROM dba_sys_privs dsp WHERE dsp.grantee = u.username);
DB Links:
SELECT DBMS_METADATA.GET_DDL('DB_LINK',a.db_link,a.owner) FROM all_db_links a;
Tablespace:
SELECT DBMS_METADATA.GET_DDL('TABLESPACE', TABLESPACE_NAME) FROM DBA_TABLESPACES;

Friday, April 19, 2013

perl

Posted by Mahalingesh On Friday, April 19, 2013

My perl one-liners

perl -pi -e 's/PROD/TEST/g' create_control_file.sql

awk

Posted by Mahalingesh On Friday, April 19, 2013

My awk one-liners
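One awk one-liner I reach for often: list the SIDs flagged for autostart in oratab (the same field dbstart keys on; the sample oratab content below is made up):

```shell
# Print the ORACLE_SIDs whose autostart flag (third field) is Y,
# skipping comment lines
autostart_sids() {
    awk -F: '!/^#/ && NF >= 3 && $3 == "Y" {print $1}' "$1"
}

oratab_sample=$(mktemp)
cat > "$oratab_sample" <<'EOF'
# /etc/oratab sample
PROD:/u01/app/oracle/product/11.2.0.3:Y
TEST:/u01/app/oracle/product/11.2.0.3:N
EOF
autostart_sids "$oratab_sample"   # prints: PROD
```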

dbora

Posted by Mahalingesh On Friday, April 19, 2013


#!/bin/ksh
#################################################################################
# script location for Solaris /etc/init.d/dbora 
#
#  
#################################################################################
initvar() #1#
{
ORACLE_HOME="/u01/app/oracle/product/11.2.0.3/";export ORACLE_HOME
SCRIPTDIR=$ORACLE_HOME/bin;export SCRIPTDIR     # default: dbstart/dbshut live here
# SCRIPTDIR="/u01/app/oracle/admin/DBA/scripts" # uncomment to use a custom scripts directory
ORA_OWNER="oracle";export ORA_OWNER
ORACLE_BASE=${ORACLE_BASE:=/u01/app/oracle};export ORACLE_BASE
LOG=${ORACLE_BASE}/admin/DBA/log/dbora.$1.$$.log;export LOG
HOST_TYPE="`uname`"
if [ ${HOST_TYPE} = "AIX" -o ${HOST_TYPE} = "HP-UX" ]
then
    ORATABDIR=${TNS_ADMIN:=/etc}
else
    ORATABDIR=${TNS_ADMIN:=/var/opt/oracle}
fi
}
logmsg() #2#
{
echo "`date '+%m/%d/%Y %T'` $1" | tee -a ${LOG}
}
start_all() #3#
{
if [ -f ${SCRIPTDIR}/dbstart ]
then
    su - ${ORA_OWNER} -c "${SCRIPTDIR}/dbstart $ORACLE_HOME" | tee -a ${LOG} &
else
    logmsg "ERROR dbstart script is missing.  Databases not started"
fi
        su - $ORA_OWNER -c "$ORACLE_HOME/bin/lsnrctl start"
        touch /var/lock/subsys/dbora
}
stop_all() 
{
if [ -f ${SCRIPTDIR}/dbshut ]
then
    su - ${ORA_OWNER} -c "${SCRIPTDIR}/dbshut $ORACLE_HOME" | tee -a ${LOG} &
else
    logmsg "ERROR dbshut script is missing.  Databases not shutdown cleanly"
fi
        su - $ORA_OWNER -c "$ORACLE_HOME/bin/lsnrctl stop"
        rm -f /var/lock/subsys/dbora
}
#main#
initvar "$1"
case "$1" in
    'start')
            start_all
            ;;
    'stop')
            stop_all
            ;;
esac
exit 0
#################################################################################
# Add this script to init.d
# Solaris: ln -s /etc/init.d/dbora /etc/rc0.d/K01dbora
#          ln -s /etc/init.d/dbora /etc/rc3.d/S99dbora
#################################################################################
# Test this script as root first, then test with a reboot
# Solaris: su - ${ORA_OWNER} -c "${SCRIPTDIR}/dbstart $ORACLE_HOME"
#          su - ${ORA_OWNER} -c "${SCRIPTDIR}/dbshut $ORACLE_HOME"
#################################################################################

Monday, April 1, 2013

Database Role Switchover and Failover

Posted by Mahalingesh On Monday, April 01, 2013



Procedures

The physical standby instance is started up and shut down by VCS. If you want to do it manually for maintenance, please follow Scenario 1.

The standby database is normally operated in managed recovery mode. In managed recovery mode, the primary database ships archived redo logs to the standby database, and log apply services automatically apply them to the standby database.

my.txt 

Look like