Thursday, March 28, 2019

Cannot Start RAC After Patching


SYMPTOMS

After manually patching the 12.2 Grid Infrastructure home, rootcrs.sh -postpatch fails with:
2017-11-19 16:29:27: Oracle CRS stack has been shut down
2017-11-19 16:29:27: The stack was already down before stopping it
2017-11-19 16:29:27: Starting CRS without resources...
2017-11-19 16:29:27: OHASD needs to be up for disabling CRS resource
2017-11-19 16:29:27: Executing cmd: /u01/app/12.2.0.1/grid/bin/crsctl start crs -noautostart
2017-11-19 16:29:27: Command output:
> CRS-6706: Oracle Clusterware Release patch level ('748994161') does not match Software patch level ('0'). Oracle Clusterware cannot be started.
> CRS-4000: Command Start failed, or completed with errors.
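
CRS-6706 means the patch level recorded for the Clusterware release ('748994161') does not match the software patch level recorded on the node ('0'). A quick way to compare the two values (a minimal sketch; output format varies by version):
# $GI_HOME/bin/crsctl query crs releasepatch
# $GI_HOME/bin/crsctl query crs softwarepatch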

CHANGES

 In earlier Grid Infrastructure releases, the following options were available for manual patching:
A.  In 12.1.0.x, these two commands unlock and lock the home for patching. opatchauto runs them automatically, and they are also run by hand when patching manually with opatch. The -prepatch option requires that CRS be
running on both nodes, and -postpatch requires that -prepatch completed successfully.
rootcrs.sh -prepatch
rootcrs.sh -postpatch

B.  These two commands come from GI releases before 12.1, although they could still be used in 12.1. The -unlock option does not require CRS to be running, and -patch does not require that -unlock
completed successfully, so this pair could be used to work around patching issues. That is no longer the case in 12.2, where the -patch option no longer exists.
rootcrs.sh -unlock
rootcrs.sh -patch
In 12.2, users must use rootcrs.sh -prepatch and rootcrs.sh -postpatch for manual patching, as sketched below.
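
For reference, the supported 12.2 manual patching flow looks roughly like this (a sketch; <patch_location> is a placeholder, and the patch README remains the authoritative source).
As root, prepare the GI home:
# $GI_HOME/crs/install/rootcrs.sh -prepatch
As the grid user, apply the patch to the local home:
$ cd <patch_location>
$ $GI_HOME/OPatch/opatch apply -local
As root, complete the patching and restart the stack:
# $GI_HOME/crs/install/rootcrs.sh -postpatch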

CAUSE

This issue was caused by rootcrs.sh -prepatch not completing successfully before the patch was applied. The user ran rootcrs.sh -unlock because rootcrs.sh -prepatch failed, and then applied the patch manually, so the software patch level recorded for the node was never updated to match the Clusterware release patch level.

SOLUTION

Please use the following steps to complete the patching:
1.  Run the following command as the root user to complete the patch configuration on the local node (this updates the software patch level that CRS-6706 reported as '0'):
# $GI_HOME/bin/clscfg -localpatch

2.  Run the following command as the root user to lock the GI home:
# $GI_HOME/crs/install/rootcrs.sh -lock

3.  Run the following command as the root user to start the GI:
# $GI_HOME/bin/crsctl start crs
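
Once CRS is up, a quick sanity check is to confirm that the patch levels now match and the cluster is healthy:
# $GI_HOME/bin/crsctl query crs softwarepatch
# $GI_HOME/bin/crsctl check cluster -all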

Thursday, March 14, 2019

Apache Software Load Balancer

Configuration Sample
<VirtualHost *:80> 
       ProxyRequests off

       ServerName cluster.goweekend.ca

       <Proxy balancer://cluster>
               # WebHead1
               BalancerMember http://48.31.108.98

               # WebHead2
               BalancerMember http://48.31.108.99


               # Security "technically we aren't blocking
               # anyone but this is the place to make
               # those changes.
               Require all granted
               # In this example all requests are allowed.

               # Load Balancer Settings
               # We will be configuring a simple round-
               # robin style load balancer.  This means
               # that all webheads take an equal share
               # of the load.
               ProxySet lbmethod=byrequests

       </Proxy>

       # balancer-manager
       # This tool is built into the mod_proxy_balancer
       # module and will allow you to do some simple
       # modifications to the balanced group via a gui
       # web interface.
       <Location /balancer-manager>
               SetHandler balancer-manager

               # I recommend locking this one down to
               # your office
               Require host wkstation.goweekend.ca

       </Location>

       # Point of Balance
       # This setting allows us to explicitly name
       # the location in the site that we want to be
       # balanced; in this example we balance "/",
       # or everything in the site.
       ProxyPass /balancer-manager !
       ProxyPass / balancer://cluster/

</VirtualHost>
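
The sample above assumes mod_proxy, mod_proxy_http, mod_proxy_balancer, and mod_lbmethod_byrequests are loaded. On a Debian/Ubuntu-style Apache 2.4 installation, enabling them would look something like this (other distributions load modules via LoadModule lines in httpd.conf instead):
# a2enmod proxy proxy_http proxy_balancer lbmethod_byrequests
# systemctl restart apache2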

Tuesday, March 5, 2019

Linux Root User Name Accidentally Changed in /etc/passwd

1. Boot into rescue mode from a Linux bootable USB. If the LVM volumes cannot be mounted, activate them first:

# vgchange -a y
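
With the volume group active, you can mount the root filesystem and repair the entry. A sketch, assuming the root logical volume is /dev/mapper/centos-root and the root account was renamed to "admin" (substitute your actual device and name):

# mount /dev/mapper/centos-root /mnt
# sed -i 's/^admin:x:0:0:/root:x:0:0:/' /mnt/etc/passwd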

2. Alternatively, if your account has sudo privileges, you can switch to the renamed root account and then correct /etc/passwd:

# sudo -i -u <wrong user name>
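
Once you are in a UID-0 shell, the same one-line repair applies directly (again, "admin" stands in for whatever the root entry was renamed to):

# sed -i 's/^admin:x:0:0:/root:x:0:0:/' /etc/passwd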