Tuesday, December 31, 2019

Kubernetes: Journal

[root@k8s-master ~]# systemctl restart docker && systemctl enable docker
[root@k8s-master ~]# systemctl restart kubelet && systemctl enable kubelet

swapoff -a
yum list --showduplicates kubeadm --disableexcludes=kubernetes
yum install -y kubeadm-1.17.0-0 --disableexcludes=kubernetes
kubeadm version
kubectl drain $CP_NODE --ignore-daemonsets
kubectl get nodes
kubectl drain expc2018
kubectl describe nodes
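
The usual next steps in the kubeadm upgrade flow, per the v1.17 upgrade docs (a sketch; run on the control-plane node):

kubeadm upgrade plan
kubeadm upgrade apply v1.17.0
kubectl uncordon $CP_NODE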

export kubever=$(kubectl version | base64 | tr -d '\n')
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"

 kubeadm token generate

kubeadm join 137.15.210.118:6443 --token ctrs8q.wr3dnzzs3awh1oz3 \
    --discovery-token-ca-cert-hash sha256:78d1e52f37983d795be38ace45f8e1fa8d0eda2c8e9316b94268ad5cf0a8e980
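
If the token has expired, a fresh join command can be printed on the control plane:

kubeadm token create --print-join-command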


https://kubernetes.io/docs/setup/production-environment/container-runtimes/#docker

# Install Docker CE
## Set up the repository
### Install required packages.
yum install yum-utils device-mapper-persistent-data lvm2

### Add Docker repository.
yum-config-manager --add-repo \
  https://download.docker.com/linux/centos/docker-ce.repo

## Install Docker CE.
yum update && yum install \
  containerd.io-1.2.10 \
  docker-ce-19.03.4 \
  docker-ce-cli-19.03.4

## Create /etc/docker directory.
mkdir /etc/docker

# Setup daemon.
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

# Restart Docker
systemctl daemon-reload
systemctl restart docker
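
To confirm the daemon picked up the systemd cgroup driver, a quick check:

docker info | grep -i cgroup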

Kubernetes: Init Log

# kubeadm init
W1230 11:40:24.994597    2582 validation.go:28] Cannot validate kube-proxy config - no validator is available
W1230 11:40:24.994684    2582 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.0
[preflight] Running pre-flight checks
        [WARNING HTTPProxy]: Connection to "https://137.15.210.118" uses proxy "http://proxy.csd.toronto.ca:8888". If that is not intended, adjust your proxy settings
        [WARNING HTTPProxyCIDR]: connection to "10.96.0.0/12" uses proxy "http://proxy.csd.toronto.ca:8888". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [expc2018 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 137.15.210.118]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [expc2018 localhost] and IPs [137.15.210.118 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [expc2018 localhost] and IPs [137.15.210.118 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W1230 11:41:11.035610    2582 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W1230 11:41:11.036793    2582 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 14.507597 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node expc2018 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node expc2018 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: ctrs8q.wr3dnzzs3awh1oz3
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 137.15.210.118:6443 --token ctrs8q.wr3dnzzs3awh1oz3 \
    --discovery-token-ca-cert-hash sha256:78d1e52f37983d795be38ace45f8e1fa8d0eda2c8e9316b94268ad5cf0a8e980


Kubernetes Nodes: Ready,SchedulingDisabled

Check nodes status

# kubectl get nodes
NAME       STATUS                     ROLES    AGE   VERSION
srv2020   Ready                      master   24h   v1.17.0
srv2021   Ready,SchedulingDisabled   <none>   23h   v1.17.0

Remove a node from service

# kubectl drain srv2021
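
If the node runs DaemonSet-managed pods, the default drain will refuse; the commonly needed flags in v1.17:

# kubectl drain srv2021 --ignore-daemonsets --delete-local-data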

Put a node back to service

# kubectl uncordon srv2021
node/srv2021 uncordoned

Kubernetes: lookup registry-1.docker.io on 17.15.20.19:53: no such host

Problem:
Error response from daemon: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 17.15.20.19:53: no such host

Reason:
Docker runs behind a proxy, but the daemon has no proxy configuration.

Solution:
https://docs.docker.com/config/daemon/systemd/

# cat /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.goweekend.ca:3288/"
Environment="HTTPS_PROXY=http://proxy.goweekend.ca:3288/"

failed to find subsystem mount for required subsystem: pids


Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: failed to find subsystem mount for required subsystem: pids

# pwd
/var/lib/kubelet
# cat kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--cgroup-driver=cgroupfs --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.1 --cgroups-per-qos=false --enforce-node-allocatable="
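
After editing the flags, restart the kubelet so they take effect (assuming a systemd-managed kubelet):

# systemctl daemon-reload
# systemctl restart kubelet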

Monday, December 30, 2019

NetworkReady=false


Problem: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized #1031



Fixed by downloading portmap to /opt/cni/bin
https://github.com/projectcalico/cni-plugin/releases/download/v1.9.1/portmap
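
A sketch of the download using the URL above (the binary must be made executable):

# curl -L -o /opt/cni/bin/portmap https://github.com/projectcalico/cni-plugin/releases/download/v1.9.1/portmap
# chmod +x /opt/cni/bin/portmap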

Wednesday, December 18, 2019

Github Beginner

You have an empty repository.
To get started, you will need to run these commands in your terminal.

New to Git? Learn the basic Git commands.

Configure Git for the first time:
git config --global user.name "System Administrator"
git config --global user.email "gitadmin@goweekend.ca"

Working with your repository

I just want to clone this repository
If you simply want to clone this empty repository, run this command in your terminal:

git clone https://sysadmin@git.goweekend.ca/scm/ar/arttest.git

My code is ready to be pushed
If you already have code ready to be pushed to this repository, run this in your terminal:

cd existing-project
git init
git add --all
git commit -m "Initial Commit"
git remote add origin https://sysadmin@git.goweekend.ca/scm/ar/arttest.git
git push -u origin master

My code is already tracked by Git
If your code is already tracked by Git, set this repository as your "origin" to push to:

cd existing-project
git remote set-url origin https://sysadmin@git.goweekend.ca/scm/ar/arttest.git
git push -u origin --all
git push origin --tags



The git rm -r command will recursively remove the folder from the index and the working tree:
git rm -r folder-name

Commit the change:
git commit -m "Remove duplicated directory"

Push the change to your remote repository:
git push origin master
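
If the folder should stay on disk and only stop being tracked, use the --cached variant:

git rm -r --cached folder-name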

Tuesday, December 17, 2019

Oracle RAC Troubleshooting


Log in as root:
crsctl check cluster -all
crsctl stat res -t -init

crsctl stop crs
crsctl start crs

srvctl config scan
srvctl config scan_listener


ps -elf | grep tns

script /var/tmp/`hostname`_listener_status.txt
lsnrctl status LISTENER
lsnrctl status LISTENER_SCAN1
lsnrctl status ASMNET1LSNR_ASM
lsnrctl service LISTENER
lsnrctl service LISTENER_SCAN1
lsnrctl service ASMNET1LSNR_ASM
exit

script /var/tmp/`hostname`_listener_status_`date +%s`.txt
find . -mmin -20 -type f -exec zip -r /var/tmp/`hostname`_logs.zip {} \;

# or, equivalently:
zip -r /var/tmp/`hostname`_logs.zip $(find . -mmin -20 -type f)

Monday, December 16, 2019

SQLPLUS debug mode


Insert the entries below into sqlnet.ora on the client machine:


DIAG_ADR_ENABLED = OFF
TRACE_LEVEL_CLIENT = SUPPORT
TRACE_DIRECTORY_CLIENT = /var/tmp/sqlplus
TRACE_TIMESTAMP_CLIENT = ON
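
The trace directory generally must already exist before traces can be written; create it if needed:

mkdir -p /var/tmp/sqlplus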

Tuesday, December 10, 2019

Oracle Database Listener TCP Validation

https://blog.dbi-services.com/oracle-12cr2-dataguard-and-tcp-valid_node_checking/

Standalone $ORACLE_HOME/network/admin
NAMES.DIRECTORY_PATH= (TNSNAMES, ONAMES, HOSTNAME, EZCONNECT)

SQLNET.ALLOWED_LOGON_VERSION_SERVER = 11
ADR_BASE = /usr2/app/oracle
tcp.validnode_checking = yes
#tcp.invited_nodes = (8.5.2.163, 8.5.2.7,)
tcp.invited_nodes = (127.0.0.1, 8.5.2.163, 8.5.2.7, 8.5.19.50)
tcp.excluded_nodes = (8.5.2.165)

RAC $GRID_HOME/network/admin
NAMES.DIRECTORY_PATH= (TNSNAMES, ONAMES, HOSTNAME, EZCONNECT)

SQLNET.ALLOWED_LOGON_VERSION_SERVER = 11
ADR_BASE = /usr2/app/oracle
tcp.validnode_checking = yes
# in subnet fashion:
#tcp.invited_nodes = (8.5.2.163/24, 8.5.2.7/24,)
tcp.invited_nodes = (127.0.0.1, 8.5.2.163, 8.5.2.7, 8.5.19.50)
tcp.excluded_nodes = (8.5.2.165)
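
Valid node checking parameters are read at listener startup, so restart the listener after editing; a single-instance sketch (RAC listeners are managed via srvctl):

lsnrctl stop
lsnrctl start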

Thursday, November 28, 2019

Bi-Directional Authentication with Apache and Curl

Server: apache01.goweekend.ca
Client: client01.goweekend.ca

Request a server certificate for apache01.goweekend.ca: apache01.pem,
apache01.key.encrypted, apache01.key

Request a client (or server-client) certificate for client01.goweekend.ca: client01.pem,
client01.key.encrypted, client01.key

Download Root or Sub Certificates

ca.pem
sub.pem


cat sub.pem > server-full-chain.pem
cat ca.pem >> server-full-chain.pem

cat client01.pem > client01-full-chain.pem
cat sub.pem >> client01-full-chain.pem
cat ca.pem >> client01-full-chain.pem


Listen 8443
LogLevel debug
<VirtualHost *:8443>
  DocumentRoot "/usr/share/helloworld"
  ServerName apache01.goweekend.ca:3443

  ServerAdmin fei@goweekend.ca
  SSLEngine on
  SSLCertificateFile /etc/httpd/certs/apache01.crt
  SSLCertificateKeyFile /etc/httpd/certs/apache01.key

  SSLVerifyClient require
  SSLVerifyDepth 10
  SSLCACertificateFile /etc/httpd/certs/server-full-chain.pem
  <Location />
    Order allow,deny
    allow from all
    ##SSLRequire (%{SSL_CLIENT_S_DN_OU} eq "risk")
   SSLRequire (%{SSL_CLIENT_S_DN_OU} eq "risk" or %{SSL_CLIENT_S_DN_CN} in {"mjackson", "jsina"})

    ###SSLRequire (%{SSL_CLIENT_S_DN_CN} eq "client01.goweekend.ca")
  </Location>
  CustomLog /var/log/httpd/goweekend_ssl.log \
    "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b \"%{SSL_CLIENT_S_DN_CN}x\""

  #ProxyPass / http://127.0.0.1/
  #ProxyPassReverse / http://127.0.0.1/
</VirtualHost>

###################################################
SSLRequire Sample

    #SSLRequire (    %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \
    #            and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \
    #            and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \
    #            and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \
    #            and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20       ) \
    #           or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/
    #</Location>


# curl -vv --cert /root/certs/client01.pem --cacert /root/certs/client01-full-chain.pem --key /root/certs/client01.key https://github.csd.toronto.ca:3443


Notes:
1. Make sure the concatenated certificates keep the END and BEGIN markers on separate lines:
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----

2. Remove special characters from the certificates and CA certificates, especially when a file was created on Windows and then transferred to Unix/Linux.
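
A quick way to strip Windows carriage returns, assuming dos2unix is installed:

# dos2unix ca.pem sub.pem client01.pem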

Friday, November 22, 2019

Enable Yum Repository in Command Line

List sharutils

# yum --enablerepo=* list sharutils

Install a package (the flag works the same way for any package, e.g. docker-engine)

# yum --enablerepo=* install docker-engine


Wednesday, October 16, 2019

Clone Solaris Guest DOM to another CDOM

1. Shut down the source guest DOM
2. Create a snapshot of the source guest DOM disk
zfs snapshot -r rpool/vdisks/gdom001/gdom001-disk0@clone
3. Send the source snapshot to the destination; the destination pool and ZFS dataset can be renamed
On the destination:
 zfs create -V 100G vpool/vdisks/gdom002/gdom002-disk0

On the source:
 zfs send rpool/vdisks/gdom001/gdom001-disk0@clone | ssh dsthost zfs recv -F vpool/vdisks/gdom002/gdom002-disk0

4. Create the new guest DOM with the cloned disk
# ldm add-domain gdom002
Add a whole core:
# ldm set-core 1 gdom002

# ldm add-mem 32G gdom002
# ldm add-vnet vnet0 primary-vsw0 gdom002
# ldm add-vdsdev /dev/zvol/dsk/vpool/vdisks/gdom002/gdom002-disk0 gdom002_disk0@primary-vds0
# ldm add-vdisk disk0 gdom002_disk0@primary-vds0 gdom002
# zfs snapshot vpool/vdisks/gdom002/gdom002-disk0@version_1
# ldm set-var auto-boot\?=true gdom002
# ldm set-var boot-device=disk0 gdom002
# ldm bind-domain gdom002
# ldm list-domain gdom002
# ldm start gdom002

Friday, October 11, 2019

RedHat 7 Rejoin AD Domain

RedHat 7

# Unjoin AD Directory
realm leave org.ad.goweekend.ca

# Join AD Directory
realm join org.ad.goweekend.ca -U <username>

systemctl restart smb
systemctl restart nmb
systemctl restart sssd
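
To confirm the join and that domain lookups resolve (the user name below is a placeholder):

# realm list
# id someuser@org.ad.goweekend.ca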

Solaris PF Flows


Packet Processing


[Image: flow of a packet through the OpenBSD Packet Filter (PF).]


Thursday, October 3, 2019

Check URL Status with Curl

#!/bin/bash

if [ $# -lt 1 ]; then
  echo "%%TCS-E-PARAMETER, url parameter is missing."
  exit $LINENO
fi

tgtUrl=$1

urlRexp="https{0,1}://.{1}"

if [[ ! $tgtUrl =~ $urlRexp ]]; then
  echo "%%TCS-E-PARAMETER, target url must start with https://"
  exit $LINENO
fi

pid=$$


checkStatus=`curl -o /dev/null --silent --head --write-out '%{http_code}\n' $tgtUrl`

case $checkStatus in
        2*)
                echo "$checkStatus OK: $tgtUrl"
                exit 0
                ;;
        3*)
                echo "$checkStatus Redirection Happened to reach: $tgtUrl"
                exit 0
                ;;
        4*)
                echo "$checkStatus Client error to reach: $tgtUrl"
                exit $LINENO
                ;;
        5*)
                echo "$checkStatus Server error to reach: $tgtUrl"
                exit $LINENO
                ;;
        *)
                echo "$checkStatus No valid HTTP response code to: $tgtUrl"
                exit $LINENO
                ;;
esac
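
A quick usage sketch (the script name checkurl.sh is hypothetical):

$ ./checkurl.sh https://www.google.com
200 OK: https://www.google.com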

Friday, September 27, 2019

Migrate SVN to GIT

Reference: https://www.atlassian.com/git/tutorials/svn-to-git-prepping-your-team-migration

java -jar ../svn-migration-scripts.jar authors http://svn.goweekend.ca/ithelpdesk/  > authors.txt

git svn clone --stdlayout --authors-file=/svn/svn2git/migration/authors.txt http://svn.goweekend.ca/myrepo/ myrepo

Wednesday, September 25, 2019

M4000: Firmware Update

XSCF> version -c xcp -v
XSCF> getflashimage http://firmwares.goweekend.ca/m4000/FFXCP1123.tar.gz
* getaddrinfo(3) failed for vsun07.csd.toronto.ca:80
* Closing connection #0
Error: could not resolve host.

In the above case, name resolution failed on the XSCF, so use the IP address instead.

XSCF> flashupdate -c check -m xcp -s 1123
XSCF> flashupdate -c update -m xcp -s 1123

Tuesday, September 24, 2019

Solaris 10: ISO mount: Not a Multiple of 512

# mount -F hsfs -o ro `lofiadm -a /var/tmp/sol-10-u11-ga-sparc-dvd.iso` /mnt
lofiadm: size of /var/tmp/sol-10-u11-ga-sparc-dvd.iso is not a multiple of 512
mount: Mount point cannot be determined

Solution: pad the image to a multiple of 512 bytes with dd (conv=sync pads the final output block):

# dd if=/var/tmp/sol-10-u11-ga-sparc-dvd.iso of=sol-10-512.iso obs=512 conv=sync
1264889+1 records in
1264890+0 records out


# mount -F hsfs -o ro `lofiadm -a /var/tmp/sol-10-512.iso` /mnt
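
When finished, unmount and detach the lofi device (the device path below is an assumption; check the lofiadm output):

# umount /mnt
# lofiadm -d /dev/lofi/1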

Friday, September 20, 2019

Maximize the Performance of Your Virtual Network

Configuring Your Domains to Maximize the Performance of Your Virtual Network

In previous versions of Oracle VM Server for SPARC and the Oracle Solaris OS, you could improve network performance by configuring jumbo frames. This configuration is no longer required; unless it is needed for another reason, using the standard MTU value of 1500 for your service and guest domains is best.
To achieve improved networking performance, set the extended-mapin-space property to on for the service domain and the guest domains; this is the default setting for the Oracle VM Server for SPARC 3.1 software and supported system firmware.
primary# ldm set-domain extended-mapin-space=on domain-name
To check the extended-mapin-space property value, run the following command:
primary# ldm ls -l domain-name |grep extended-mapin
extended-mapin-space=on


Note - A change to the extended-mapin-space property value triggers a delayed reconfiguration on the primary domain. This situation requires a primary domain reboot. You also must first stop the guest domains before you change this property value.

Solaris 10: Apply Patch 10_Recomended

Following patch failed to apply :
 126546-10

Aborting due to failure while applying patch 126546-10.

Application of this patch should have succeeded - this failure is unexpected.
Please assess cause of failure and verify system integrity before proceeding.

Install log files written :
  /var/sadm/install_data/s10s_rec_patchset_short_2019.09.20_19.55.57.log
  /var/sadm/install_data/s10s_rec_patchset_verbose_2019.09.20_19.55.57.log
  /var/sadm/install_data/s10s_rec_patchset_failed_2019.09.20_19.55.57.log
  /var/sadm/install_data/_patchadd_2019.09.20_19.55.57.log
  /var/sadm/install_data/_patchadd_subproc_2019.09.20_19.55.57.log


The interim (IDR) patch present on the system was IDR151577-01. Remove IDR patches before re-applying the patch set:

# patchrm IDR151577-01

# patchrm IDR152812-02
Verify that the patches are removed:

147194-03  Obsoleted by: 148023-03 SunOS 5.10: bmc patch
148023-03 Obsoleted by: 148023-04 SunOS 5.10: bmc patch


# patchadd -p | grep IDR151577-01

# patchadd -p | grep 147194-03

147194-03  Obsoleted by: 148023-03 SunOS 5.10: bmc patch


Following patch failed to apply :
 147793-23

Aborting due to failure while applying patch 147793-23.

Application of this patch should have succeeded - this failure is unexpected.
Please assess cause of failure and verify system integrity before proceeding.

Install log files written :
  /var/sadm/install_data/s10s_rec_patchset_short_2019.09.20_22.44.30.log
  /var/sadm/install_data/s10s_rec_patchset_verbose_2019.09.20_22.44.30.log
  /var/sadm/install_data/s10s_rec_patchset_failed_2019.09.20_22.44.30.log
  /var/sadm/install_data/_patchadd_2019.09.20_22.44.30.log
  /var/sadm/install_data/_patchadd_subproc_2019.09.20_22.44.30.log

Thursday, August 8, 2019

Encrypt Communication between Mongodb Server and Clients


Step 1. Generate Root Certificate

#!/bin/bash

mkdir -p server client
caFile=ca.pem
caKeyDB=privateKey.pem
serverConfig=server-self-signed-cert.req
clientConfig=client-self-signed-cert.req
serverCSR=server/server.req
clientCSR=client/client.req
encryptedServerKeyFile=server/encrypted-server.key
encryptedClientKeyFile=client/encrypted-client.key
serverCert=server/server.crt
clientCert=client/client.crt
serverNonEncryptedKey=server/server.key
clientNonEncryptedKey=client/client.key
mongoServerKeys=server/mongodbServer.pem
mongoClientKeys=client/mongodbClient.pem

# Generate CA Key Database and CA File, i.e. privkey.pem & ca.pem

openssl req -out $caFile -keyout $caKeyDB -new -x509 -days 3650  -subj  "/C=CA/ST=ON/L=TORONTO/O=GOWEEKEND/CN=root/emailAddress=sysadmin@goweekend.ca"


#Generate Server Key DB
openssl genrsa -out $serverNonEncryptedKey 2048

#Generate Client Key DB
openssl genrsa -out $clientNonEncryptedKey 2048

# Generate Server CSR
openssl req -key $serverNonEncryptedKey -new -out $serverCSR  -subj  "/C=CA/ST=ON/L=TORONTO/O=GOWEEKEND/CN=127.0.0.1/emailAddress=sysadmin@goweekend.ca"

# Generate Client CSR
openssl req -key $clientNonEncryptedKey -new -out $clientCSR  -subj  "/C=CA/ST=ON/L=TORONTO/O=GOWEEKEND/CN=127.0.0.1/emailAddress=sysadmin@goweekend.ca"

# Generate Server Certificate
# -CAcreateserial creates the serial file on first use
openssl x509 -req -in $serverCSR -CA $caFile -CAkey $caKeyDB -CAcreateserial -CAserial file.srl -out $serverCert -days 3650

# Generate Client Certificate
openssl x509 -req -in $clientCSR -CA $caFile -CAkey $caKeyDB -CAcreateserial -CAserial file.srl -out $clientCert -days 3650

# Merge Private/Public Keys
cat $serverNonEncryptedKey $serverCert > $mongoServerKeys
cat $clientNonEncryptedKey $clientCert > $mongoClientKeys

# Verify the generated certificates
openssl verify -CAfile $caFile $mongoServerKeys
openssl verify -CAfile $caFile $mongoClientKeys

Step 2: Configure Mongodb
# cat  /etc/mongod.conf
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log
storage:
  dbPath: /data/mongodb
  journal:
    enabled: true
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongodb/mongod.pid  # location of pidfile
  timeZoneInfo: /usr/share/zoneinfo
net:
  port: 27017
  bindIp: 127.0.0.1  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.
  ssl:
      ###certificateSelector: <string>
      mode: requireSSL
      PEMKeyFile: /etc/mongodb/ssl/mongodbServer.pem
      ###PEMKeyPassword: csis2006
      CAFile: /etc/mongodb/ssl/ca.pem

Step 3: Start up MongoDB
$ cat x509MongoStart.sh
#!/bin/bash

cd /data/mongodb
mongod -f /etc/mongod.conf &

Step 4: Connect to MongoDB
$ cat mongoClient.sh
unset HTTP_PROXY
unset HTTPS_PROXY
mongo --ssl --sslCAFile /etc/mongodb/ssl/ca.pem --sslPEMKeyFile /etc/mongodb/ssl/mongodbClient.pem
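
To sanity-check the TLS listener outside the mongo shell (a sketch; the combined PEM holds both the key and the certificate):

$ openssl s_client -connect 127.0.0.1:27017 -cert /etc/mongodb/ssl/mongodbClient.pem -key /etc/mongodb/ssl/mongodbClient.pem -CAfile /etc/mongodb/ssl/ca.pem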

Wednesday, July 17, 2019

Encrypt and decrypt file between openssl 1.0.1 and 1.1.1

Encryption is done on sun10 with OpenSSL 1.0.1p; decryption is done on Linux with OpenSSL 1.1.1c.
Generate Random Pass File
$ openssl rand -base64 32 > ~/.ssh/passfile
Deliver the pass file to the client that will decrypt the encrypted file, and change its permissions to 400
$ chmod 400 passfile
Encrypt File
$ openssl enc -aes-256-cbc -pass file:encryption/passfile -salt -in test.txt -out test.txt.aes

Decrypt File
$ openssl enc -d -aes-256-cbc -pass file:encryption/passfile -in test.txt.aes -out test.txt -md md5

The -md md5 option is needed because OpenSSL 1.1.x changed the default key-derivation digest from MD5 to SHA-256, so 1.1.1 must be told to derive the key the same way 1.0.1 did.
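
An alternative sketch that avoids the mismatch by pinning the digest on both sides (the -md option exists in both releases):

$ openssl enc -aes-256-cbc -md sha256 -pass file:encryption/passfile -salt -in test.txt -out test.txt.aes
$ openssl enc -d -aes-256-cbc -md sha256 -pass file:encryption/passfile -in test.txt.aes -out test.txt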

Tuesday, July 9, 2019

Match first occurrence

https://superuser.com/questions/266732/matching-only-the-first-occurrence-in-a-line-with-regex
^([^,]+),
That means
^        starts with
[^,]     anything but a comma
+        repeated one or more times (use * (means zero or more) if the first field can be empty)
([^,]+)  remember that part
,        followed by a comma
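
For example, extracting the first comma-separated field with that pattern (a GNU/BSD sed sketch):

$ echo 'one,two,three' | sed -E 's/^([^,]+),.*/\1/'
one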

Wednesday, July 3, 2019

Plugins Used in Jenkins

ace-editor.jpi
active-directory.jpi
analysis-core.jpi
antisamy-markup-formatter.jpi
ant.jpi
apache-httpcomponents-client-4-api.jpi
authentication-tokens.jpi
bitbucket-build-status-notifier.jpi.tmp
bitbucket.jpi
bitbucket-pullrequest-builder.jpi.tmp
bouncycastle-api.jpi
branch-api.jpi
build-timeout.jpi
clearcase.jpi
cloudbees-bitbucket-branch-source.jpi
cloudbees-folder.jpi
command-launcher.jpi
conditional-buildstep.jpi
credentials-binding.jpi
credentials.jpi
dependency-check-jenkins-plugin.jpi
dependency-track.jpi
display-url-api.jpi
docker-commons.jpi
docker-workflow.jpi
durable-task.jpi
email-ext.jpi
external-monitor-job.jpi
flyway-runner.jpi
generic-webhook-trigger.jpi
git-client.jpi
github-api.jpi
github-branch-source.jpi
github.jpi
git.jpi
git-server.jpi
gradle.jpi
handlebars.jpi
handy-uri-templates-2-api.jpi
htmlpublisher.jpi
jackson2-api.jpi
javadoc.jpi
jdk-tool.jpi
job-dsl.jpi
jquery-detached.jpi
jquery.jpi
jquery-ui.jpi
jsch.jpi
junit.jpi
ldap.jpi
lockable-resources.jpi
mailer.jpi
mapdb-api.jpi
matrix-auth.jpi
matrix-project.jpi
maven-plugin.jpi
mercurial.jpi
momentjs.jpi
monitoring.jpi
multiple-scms.jpi.tmp
nexus-jenkins-plugin.jpi
pam-auth.jpi
parameterized-trigger.jpi
pipeline-build-step.jpi
pipeline-github-lib.jpi
pipeline-graph-analysis.jpi
pipeline-input-step.jpi
pipeline-milestone-step.jpi
pipeline-model-api.jpi
pipeline-model-declarative-agent.jpi
pipeline-model-definition.jpi
pipeline-model-extensions.jpi
pipeline-rest-api.jpi
pipeline-stage-step.jpi
pipeline-stage-tags-metadata.jpi
pipeline-stage-view.jpi
plain-credentials.jpi
postbuild-task.jpi
publish-over.jpi
publish-over-ssh.jpi
resource-disposer.jpi
role-strategy.jpi
run-condition.jpi
scm-api.jpi
script-security.jpi
sonar.jpi
ssh-credentials.jpi
ssh-slaves.jpi
stashNotifier.jpi
structs.jpi
subversion.jpi
text-finder.jpi
thucydides.jpi
timestamper.jpi
token-macro.jpi
windows-slaves.jpi
workflow-aggregator.jpi
workflow-api.jpi
workflow-basic-steps.jpi
workflow-cps-global-lib.jpi
workflow-cps.jpi
workflow-durable-task-step.jpi
workflow-job.jpi
workflow-multibranch.jpi
workflow-scm-step.jpi
workflow-step-api.jpi
workflow-support.jpi
ws-cleanup.jpi

Jenkins Startup with Parameters

# grep -v ^# /etc/sysconfig/jenkins
JENKINS_HOME="/var/lib/jenkins"


JENKINS_JAVA_CMD="/var/lib/jenkins/jre/bin/java"

JENKINS_USER="jenkins"


JENKINS_JAVA_OPTIONS="-Djava.awt.headless=true -Djavax.net.ssl.trustStore=/var/lib/jenkins/.keystore/cacerts -Djavax.net.ssl.trustStorePassword=changeit"

JENKINS_PORT="8080"

JENKINS_LISTEN_ADDRESS=""

JENKINS_HTTPS_PORT=""

JENKINS_HTTPS_KEYSTORE=""

JENKINS_HTTPS_KEYSTORE_PASSWORD=""

JENKINS_HTTPS_LISTEN_ADDRESS=""


JENKINS_DEBUG_LEVEL="5"

JENKINS_ENABLE_ACCESS_LOG="no"

JENKINS_HANDLER_MAX="100"

JENKINS_HANDLER_IDLE="20"


JENKINS_ARGS="--httpsKeyStore=/var/lib/jenkins/jre/lib/security/castore --httpsKeyStorePassword=changeit "

Friday, June 14, 2019

Docker Journal

curl -L https://github.com/docker/compose/releases/download/1.25.0-rc1/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

https://github.com/nanoninja/docker-nginx-php-mongo


https://computingforgeeks.com/how-to-install-docker-on-fedora/

ERROR: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 111.115.120.19:53: no such host
Setup proxy for docker
https://docs.docker.com/config/daemon/systemd/#httphttps-proxy

docker stop $(docker ps -aq)
docker rm $(docker ps -aq)

docker run -d -p 8080:80 -v ${PWD}/web:/usr/share/nginx/html --name artnginx nginx

docker exec -it artnginx bash

Monday, June 10, 2019

inotify in Linux

https://www.linuxjournal.com/content/linux-filesystem-events-inotify

Sunday, June 9, 2019

Install Docker

curl -sSL https://get.docker.com/ | sh

You can also install in a more manual fashion by following the distribution-specific instructions on the Docker Store, such as the ones for Ubuntu.

Thursday, May 30, 2019

Install NPM, GULP on Linux

yum install npm

NPM

npm install -g express
npm install -g gulp gulp-cli typescript gulp-typescript ts-node
--------------------------------------------------------
For Enterprise Linux - Oracle Linux
Get latest epel repo.
wget http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
yum install nodejs npm
Set the proxy before running the installs above:
npm config set strict-ssl false
npm config set registry "http://registry.npmjs.org/"
npm config set proxy http://proxy.ibm.coim:8080
npm config set https-proxy http://proxy.ibm.coim:8080

Use n to manage node version
# node -v
v10.15.3
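
A minimal sketch of switching versions with n (assuming it was installed globally via npm):

# npm install -g n
# n 8.11.1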

Logout and login again:
# node -v
v8.11.1

Tuesday, May 7, 2019

Solaris 11 Encrypt File

pktool genkey keystore=file outkey=encryptKey keylen=128
pktool list keystore=file objtype=key infile=encryptKey
encrypt -a aes -k ~/.ssh/encryptKey -i test.txt -o test.txt.aes
scp encryptKey testuser@sun09:/.ssh
decrypt -a aes -k ~/.ssh/encryptKey -i /bkup/encrypted/test.txt.aes -o /var/tmp/test.txt

Monday, May 6, 2019

Package list required to run GUI on Oracle/RedHat Linux 7.5 +

libappindicator-gtk3
libreport-gtk
gtk3
webkitgtk4
gtk-update-icon-cache
usermode-gtk
libnm-gtk
ibus-gtk2
libcanberra-gtk2
gtkmm24
caribou-gtk2-module
ibus-gtk3
webkitgtk4-jsc
webkitgtk3
gtkspell3
libindicator-gtk3
libchamplain-gtk
avahi-ui-gtk3
libdbusmenu-gtk3
gtk2-immodule-xim
webkitgtk4-plugin-process-gtk2
spice-gtk3
gtk2
authconfig-gtk
xdg-user-dirs-gtk
pygtk2
pygtk2-libglade
gtk3-immodule-xim
libpeas-gtk
gtkmm30
gtksourceview3
colord-gtk
pinentry-gtk
caribou-gtk3-module
PackageKit-gtk3-module
clutter-gtk
libcanberra-gtk3
adwaita-gtk2-theme
gtk-vnc2

Monday, April 22, 2019

Solaris 11 SSH Login Slowness

Reference: https://thomas.gouverneur.name/2012/04/20120412ssh-connection-to-solaris-11-is-sometimes-slow/


Users complained of slowness after typing "ssh <uid>@host".

Disable GSSAPI authentication and hostname/DNS lookups in /etc/ssh/sshd_config:

LookupClientHostnames no 
VerifyReverseMapping no 
GSSAPIAuthentication no
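
Then restart the SSH service so the changes take effect (Solaris 11 SMF):

# svcadm restart ssh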

Thursday, March 28, 2019

Cannot Startup RAC after Patch


SYMPTOMS

After patching manually on the 12.2 Grid Infrastructure home, the rootcrs.sh -postpatch fails with:
2017-11-19 16:29:27: Oracle CRS stack has been shut down
2017-11-19 16:29:27: The stack was already down before stopping it
2017-11-19 16:29:27: Starting CRS without resources...
2017-11-19 16:29:27: OHASD needs to be up for disabling CRS resource
2017-11-19 16:29:27: Executing cmd: /u01/app/12.2.0.1/grid/bin/crsctl start crs -noautostart
2017-11-19 16:29:27: Command output:
> CRS-6706: Oracle Clusterware Release patch level ('748994161') does not match Software patch level ('0'). Oracle Clusterware cannot be started.
> CRS-4000: Command Start failed, or completed with errors.

CHANGES

 In earlier Grid Infrastructure releases, the following options were available for manual patching:
A.  In 12.1.0.x, these two commands are used with opatchauto (opatchauto will run these commands) or with manual patching with opatch or opatchauto to unlock and lock the home for patching. The -prepatch requires that the CRS be
running on both nodes. The -postpatch requires that the -prepatch was run successfully.
rootcrs.sh -prepatch
rootcrs.sh -postpatch

B.  These two commands are from previous releases of GI <12.1, although they could still be used in 12.1. The -unlock command does not require CRS be running. The -patch command does not require that unlock
was run successfully. So these two commands could work around issues with patching. This is no longer the same case in 12.2 as the -patch option no longer exists.
rootcrs.sh -unlock
rootcrs.sh -patch
In 12.2, users must use rootcrs.sh -prepatch and rootcrs.sh -postpatch for manual patching. 

CAUSE

This issue was caused by rootcrs.sh -prepatch not run successfully before patching.  The user ran rootcrs.sh -unlock because rootcrs.sh -prepatch failed, and then applied the patch manually. 
 

SOLUTION

 Please use the following steps to complete the patching:
1.  Run the following command as the root user to complete the patching set up behind the scenes:
#GI_HOME/bin:>  ./clscfg -localpatch

2.  Run the following command as the root user to lock the GI home:
#GI_HOME/crs/install:>  ./rootcrs.sh -lock

3.  Run the following command as the root user to start the GI:
#GI_HOME/bin:>  ./crsctl start crs

Thursday, March 14, 2019

Apache Software Load Balancer

Configuration Sample
<VirtualHost *:80> 
       ProxyRequests off

       ServerName cluster.goweekend.ca

       <Proxy balancer://cluster>
               # WebHead1
               BalancerMember http://48.31.108.98

               # WebHead2
               BalancerMember http://48.31.108.99


               # Security: technically we aren't blocking
               # anyone, but this is the place to make
               # those changes.
               Require all granted
               # In this example all requests are allowed.

               # Load Balancer Settings
               # We will be configuring a simple Round
               # Robin style load balancer.  This means
               # that all webheads take an equal share
               # of the load.
               ProxySet lbmethod=byrequests

       </Proxy>

       # balancer-manager
       # This tool is built into the mod_proxy_balancer
       # module and will allow you to do some simple
       # modifications to the balanced group via a gui
       # web interface.
       <Location /balancer-manager>
               SetHandler balancer-manager

               # I recommend locking this one down to your
               # your office
               Require host wkstation.goweekend.ca

       </Location>

       # Point of Balance
       # This setting will allow to explicitly name the
       # the location in the site that we want to be
       # balanced, in this example we will balance "/"
       # or everything in the site.
       ProxyPass /balancer-manager !
       ProxyPass / balancer://cluster/

</VirtualHost>
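
This virtual host assumes the proxy modules are loaded; on Apache 2.4 that typically means the following (module paths vary by distribution):

LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule lbmethod_byrequests_module modules/mod_lbmethod_byrequests.so
LoadModule slotmem_shm_module modules/mod_slotmem_shm.so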

Tuesday, March 5, 2019

Linux Root User Name Accidentally Changed in /etc/passwd

1. Use a Linux bootable USB to boot into rescue mode. If the LVM volume cannot be mounted, activate the volume groups first:

# vgchange -a y

2. If your user has sudo privileges, you can instead become the renamed root user and fix /etc/passwd:

# sudo -i -u <wrong user name>

   

Wednesday, February 27, 2019

Project configuration for Oracle 12c on Solaris 11.3

Use projadd to create the project if it does not exist yet, or projmod to modify an existing one:

# projmod -s -K "process.max-sem-nsems=(privileged,300,deny);process.max-address-space=(privileged,16G,deny);process.max-file-descriptor=(basic,65535,deny);process.max-stack-size=(basic,15728640,deny);project.max-shm-memory=(priv,16G,deny)" user.oracle
# projadd -s -K "process.max-sem-nsems=(privileged,300,deny);process.max-address-space=(privileged,16G,deny);process.max-file-descriptor=(basic,65535,deny);process.max-stack-size=(basic,15728640,deny);project.max-shm-memory=(priv,16G,deny)" user.oracle
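
To verify the resulting project attributes:

# projects -l user.oracle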

Wednesday, January 30, 2019

Understand WebSphere GC


What you are seeing is the standard 'sawtooth' pattern of GC. The heap is made up of the nursery and tenured areas. In the nursery, the GC performs what is called a scavenge operation, a small GC only within the nursery area, which likely accounts for the smaller jumps you observed in the graph. When you click Perform GC it generates a 'full GC', which triggers both a scavenge operation on the nursery area and a mark-sweep-compact on the tenured area, which is why there was a sudden drop in the graph at that time.

GC would have performed that full GC operation as and when it was required; up until then it had not been required, and it only happened because you forced a full GC.

It is important to understand that the heap is there to be used, so seeing it well used is a good thing, as long as performance remains acceptable.

Wednesday, January 23, 2019

Oracle RAC Trouble Shooting

$GRID_HOME/bin/crsctl query crs softwarepatch -l
$GRID_HOME/bin/crsctl query crs releasepatch
$GRID_HOME/bin/crsctl query crs releasepatch
$GRID_HOME/bin/crsctl check crs
$GRID_HOME/bin/crsctl stop crs -f
$GRID_HOME/bin/crsctl check crs
$GRID_HOME/bin/crsctl stat res ora.drivers.acfs -init
$GRID_HOME/bin/crsctl stat res ora.drivers.acfs -init
$GRID_HOME/bin/crsctl enable crs
$GRID_HOME/bin/crsctl enable crs .... success.
$GRID_HOME/bin/crsctl start crs -wait
$GRID_HOME/bin/crsctl check crs
$GRID_HOME/bin/crsctl stop rollingpatch
$GRID_HOME/bin/crsctl query crs activeversion -f
$GRID_HOME/bin/crsctl query crs activeversion -f
$GRID_HOME/bin/crsctl stat res ora.drivers.acfs -init
$GRID_HOME/bin/crsctl stat res ora.drivers.acfs -init
$GRID_HOME/bin/crsctl check crs
$GRID_HOME/bin/crsctl query crs releasepatch
$GRID_HOME/bin/crsctl query crs activeversion -f


crsctl status res -t 
crsctl status res -t -init 
<GI_HOME>/bin/kfod op=patches 
<GI_HOME>/bin/kfod op=patchlvl 
crsctl query crs activeversion -f 
crsctl query crs softwareversion 
crsctl query crs releaseversion 
<GI_HOME>/OPatch/opatch lsinventory detail -oh $ORACLE_HOME 
<DB_HOME>/OPatch/opatch lsinventory detail -oh $ORACLE_HOME 

CRS-6706: Oracle Clusterware Release patch level ('2104789788') does not match Software patch level ('448950863'). Oracle Clusterware cannot be started.

CRS-6706: Oracle Clusterware Release patch level ('nnn') does not match Software patch level ('mmm') (Doc ID 1639285.1)

For the 12.1 version:
Execute "<GI_HOME>/crs/install/rootcrs.sh -patch" as the root user on the problematic node and the patch level should be corrected.
For 12.2:
Execute "<GI_HOME>/crs/install/rootcrs.pl -prepatch" and "<GI_HOME>/crs/install/rootcrs.pl -postpatch" as the root user on the problematic node and the patch level should be corrected.

If still not working, you can try below solution:

Patching 12.2.0.1 Grid Infrastructure gives error CRS-6706: Oracle Clusterware Release Patch Level ('748994161') Does Not Match Software Patch Level (Doc ID 2348013.1)

CAUSE

This issue was caused by rootcrs.sh -prepatch not run successfully before patching.  The user ran rootcrs.sh -unlock because rootcrs.sh -prepatch failed, and then applied the patch manually. 
 

SOLUTION

 Please use the following steps to complete the patching:
1.  Run the following command as the root user to complete the patching set up behind the scenes:
#GI_HOME/bin:>  ./clscfg -localpatch

2.  Run the following command as the root user to lock the GI home:
#GI_HOME/crs/install:>  ./rootcrs.sh -lock

3.  Run the following command as the root user to start the GI:
#GI_HOME/bin:>  ./crsctl start crs

Thursday, January 10, 2019

Recreate Oracle Inventory

 ls -l  $ORACLE_HOME/oraInst.loc /var/opt/oracle/oraInst.loc

OraDB12Home1

./runInstaller -silent -attachHome ORACLE_HOME="/app/oracle/product/12.2.0.1" ORACLE_HOME_NAME="OraDB12Home1"

$ find . -name 'installActions*'
./cfgtoollogs/oui/installActions2017-08-30_02-21-33PM.log
$ grep -i oracle_home_name ./cfgtoollogs/oui/installActions2017-08-30_02-21-33PM.log
INFO: setting ORACLE_HOME_NAME=OraDB12Home1. A default value was calculated as per oraparam.ini
INFO: Computed ORACLE_HOME_NAME = OraDB12Home1
INFO: Setting the value  for ORACLE_HOME_NAME variable
INFO: Setting variable 'ORACLE_HOME_NAME' to 'OraDB12Home1'. Received the value from the command line.
INFO: Setting the 'OracleHomeName ( ORACLE_HOME_NAME )' property to 'OraDB12Home1'. Received the value from the command line.
INFO: ORACLE_HOME_NAME is not settable, hence not setting the value
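
After attaching, the home can be cross-checked against the central inventory (a quick sanity check):

$ $ORACLE_HOME/OPatch/opatch lsinventory -oh $ORACLE_HOME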