The tcp_orphan_retries setting controls how many retransmission attempts are made before an orphaned connection (a socket no longer attached to any process) is abandoned and its resources released. The default of 0 tells the kernel to use its own internal default; to kill orphaned sockets quickly, set tcp_orphan_retries to 1.
# cd /proc/sys/net/ipv4
# echo 1 > tcp_orphan_retries
# netstat -p |grep FIN_WAIT
Once all of the orphaned sockets are gone, restore tcp_orphan_retries:
# echo 0 > tcp_orphan_retries
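Putting the procedure together as a small watch loop (a sketch; the 10-second poll interval is arbitrary):
cd /proc/sys/net/ipv4
echo 1 > tcp_orphan_retries
# Poll until no sockets remain in FIN_WAIT, then restore the default
while netstat -an | grep -q FIN_WAIT; do
    netstat -an | grep -c FIN_WAIT   # how many are left
    sleep 10
done
echo 0 > tcp_orphan_retries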
Thursday, August 30, 2012
Wednesday, August 29, 2012
MQSeries 7: Journal
http://publib.boulder.ibm.com/infocenter/ieduasst/v1r1m0/index.jsp?topic=/com.ibm.iea.wmq_v7/wmq/7.0.1/Details/iea_701_120_multi_instancei/player.html
http://publib.boulder.ibm.com/infocenter/wmqv7/v7r0/index.jsp?topic=%2Fcom.ibm.mq.amqzag.doc%2Ffa70162_.htm
http://publib.boulder.ibm.com/infocenter/wmqv7/v7r0/topic/com.ibm.mq.amqzag.doc/fa70154_.htm
/var/mqm/mqs.ini
AllQueueManagers:
#********************************************************************#
#* The path to the qmgrs directory, below which queue manager data *#
#* is stored *#
#********************************************************************#
DefaultPrefix=/var/mqm
LogDefaults:
LogDefaultPath=/var/mqm/log
QueueManager:
Name=QMANAGER1
Prefix=/var/mqm
Directory=QMANAGER1
DataPath=/var/mqm/share/qmgrs/QMANAGER1
$ pwd
/var/mqm/share/qmgrs/QMANAGER1
$ cat qm.ini
#*******************************************************************#
#* Module Name: qm.ini *#
#* Type : WebSphere MQ queue manager configuration file *#
# Function : Define the configuration of a single queue manager *#
#* *#
#*******************************************************************#
#* Notes : *#
#* 1) This file defines the configuration of the queue manager *#
#* *#
#*******************************************************************#
ExitPath:
ExitsDefaultPath=/var/mqm/exits/
ExitsDefaultPath64=/var/mqm/exits64/
#* *#
#* *#
Log:
LogPrimaryFiles=31
LogSecondaryFiles=21
LogFilePages=32000
LogType=LINEAR
LogBufferPages=0
LogPath=/var/mqm/share/log/QMANAGER1/
LogWriteIntegrity=TripleWrite
Service:
Name=AuthorizationService
EntryPoints=13
ServiceComponent:
Service=AuthorizationService
Name=MQSeries.UNIX.auth.service
Module=/opt/mqm/lib64/amqzfu
ComponentDataSize=0
CHANNELS:
MaxChannels = 1000
TuningParameters:
FileLockHeartBeatLen=30
$
Logs
$ pwd
/var/mqm/share/qmgrs/QMANAGER1/errors
$ ls
AMQERR01.LOG AMQERR02.LOG AMQERR03.LOG
$ pwd
/var/mqm/share/qmgrs/QMANAGER1
$ ls -l
total 38
drwxrwsr-x+ 2 mqm mqm 96 Sep 9 2010 @ipcc
-rw-rw-rw- 1 mqm mqm 34 Aug 28 10:02 active
-rw-r----- 1 mqm mqm 56 Aug 28 10:02 amqalchk.fil
drwxrws---+ 2 mqm mqm 96 Sep 9 2010 authinfo
drwxrws---+ 2 mqm mqm 1024 Jul 30 12:48 channel
drwxrws---+ 2 mqm mqm 1024 Jul 30 13:36 clntconn
drwxrws---+ 2 mqm mqm 96 Aug 28 09:57 errors
drwxrws---+ 2 mqm mqm 96 Feb 6 2012 listener
-rw-rw-rw- 1 mqm mqm 34 Aug 28 10:02 master
drwxrws---+ 2 mqm mqm 1024 Sep 9 2010 namelist
drwxrwsr-x+ 2 mqm mqm 96 Sep 9 2010 plugcomp
drwxrws---+ 2 mqm mqm 96 Sep 9 2010 procdef
-rw-r----- 1 mqm mqm 1390 Aug 28 09:46 qm.ini
-rw-r----- 1 mqm mqm 1340 Aug 28 09:41 qm.ini.20120828
drwxrws---+ 2 mqm mqm 96 Sep 9 2010 qmanager
-rw-r----- 1 mqm mqm 693 Aug 28 10:02 qmstatus.ini
drwxrws---+ 93 mqm mqm 4096 Aug 14 16:26 queues
drwxrwx---+ 2 mqm mqm 1024 Jul 31 08:39 scratch
drwxrws---+ 2 mqm mqm 96 Sep 9 2010 services
drwxrwsr-x+ 2 mqm mqm 1024 Jul 18 15:46 ssl
-rw-rw-rw- 1 mqm mqm 34 Aug 28 10:03 standby
drwxrwsr-x+ 2 mqm mqm 96 Aug 28 10:03 startprm
drwxrws---+ 2 mqm mqm 1024 Sep 9 2010 topic
$ cat qmstatus.ini
AuthorityData:
Creator=mqm
QueueManagerStatus:
CurrentStatus=Running
PermitStandby=Yes
PermitFailover=Yes
PlatformSignature=8195
PlatformString=SunOS 5.10
ManagedSets:
QMANAGER1/@ipcc.IPCCPSet=41097
QMANAGER1/@ipcc.IPCCP64S=914176
QMANAGER1/@qmgr.ZDMPipe=7677
QMANAGER1/@qmgr.OAMshmem=33134
QMANAGER1/@qmgr.OAMPipe=11704
QMANAGER1/@qmgr.KernelSet=5591856
QMANAGER1/@qmgr.TopicSet=663808
QMANAGER1/@qmgr.SelectorSet=560
QMANAGER1/@qmgr.ObjectCatalogue=1979280
QMANAGER1/@qmgr.QueueSessionSegment=3526888
QMANAGER1/@qmgr.TransactionSessionSegment=194004
GhostPools:
SYSTEM.DEFAULT.MODEL.QUEUE=3
SYSTEM.MQEXPLORER.REPLY.MODEL=3
$ dspmq -x -m QMANAGER1
QMNAME(QMANAGER1) STATUS(Running)
INSTANCE(cipgwuatmq01) MODE(Active)
INSTANCE(cipgwuatmq02) MODE(Standby)
$ endmqm
root@artmqserver1:/root/MQ/bin:# cat restart-QMGR.sh
#!/bin/sh
# Restart a queue manager and its listener.
if [ $# -ne 2 ]; then
    echo "Usage: $0 <QMGR name> <port>"
    exit 1
fi
QMGR=$1
PORT=$2
endmqm -p "$QMGR"      # preemptive shutdown of the queue manager
endmqlsr -m "$QMGR"    # end its listener
strmqm "$QMGR"
nohup runmqlsr -t tcp -p "${PORT}" -m "${QMGR}" >/dev/null 2>&1 &
amqiclen -x -m QMGR
runmqsc QMGR
--> display channel(*)
Type end to quit runmqsc.
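runmqsc also reads MQSC from stdin, so the same check can be scripted rather than typed interactively (a sketch, using the QMGR placeholder from above):
echo "DISPLAY CHANNEL(*)" | runmqsc QMGR
printf "DISPLAY QMGR\nDISPLAY CHSTATUS(*)\n" | runmqsc QMGR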
crtmqm -lc -lp 5 -ls 3 -u DEAD.LETTER.QUEUE -ld /var/mqm/share/log -md /var/mqm/share/qmgrs WRAP.DEV.QMGR01
addmqinf -s QueueManager -v Name=WRAP.DEV.QMGR01 -v Directory=WRAP\!DEV\!QMGR01 -v Prefix=/var/mqm -v DataPath=/var/mqm/share/qmgrs/WRAP\!DEV\!QMGR01
strmqm -x WRAP.DEV.QMGR01
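The same strmqm -x is then run on the second node, which joins as the standby; a sketch of the verification (assuming the second node sees the same /var/mqm/share NFS mount):
strmqm -x WRAP.DEV.QMGR01      # on the second node; becomes the standby instance
dspmq -x -m WRAP.DEV.QMGR01    # expect one INSTANCE in Active mode, one in Standby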
cat /usr/bin/runmqm
cat /var/run/.run_mq
gsk7cmd -cert -list -db key.kdb -pw passw0rd
gsk7cmd -cert -add -label WSMQM -file /tmp/cert.pem -db key.kdb -pw <your password> -trust
gsk7cmd -cert -add -label WSMQM -file /tmp/cert.pem -db key.kdb -pw <your password> -trust yes
gsk7cmd -cert -add -label WSMQM -file /tmp/cert.pem -db key.kdb -pw <your password> -trust true
gsk7cmd
gsk7cmd -cert -add
vi /tmp/ca.pem
gsk7cmd -cert -add -label FRB -file /tmp/ca.pem -db key.kdb -pw <your password> -trust true
gsk7cmd -cert -add -label FRB -file /tmp/ca.pem -db key.kdb -pw <your password>
vi /tmp/ca.pem
gsk7cmd -cert -add -label FRB -file /tmp/ca.pem -db key.kdb -pw <your password>
gsk7cmd -cert -add -label WSMQM -file /tmp/cert.pem -db key.kdb -pw <your password>
gsk7cmd -cert -list -db key.kdb -pw password
gsk7cmd -cert -list -db key.kdb -pw passw0rd
gsk7cmd -cert -details -label ibmwebspheremqQMANAGER1 -db key.kdb -pw passw0rd
endmqm -i QMANAGER1
endmqm -i QMANAGER1
export JAVA_HOME=/opt/mqm/ssl
cd /var/mqm/share/qmgrs/
keytool
keytool -list -keystore key.kdb -storpass passw0rd
keytool -list -keystore key.kdb -storepass passw0rd
keytool -list -keystore key.kdb -storepass <your password> -stoetype CMS
keytool -list -keystore key.kdb -storepass <your password> -storetype CMS
keytool -list -keystore key.kdb -storepass <your password> -storetype JMS
exit
/etc/inittab
mq0:3:respawn:/usr/bin/runmqm > /dev/null 2>&1 #Autostart MQ Multi-Instance Monitor
#mq2:3:respawn:/usr/bin/runmqlsr -t tcp -p 1415 -m QMANAGER1 >/dev/null 2>&1 #Autostart MQ Listener
Below is the NFS Veritas Cluster configuration for MQ multi-instance failover.
group MQnfsSG (
SystemList = { cipgwuat2ap1 = 0, cipgwuat2ap2 = 1 }
AutoStartList = { cipgwuat2ap1 }
)
Application NFSApp (
StartProgram = "/apps/cluster/NFSOnline"
StopProgram = "/apps/cluster/NFSOffline"
MonitorProgram = "/apps/cluster/NFSmonitor"
)
IP MQnfsIP (
Device = ce0
Address = "10.115.199.42"
NetMask = "255.255.255.0"
)
requires group CIPG_DATA online local firm
MQnfsIP requires NFSApp
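Once the group is defined, the usual VCS commands operate it; a sketch (node names from the SystemList above):
hagrp -state MQnfsSG                      # show where the group is online
hagrp -online MQnfsSG -sys cipgwuat2ap1   # bring it online on the first node
hagrp -switch MQnfsSG -to cipgwuat2ap2    # controlled failover to the second node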
Solaris: starting/controlling services
SMF: starting/controlling services
In most Unix systems, startup scripts in /etc/rc3.d and similar directories are used to start and stop services. Solaris 10 uses a different approach, the Service Management Facility (SMF). There are two advantages to the Solaris 10 method:
The system can come up faster, because startup of various systems can be done in parallel.
The system knows more about what is going on. It can monitor processes and restart them.
Services are managed by svcadm. The most common commands are:
svcadm enable SERVICE
svcadm disable SERVICE
Note that enable and disable are persistent. That is, if you enable a service it will be brought back up after a reboot. Similarly with disabling. If you want to stop a service but have it come back up after the next reboot, use "svcadm disable -t SERVICE". That stops it temporarily.
To look at services, two common commands are:
svcs            (lists a summary of all services)
svcs -l SERVICE (details on one service)
Solaris 10 still pays attention to /etc/rcN.d, but services defined there are "legacy", and can't be fully monitored and controlled.
To define a service, you create an XML file that specifies dependencies, and methods to start and stop it. Then you do "svccfg import FOO.xml". Normally the XML file is written to create an instance but not enable it. So if the import works, you would need to do "svcadm enable SERVICE" to start it.
A good way to start writing the XML file is to look at existing ones. They are in subdirectories of /var/svc/manifest. Sun suggests system/utmp.xml as a simple example. Since many of your services may be network services, take a look at what is in network. In network, there are two types of services: some that are standard daemons (e.g. http_apache2) and some that are run by inet (e.g. telnet).
If you add services, you probably want to put your .xml files in /var/svc/manifest and your scripts in /lib/svc/method. That way anyone who needs to work with the system can find them, just as they now know to look in /etc/init.d for all startup scripts. However, I suggest making those symbolic links to files that are actually in /usr/local/svc/manifest and /usr/local/svc/method. That way you won't lose your information in a system reinstallation.
I suggest two pages in Sun's BIGADMIN site: Predictive Self-Healing. This is the best brief introduction to the SMF system.
Configuring JBoss to use with SMF: step-by-step instructions to configure a typical daemon. Note there is one possible error in the XML: one dependency is given a name of "ssh_multi-user-server". I had problems until I changed the name. I suggest SERVICE_multi-user-server, where SERVICE is your service name.
Most of the XML files refer to scripts to do start, stop and restart. The Sun scripts all reside in /lib/svc/method. It's worth looking at some of the examples. There are two standard approaches:
- A script that just starts the service. You then use :kill in the XML file to stop it. This causes all processes started by the start script to be killed.
- A script that looks a lot like a traditional init.d script, which is called as "SCRIPT start" and "SCRIPT stop". Use that if you need to do something beyond just killing the process.
For a quick conversion you can use the example above, and call the init.d script with start and stop. However you may want to change the script slightly: be careful about return values. You should return 0 if the action is taken. (With stop you should return 0 even if the process was already stopped.) Otherwise you'll need to return a value defined by SMF.
When starting and stopping, don't return until the process is running and ready to respond to user requests, or until it's really dead. If you need to wait, make sure you use a timeout.
The second item is critical if other processes depend upon this one, since they'll go on to the dependencies as soon as the start process returns. If there are no dependencies, you can get away with a simple init.d script, as long as it returns 0.
Note that if all processes started by a service die, the system will try to restart the service by doing a stop and then a start. You can also define a "refresh" action, which prods a service if a configuration file changes.
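A sketch of the full cycle for a hypothetical service site/myapp, following the manifest location suggested above:
svccfg validate /var/svc/manifest/site/myapp.xml   # check the manifest first
svccfg import /var/svc/manifest/site/myapp.xml     # creates the instance, disabled
svcadm enable svc:/site/myapp:default              # start it
svcs -l svc:/site/myapp:default                    # confirm state and dependencies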
Thursday, August 23, 2012
FormLogin is configured for web application isclite but SSO is not enabled in the global security settings
FormLoginExte E SECJ0154E: SSO Configuration error. FormLogin is configured for web application isclite but SSO is not enabled in the global security settings. SSO must be enabled to use FormLogin.
Checking security.xml, I found:
<singleSignon xmi:id="SingleSignon_1" requiresSSL="false" domainName="" enabled="false"/>
Change enabled from "false" to "true", then restart the Deployment Manager.
<singleSignon xmi:id="SingleSignon_1" requiresSSL="false" domainName="" enabled="true"/>
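To find the element quickly before editing, grep the Dmgr cell configuration (the profile path below is an assumption; adjust to your install):
grep -n singleSignon /opt/IBM/WebSphere/AppServer/profiles/Dmgr01/config/cells/*/security.xml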
WebSphere Jython Programming
http://pic.dhe.ibm.com/infocenter/wasinfo/v7r0/topic/com.ibm.websphere.express.doc/info/exp/ae/cxml_jython.html
The script below adds destinations to the bus "ServiceBus":
=================================================================
import sys
for dest in ["ActionServiceBusQueue", "AuditServiceBusQueue", "EngineErrorServiceQueue", "DaemonErrorServiceBusQueue", "ActionErrorServiceBusQueue", "AuditErrorServiceBusQueue", "LGBusQueue"]:
    print dest
    AdminTask.createSIBDestination('[-bus ServiceBus -name ' + dest + ' -type Queue -reliability ASSURED_PERSISTENT -description ' + dest + ' -node LTSLUAT2AppNode1 -server LTSLUAT2AppServer1 ]')
    AdminConfig.save()
# End for
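The script runs through wsadmin in Jython mode, e.g. (assuming it is saved as addDestinations.py; the path is an example):
/opt/IBM/WebSphere/AppServer/bin/wsadmin.sh -lang jython -f addDestinations.py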
Jython 2-D Array
=================================================================
from java.lang.reflect import Array
import java

rows = 3
cols = 3
str2d = java.lang.reflect.Array.newInstance(java.lang.String, [rows, cols])
str2d[0][0] = "python"
str2d[1][0] = "jython"
str2d[2][0] = "java"
str2d[0][1] = "syntax "
str2d[1][1] = "strength"
str2d[2][1] = "libraries"
str2d[0][2] = "unclutter"
str2d[1][2] = "combine"
str2d[2][2] = "graphics"
print str2d
print "printing multidimensional array"
for i in range(len(str2d)):
    for j in range(len(str2d[i])):
        print str2d[i][j] + "\t",
    print
print
Source -> Destination
==================================================================
import sys
for dest in ["DaemonQueue DaemonServiceBusQueue", "ActionQueue ActionServiceBusQueue", "AuditQueue AuditServiceBusQueue", "EngineErrorQueue EngineErrorServiceQueue", "DaemonErrorQueue DaemonErrorServiceBusQueue", "AuditErrorQueue AuditErrorServiceBusQueue", "ActionErrorQueue ActionErrorServiceBusQueue", "LG_Queue1 LGBusQueue"]:
    print dest
    entry = dest.split(' ')
    s = entry[0]   # source queue
    d = entry[1]   # destination queue
    print "========"
    print s; print "->"; print d
Check Server Status
====================================================================
Usage: /opt/IBM/WebSphere/AppServer/bin/wsadmin.sh -lang jython -profile serverStatus.py -c "serverStatus()"
serverStatus.py
import re;

def serverStatus() :
    # The backslashes below are significant: \w matches word characters, \( is a
    # literal parenthesis, and \1 is a backreference to the server name.
    pat = re.compile(r'^(\w+)\(cells/(\w+)/nodes/(\w+)/servers/\1.*\)$');
    info = [];
    maxLen = [ 0 ] * 3;
    servers = AdminConfig.list('Server').splitlines();
    #print servers;
    for server in servers :
        #print server;
        oName = AdminConfig.getObjectName(server);
        #print oName;
        if oName != '' :
            status = 'running';
        else :
            status = 'stopped';
        #print status
        mObj = pat.match(server);
        if mObj :
            (sName, cName, nName) = mObj.groups();
            info.append((sName, cName, nName, status));
            for i in range(3) :
                L = len(mObj.group(i + 1));
                if L > maxLen[ i ] : maxLen[ i ] = L;
            print '%(sName)s | %(cName)s | %(nName)s | %(status)s' % locals();
        else:
            print "no Matching."
Tuesday, August 21, 2012
WAS ND 8 - Installation
./installc -acceptLicense -silent -input goweekend_install.xml
./IBMIM -skipInstall /var/tmp/imRegistry -record /var/tmp/aniu.xml
./imcl -input /var/tmp/aniu.xml -log /var/tmp/aniu.log
Linux VM Provisioning
cat ~/.ssh/id_dsa.pub
vi ~/.ssh/authorized_keys
echo pass\&w0rd | passwd root --stdin
ssh-keygen -t rsa -f ~/.ssh/id_rsa -N "" -b 2048
lvresize -L +2G /dev/mapper/rootvg-varlv; resize2fs /dev/mapper/rootvg-varlv
lvresize -L +2G /dev/mapper/rootvg-tmplv; resize2fs /dev/mapper/rootvg-tmplv
fdisk /dev/sdc < /home/aniu01/LinuxProvisioning/newdisk.sh
n
p
1
t
8e
w
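The redirected file supplies fdisk's interactive answers (new primary partition 1, type 8e/LVM, write). The same can be done inline with a here-document; note the two blank lines that accept the default first and last cylinders (a sketch, not the contents of the original newdisk.sh):
fdisk /dev/sdc <<EOF
n
p
1


t
8e
w
EOF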
echo - - - > /sys/class/scsi_host/host0/scan
fdisk -l
fdisk /dev/sdb
pvcreate /dev/sdb1
vgcreate APPVG /dev/sdb1
lvcreate -l 100%FREE -n APPLV APPVG
mkfs.ext3 /dev/APPVG/APPLV
fdisk /dev/sdd
fdisk -l
pvcreate /dev/sdd1
vgcreate MIDWAREVG /dev/sdd1
lvcreate -l 100%FREE -n MIDWARELV MIDWAREVG
mkfs.ext3 /dev/MIDWAREVG/MIDWARELV
mkfs -t ext3 /dev/MIDWAREVG/MIDWARELV
fdisk /dev/sdc
fdisk -l
pvcreate /dev/sdc1
vgcreate APPLOGVG /dev/sdc1
lvcreate -l 100%FREE -n APPLOGLV APPLOGVG
mkfs -t ext3 /dev/APPLOGVG/APPLOGLV
fdisk /dev/sdc
fdisk -l
pvcreate /dev/sdc1
vgcreate DATAVG /dev/sdc1
lvcreate -l 100%FREE -n DATALV DATAVG
mkfs -t ext3 /dev/DATAVG/DATALV
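Since the pvcreate/vgcreate/lvcreate/mkfs sequence repeats for every disk, a small wrapper avoids the copy-paste (a hypothetical helper; names and the ext3 choice mirror the commands above):
mklv() {
    dev=$1; vg=$2; lv=$3
    pvcreate "${dev}1"
    vgcreate "$vg" "${dev}1"
    lvcreate -l 100%FREE -n "$lv" "$vg"
    mkfs -t ext3 "/dev/$vg/$lv"
}
mklv /dev/sdd MIDWAREVG MIDWARELV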
chkconfig portmap on;chkconfig ypbind on;chkconfig autofs on
nisdomainname ts.bo.com; domainname ts.bo.com
echo NISDOMAIN=goweekend.ca >> /etc/sysconfig/network
cd /etc; mv auto.home auto.home.orig
cd LinuxProvisioning/; scp auto.* yp.conf resolv.conf nsswitch.conf root@cmsfbccluacog01:/etc; cd ..
cd LinuxProvisioning/; scp auto.* yp.conf resolv.conf nsswitch.conf root@cmsfbccluaapp02:/etc; cd ..
cd LinuxProvisioning/; scp auto.* yp.conf resolv.conf nsswitch.conf root@cmsfbccluaweb01:/etc; cd ..
cd LinuxProvisioning/; scp auto.* yp.conf resolv.conf nsswitch.conf root@cmsfbccluaweb02:/etc; cd ..
systool -c fc_host -v
lspci
dmidecode -t system
============================================================================================
Oracle requirements
groupadd dba; useradd -d /export/home/oracle -g dba oracle; echo dba4client |passwd oracle --stdin
echo dba4client | passwd oracle --stdin
cd /opt; mkdir oracle oraInventory; chown -R oracle:dba oracle oraInventory
yum install binutils* compat-libstdc++* elfutils-libelf* gcc-* glibc* ksh* libaio* libgcc* libgomp* libstdc* make* sysstat* unixODBC-2.2.11 unixODBC-devel-2.2.11 -y
echo "kernel.shmall = 4294967296
kernel.shmmax = 68719476736
kernel.shmmni = 4096
kernel.sem = 250 32000 100 1024
fs.file-max = 6815744
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default=4194304
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=1048576
" >> /etc/sysctl.conf
sysctl -p
unzip /net/todnfs03/vol/archive/sw_archive/Oracle_Client/linux.x64_11gR2_client.zip
./runInstaller
runInstaller -silent -responseFile /tmp/oracle-admin.rsp
32-bit client
tar -xf /net/todnfs03/vol/archive/sw_archive/Oracle_Client/Oracle_Client_Linux_32bits_11g202.tar
./runInstaller -silent -nowelcome -responsefile /home/aniu01/cmsf/oracle_client.rsp
Oracle base: /opt/oracle
Software Location: /opt/oracle/product/11.2.0.2/client
============================================================================================
sendmail
vi /etc/mail/sendmail.cf
# "Smart" relay host (may be null)
DSmailhost.goweekend.ca
echo "Let me know if you have got this email" |mailx -s "test from `hostname`" arthur.niu@gmail.com
/opt/quest/bin/vastool -u adm-ytan03 join -c "OU=CTD CM & RMG UNIX,OU=UnixServers,OU=Servers,DC=adroot,DC=goweekend,DC=ca" -n `hostname` office.goweekend.ca
Friday, August 17, 2012
Linux Cached Memory
In Linux, reading from a disk is very slow compared to accessing real memory. In addition, it is common to read the same part of a disk several times during relatively short periods of time. For example, one might first read an e-mail message, then read the letter into an editor when replying to it, then make the mail program read it again when copying it to a folder. Or, consider how often the command ls might be run on a system with many users. By reading the information from disk only once and then keeping it in memory until no longer needed, one can speed up all but the first read. This is called disk buffering, and the memory used for the purpose is called the buffer cache.
Unlike Windows and some other operating systems, Linux administers memory in the smartest way it can.
Since unused memory is next to worthless, the filesystem takes whatever memory is left and caches it in order to speed up disk access. When the cache fills up, the data that has been unused for the longest time is discarded and the memory thus freed is used for the new data.
Whenever an application needs memory, the kernel makes the cache smaller; you do not need to do anything to make use of the cache, it happens completely automatically.
Freeing the buffer cache does not make your programs faster; it actually makes disk access slower.
BUT if for some reason (kernel debugging for example) you want to force the buffer to be freed, you need to set the drop_caches value to 1:
$ echo 1 > /proc/sys/vm/drop_caches
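To watch the effect, compare free before and after; sync first so dirty pages are written out (echo 3 also drops dentries and inodes, not just the page cache):
free -m                              # note the "cached" column
sync
echo 3 > /proc/sys/vm/drop_caches
free -m                              # cached should have shrunk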
Thursday, August 16, 2012
Solaris: NIS installation and configuration
(This article has been updated from the original, which focused on Solaris 8 only, to include Solaris 10-specific entries. Where the commands or entries for Solaris 8 and Solaris 10 differ, they are written down in purple for Solaris 8 and green for Solaris 10. In addition, I have added an extra note about changing the NIS Makefile in the event that you're not going to use group passwords.)
This is a step-by-step account of the method I used to install and configure a NIS master and slaves on servers running Solaris 8 (and more recently, Solaris 10). The steps detailed for Solaris 8 should work fine on earlier versions of Solaris, but as I have not explicitly tested other versions (except as clients) you may encounter issues. The clients used with this setup ranged from Solaris 7 to Solaris 10. The installation was in a medium-sized Solaris-only farm (100+ hosts).
Configuring NIS on Solaris is not quite as straightforward as it is on other OSes (such as some Linux distros). This didn't really surprise me, even though NIS is Sun's product. What this does allow is a more tailored end product.
There are three points I'd like to emphasise concerning this article:
- This article is not a definitive how-to; there is more than one way to implement NIS. This way works, it's relatively straightforward, and is more secure than a default NIS installation.
- This article is not an endorsement of NIS over other naming systems. My recommendation to the client was to use LDAP, but NIS had been used before, they were more or less happy with it *, and it did what they wanted it to. Having said that, NIS is still used on many sites, it's versatile, it's easy to set up and maintain, and it can be made more secure without too much extra effort.
- This article describes the setup of NIS only. Administration is another matter altogether.
- master server: system files and NIS maps
- all nis servers and clients: /etc/nsswitch.conf
- master server: passwd, passwd.adjunct and shadow
- set up and start the NIS master server
- set up and start a NIS slave server
- set up and start a NIS client host
- useful links
master server: system files and NIS maps
The default location for NIS maps is under /etc. I've used the existing system files apart from the passwd and shadow maps. These two need to be separated from the master server host system files to prevent root and other system account entries appearing in the NIS passwd map. Some files currently under /etc/security may also need to be copied to /etc. The following files need to be created (use touch), or copied from other locations, if they do not exist. Note that most will exist.
/etc/auto_home | should already exist |
/etc/auto_master | should already exist |
/etc/bootparams | create if required |
/etc/ethers | create if required |
/etc/group | should already exist |
/etc/hosts | should already exist |
/etc/inet/ipnodes | should already exist |
/etc/mail/aliases | should already exist |
/etc/netgroup | create if required |
/etc/netid | create if required |
/etc/netmasks | should already exist |
/etc/networks | should already exist |
/etc/passwd | should already exist; will copy to a different location and edit |
/etc/protocols | should already exist |
/etc/publickey | should already exist |
/etc/rpc | should already exist |
/etc/services | should already exist |
/etc/shadow | should already exist; will copy to a different location and edit |
/etc/timezone | echo "GB yourdomain" > /etc/timezone where GB is your timezone and "yourdomain" is the name of your NIS domain |
/etc/auth_attr | copy from /etc/security/auth_attr if required |
/etc/exec_attr | copy from /etc/security/exec_attr if required |
/etc/prof_attr | copy from /etc/security/prof_attr if required |
/etc/audit_user | copy from /etc/security/audit_user if required |
all nis servers and clients: /etc/nsswitch.conf
Later on, we will be copying /etc/nsswitch.nis to /etc/nsswitch.conf. The existing /etc/nsswitch.nis seems unusual to my eye, and I suggest the following changes. Whether or not you apply these changes depends on how your network is set up.
# cp /etc/nsswitch.nis /etc/nsswitch.nis.orig
Edit /etc/nsswitch.nis. Change
hosts: nis [NOTFOUND=return] files
to read
hosts: files nis dns
and
automount: files nis
to read
automount: nis files
master server: passwd, passwd.adjunct and shadow
First we will create a separate directory (/etc/nis_etc) for the NIS passwd and shadow maps, plus another one (/etc/nis_etc/security) for the passwd.adjunct file.
# mkdir -p /etc/nis_etc/security
# cd /etc
# chmod -R 700 nis_etc
# cp passwd nis_etc/
# cp shadow nis_etc/
# cp passwd nis_etc/security/passwd.adjunct
# cd nis_etc
Now we need to edit /etc/nis_etc/passwd:
- Remove the following entries:
(Solaris 8 in purple, Solaris 10 in green.)
- root
- daemon
- bin
- sys
- adm
- lp
- uucp
- nuucp
- listen
- nobody
- noaccess
- nobody4
- sysadmin
- sshd
- root
- daemon
- bin
- sys
- adm
- lp
- uucp
- nuucp
- smmsp
- listen
- gdm
- webservd
- nobody
- noaccess
- nobody4
- Edit each user entry, removing the password placeholder ("x" in the second field) and replacing it with two hashes and the username. E.g:
bloggf01:x:1001:10:Fred Bloggs:/export/home/bloggf01:/bin/ksh
becomes
bloggf01:##bloggf01:1001:10:Fred Bloggs:/export/home/bloggf01:/bin/ksh
Doing this for a passwd file with 1000 entries can take a little while, so the following strategy may help:
# cd /etc/nis_etc
# mv passwd passwd.orig
# nawk -F ":" '{ printf "%s:##%s:%s:%s:%s:%s:%s\n", $1, $1, $3, $4, $5, $6, $7 }' passwd.orig > passwd
Don't delete passwd.orig just yet!
- The passwd.adjunct file can be generated using a similar awk script:
# cd /etc/nis_etc
# nawk -F ":" '{ printf "%s:%s:::::\n", $1, $2 }' passwd.orig > security/passwd.adjunct
You can delete passwd.orig now if you want.
- Add the following two lines to the top of your NIS passwd map:
AUpwdauthd:##AUpwdauthd:10:10::/var/tmp:/bin/true
AUyppasswdd:##AUyppasswdd:11:10::/var/tmp:/bin/true
- And add the corresponding lines to your NIS passwd.adjunct map:
AUpwdauthd:*:::::
AUyppasswdd:*::::
- Edit the NIS Makefile to reflect the new locations of the NIS passwd and shadow maps:
# cd /var/yp
# cp Makefile Makefile.orig
Edit /var/yp/Makefile so that the PWDIR variable is changed to /etc/nis_etc.
- If you're not using group passwords (and I'm not in this instance), then you could also edit the NIS Makefile to prevent worrying (but harmless) error messages appearing each time you run the make command. Change:
c2secure:
	-@if [ -f $(PWDIR)/security/passwd.adjunct ]; then \
		if [ ! $(NOPUSH) ]; then $(MAKE) $(MFLAGS) -k \
			passwd.adjunct.time group.adjunct.time; \
		else $(MAKE) $(MFLAGS) -k NOPUSH=$(NOPUSH) \
			passwd.adjunct.time group.adjunct.time; \
		fi; \
	fi
to
c2secure:
	-@if [ -f $(PWDIR)/security/passwd.adjunct ]; then \
		if [ ! $(NOPUSH) ]; then $(MAKE) $(MFLAGS) -k \
			passwd.adjunct.time; \
		else $(MAKE) $(MFLAGS) -k NOPUSH=$(NOPUSH) \
			passwd.adjunct.time; \
		fi; \
	fi
Now we need to edit /etc/nis_etc/shadow:
- Remove the same entries as you did with /etc/nis_etc/passwd:
- root
- daemon
- bin
- sys
- adm
- lp
- uucp
- nuucp
- listen
- nobody
- noaccess
- nobody4
- sysadmin
- sshd
- root
- daemon
- bin
- sys
- adm
- lp
- uucp
- nuucp
- smmsp
- listen
- gdm
- webservd
- nobody
- noaccess
- nobody4
- If you want to further tidy up /etc/nis_etc/shadow, you can. NIS only uses the first two fields, which are the user name and the encoded password.
set up and start the NIS master server
Now we need to set the domainname, make sure the correct nsswitch.conf file is in place, and start the NIS master server processes.
# domainname yourdomain
# domainname > /etc/defaultdomain
The first time you start ypinit, it will need to get its naming information from local files:
# cp /etc/nsswitch.files /etc/nsswitch.conf
Add entries for all NIS slave servers to /etc/hosts. Start the NIS master server processes:
(Generic Solaris commands are in black, Solaris 8-specific are in purple, and Solaris 10-specific are in green.)
# /usr/sbin/ypinit -m
# cp /etc/nsswitch.nis /etc/nsswitch.conf
# /usr/lib/netsvc/yp/ypstart
(or)
# svcadm enable nis/server
# svcadm enable nis/client
Check that the NIS server is working:
# ypcat passwd
The output should contain all the entries in /etc/nis_etc/passwd.
set up and start a NIS slave server
On each prospective NIS slave server (and you'll need at least one):
# domainname yourdomain
# domainname > /etc/defaultdomain
Edit /etc/hosts to include entries for the NIS master and any other slaves you plan to add. To get the initial copies of the maps from the newly-created master, each prospective slave needs to be set up as a client:
# /usr/sbin/ypinit -c
You will be asked for a list of NIS servers. Add the hostname of the slave you're working on (i.e. the current host) first, then the master, then the remaining slaves, with those closest on the network first.
Stop ypbind if necessary.
(Generic Solaris commands are in black, Solaris 8-specific are in purple, and Solaris 10-specific are in green.)
# /usr/lib/netsvc/yp/ypstop
(or)
# svcadm disable nis/client
Start it (again), and initialise the new slave:
# /usr/lib/netsvc/yp/ypstart
(or)
# svcadm enable nis/client
# /usr/sbin/ypinit -s nismaster
Start ypserv and then put the correct nsswitch.conf file in place. You ought to edit the original /etc/nsswitch.nis beforehand (for example as described earlier), if you made changes to /etc/nsswitch.nis on the master server.
# /usr/lib/netsvc/yp/ypstop
# /usr/lib/netsvc/yp/ypstart
(or)
# svcadm disable nis/server
# svcadm enable nis/server
# cp /etc/nsswitch.nis /etc/nsswitch.conf
set up and start a NIS client host
Remove a client from an existing NIS domain
# rm /etc/defaultdomain
# cd /var/yp
# pwd
/var/yp
# vi aliases (remove any uncommented entries)
The resulting file should look something like this:
#ident "@(#)aliases 1.2 92/07/14 SMI"
# Aliases file - database of full length and truncated length domain and
# map names. Accessed by YP commands.
# rm -r domainname (if this directory exists)
# rm -r *.time (if any of these files exists)
# cd /var/yp/binding
# pwd
/var/yp/binding
# rm -r *
# cp /etc/nsswitch.files /etc/nsswitch.conf
Reboot the server.
Add a client to the new NIS domain
First make sure that the potential NIS client does not belong to an existing NIS domain. Remove it using the instructions above if required. Edit /etc/hosts to include entries for the NIS master and all slaves.
# domainname yourdomain
# domainname > /etc/defaultdomain
# ypinit -c
Add the following hosts:
- nis_master
- nis_slave
- nis_other_slave
- nis_master-a (other interface, if available)
- nis_slave-a (other interface, if available)
- nis_other_slave-a (other interface, if available)
Start NIS
(Generic Solaris commands are in black, Solaris 8-specific are in purple, and Solaris 10-specific are in green.)
# cp /etc/nsswitch.nis /etc/nsswitch.conf (edit the original /etc/nsswitch.nis beforehand as described earlier)
# /usr/lib/netsvc/yp/ypstart (normally called from /etc/init.d/rpc)
(or)
# svcadm enable nis/client
# ypwhich
should return nis_master. Also the following commands:
# ypcat hosts
# ypcat passwd
should return lots of relevant data.
useful links
Securing NIS by Doug Hughes. This is a very useful resource.
Solaris Network Information Services (NIS) Implementation (Lots of links, FAQs, recommended reading, etc. Some of this is outdated, but there is some interesting stuff here.)
Solaris NIS Minitutorial (Linked in from above. Worth reading if you have little or no exposure to NIS on Solaris.)
docs.sun.com: System Administration Guide: Naming and Directory Services (DNS, NIS, and LDAP)
If you modify entries in the /var/yp/securenets file, you must kill and restart the ypserv and ypxfrd daemons.
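For reference, /var/yp/securenets takes one netmask/network pair per line; a sketch (the subnet is an example, not from this setup):
255.255.255.255  127.0.0.1
255.255.255.0    192.168.10.0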
Thursday, August 9, 2012
Removed the MSI-X driver option from the Broadcom bnx2 interface card
From that I would conclude that this is most likely the cause of the previous issues with this system being intermittently disconnected from the network during periods of high load.
The fix will therefore stay in place, until TEM patch the system (this issue is fixed in RHEL 5.6). I don’t believe TEM have a timescale for patching Linux systems.
Fix:
In /etc/modprobe.conf
# Added option to disable msi-x as suspected it causing network failure.
# should be fixed in RHEL5.6 and above
options bnx2 disable_msi=1
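The option only takes effect the next time the module loads; a hedged verification sequence (reloading bnx2 drops the link, so run it from the console):
modprobe -r bnx2 && modprobe bnx2
cat /sys/module/bnx2/parameters/disable_msi   # should now report 1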
Thursday, August 2, 2012
Autosys: disable auto load in jobscape
On Solaris in file: /usr/openwin/lib/app-defaults/Xpert
! reload job definitions from the database automatically
! (1 = yes, 0 = no)
Xpert.autoReloadJobDefs: 0
Wednesday, August 1, 2012
Autosys Can Only Run 10 Jobs
On Autosys Server:
# inetadm -p
NAME=VALUE
bind_addr=""
bind_fail_max=-1
bind_fail_interval=-1
max_con_rate=-1
max_copies=-1
con_rate_offline=-1
failrate_cnt=40
failrate_interval=60
inherit_env=TRUE
tcp_trace=TRUE
tcp_wrappers=TRUE
connection_backlog=10
# inetadm -l svc:/network/auto_remote/tcp:default
SCOPE NAME=VALUE
name="auto_remote"
endpoint_type="stream"
proto="tcp"
isrpc=FALSE
wait=FALSE
exec="/opt/CA/autosys/bin/auto_remote"
user="root"
default bind_addr=""
default bind_fail_max=-1
default bind_fail_interval=-1
default max_con_rate=-1
default max_copies=-1
default con_rate_offline=-1
default failrate_cnt=40
default failrate_interval=60
default inherit_env=TRUE
default tcp_trace=TRUE
default tcp_wrappers=TRUE
default connection_backlog=10
Solution:
inetadm -M connection_backlog=30
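Note that inetadm -M changes the default for every inetd-managed service; to scope the change to the Autosys service alone, set the property on that instance instead (a sketch using the service listed above):
inetadm -m svc:/network/auto_remote/tcp:default connection_backlog=30
inetadm -l svc:/network/auto_remote/tcp:default | grep connection_backlog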