Thursday, August 30, 2012

Linux: Kill Orphaned Socket FIN_WAIT1

The tcp_orphan_retries setting controls how many retransmission attempts are made before an orphaned (connection-less) socket is released. By default it is 0, which makes the kernel fall back to its internal default. To kill orphaned sockets quickly, set tcp_orphan_retries to 1:

# cd /proc/sys/net/ipv4
# echo 1 > tcp_orphan_retries

# netstat -tanp | grep FIN_WAIT

Once all the orphaned sockets are gone, restore tcp_orphan_retries:

# echo 0 > tcp_orphan_retries
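A quick way to watch progress is to count the sockets still in a FIN_WAIT state. A minimal sketch (the helper name is mine; it expects `ss -tan` or netstat output on stdin, and only needs a state column):

```shell
# Count sockets stuck in a FIN-WAIT state; feed it `ss -tan` (or netstat) output.
count_fin_wait() {
  grep -c 'FIN[-_]WAIT'
}

# Example against canned output (ss prints FIN-WAIT-1, netstat prints FIN_WAIT1):
printf 'FIN-WAIT-1 0 0 10.0.0.1:80 10.0.0.2:5432\nESTAB 0 0 10.0.0.1:22 10.0.0.3:40000\n' | count_fin_wait
```

Live, this would be `ss -tan | count_fin_wait`; rerun it until the count drops to zero before restoring the sysctl.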

Wednesday, August 29, 2012

MQSeries 7: Journal

http://publib.boulder.ibm.com/infocenter/ieduasst/v1r1m0/index.jsp?topic=/com.ibm.iea.wmq_v7/wmq/7.0.1/Details/iea_701_120_multi_instancei/player.html
http://publib.boulder.ibm.com/infocenter/wmqv7/v7r0/index.jsp?topic=%2Fcom.ibm.mq.amqzag.doc%2Ffa70162_.htm
http://publib.boulder.ibm.com/infocenter/wmqv7/v7r0/topic/com.ibm.mq.amqzag.doc/fa70154_.htm
/var/mqm/mqs.ini
AllQueueManagers:
   #********************************************************************#
   #* The path to the qmgrs directory, below which queue manager data  *#
   #* is stored                                                        *#
   #********************************************************************#
   DefaultPrefix=/var/mqm
LogDefaults:
   LogDefaultPath=/var/mqm/log
QueueManager:
   Name=QMANAGER1
   Prefix=/var/mqm
   Directory=QMANAGER1
   DataPath=/var/mqm/share/qmgrs/QMANAGER1
$ pwd
/var/mqm/share/qmgrs/QMANAGER1
$ cat qm.ini
#*******************************************************************#
#* Module Name: qm.ini                                             *#
#* Type       : WebSphere MQ queue manager configuration file      *#
#  Function   : Define the configuration of a single queue manager *#
#*                                                                 *#
#*******************************************************************#
#* Notes      :                                                    *#
#* 1) This file defines the configuration of the queue manager     *#
#*                                                                 *#
#*******************************************************************#
ExitPath:
   ExitsDefaultPath=/var/mqm/exits/
   ExitsDefaultPath64=/var/mqm/exits64/
#*                                                                 *#
#*                                                                 *#
Log:
   LogPrimaryFiles=31
   LogSecondaryFiles=21
   LogFilePages=32000
   LogType=LINEAR
   LogBufferPages=0
   LogPath=/var/mqm/share/log/QMANAGER1/
   LogWriteIntegrity=TripleWrite
Service:
   Name=AuthorizationService
   EntryPoints=13
ServiceComponent:
   Service=AuthorizationService
   Name=MQSeries.UNIX.auth.service
   Module=/opt/mqm/lib64/amqzfu
   ComponentDataSize=0
CHANNELS:
  MaxChannels = 1000
TuningParameters:   
     FileLockHeartBeatLen=30
$
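From the Log stanza above you can estimate the disk space LogPath needs: MQ log pages are 4 KB, so each log file is LogFilePages x 4 KB, and the primary + secondary file count gives a floor. With LogType=LINEAR, extents keep accumulating until archived or pruned, so treat this strictly as a minimum. A quick sketch:

```shell
# Minimum log-space estimate for the qm.ini Log stanza above.
pages=32000      # LogFilePages
primary=31       # LogPrimaryFiles
secondary=21     # LogSecondaryFiles

file_mb=$(( pages * 4096 / 1024 / 1024 ))         # size of one log file, in MB
total_mb=$(( file_mb * (primary + secondary) ))   # floor for the log filesystem
echo "${file_mb} MB per log file, >= ${total_mb} MB for ${primary}+${secondary} files"
```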

Logs
$ pwd
/var/mqm/share/qmgrs/QMANAGER1/errors
$ ls
AMQERR01.LOG  AMQERR02.LOG  AMQERR03.LOG


$ pwd
/var/mqm/share/qmgrs/QMANAGER1
$ ls -l
total 38
drwxrwsr-x+  2 mqm      mqm           96 Sep  9  2010 @ipcc
-rw-rw-rw-   1 mqm      mqm           34 Aug 28 10:02 active
-rw-r-----   1 mqm      mqm           56 Aug 28 10:02 amqalchk.fil
drwxrws---+  2 mqm      mqm           96 Sep  9  2010 authinfo
drwxrws---+  2 mqm      mqm         1024 Jul 30 12:48 channel
drwxrws---+  2 mqm      mqm         1024 Jul 30 13:36 clntconn
drwxrws---+  2 mqm      mqm           96 Aug 28 09:57 errors
drwxrws---+  2 mqm      mqm           96 Feb  6  2012 listener
-rw-rw-rw-   1 mqm      mqm           34 Aug 28 10:02 master
drwxrws---+  2 mqm      mqm         1024 Sep  9  2010 namelist
drwxrwsr-x+  2 mqm      mqm           96 Sep  9  2010 plugcomp
drwxrws---+  2 mqm      mqm           96 Sep  9  2010 procdef
-rw-r-----   1 mqm      mqm         1390 Aug 28 09:46 qm.ini
-rw-r-----   1 mqm      mqm         1340 Aug 28 09:41 qm.ini.20120828
drwxrws---+  2 mqm      mqm           96 Sep  9  2010 qmanager
-rw-r-----   1 mqm      mqm          693 Aug 28 10:02 qmstatus.ini
drwxrws---+ 93 mqm      mqm         4096 Aug 14 16:26 queues
drwxrwx---+  2 mqm      mqm         1024 Jul 31 08:39 scratch
drwxrws---+  2 mqm      mqm           96 Sep  9  2010 services
drwxrwsr-x+  2 mqm      mqm         1024 Jul 18 15:46 ssl
-rw-rw-rw-   1 mqm      mqm           34 Aug 28 10:03 standby
drwxrwsr-x+  2 mqm      mqm           96 Aug 28 10:03 startprm
drwxrws---+  2 mqm      mqm         1024 Sep  9  2010 topic
$ cat qmstatus.ini
AuthorityData:
   Creator=mqm
QueueManagerStatus:
   CurrentStatus=Running
   PermitStandby=Yes
   PermitFailover=Yes
   PlatformSignature=8195
   PlatformString=SunOS 5.10
ManagedSets:
   QMANAGER1/@ipcc.IPCCPSet=41097
   QMANAGER1/@ipcc.IPCCP64S=914176
   QMANAGER1/@qmgr.ZDMPipe=7677
   QMANAGER1/@qmgr.OAMshmem=33134
   QMANAGER1/@qmgr.OAMPipe=11704
   QMANAGER1/@qmgr.KernelSet=5591856
   QMANAGER1/@qmgr.TopicSet=663808
   QMANAGER1/@qmgr.SelectorSet=560
   QMANAGER1/@qmgr.ObjectCatalogue=1979280
   QMANAGER1/@qmgr.QueueSessionSegment=3526888
   QMANAGER1/@qmgr.TransactionSessionSegment=194004
GhostPools:
   SYSTEM.DEFAULT.MODEL.QUEUE=3
   SYSTEM.MQEXPLORER.REPLY.MODEL=3

$ dspmq -x -m QMANAGER1
QMNAME(QMANAGER1)                                        STATUS(Running)
    INSTANCE(cipgwuatmq01) MODE(Active)
    INSTANCE(cipgwuatmq02) MODE(Standby)
$ endmqm


root@artmqserver1:/root/MQ/bin:# cat restart-QMGR.sh
#!/bin/sh
if [ $# -ne 2 ]; then
    echo "Usage: $0 <QMGR name> <port>"
    exit 1
fi
QMGR=$1
PORT=$2
endmqm -p $QMGR
endmqlsr -m $QMGR
strmqm $QMGR
nohup runmqlsr -t tcp -p ${PORT} -m ${QMGR} >/dev/null 2>&1 &
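One weakness of the script above is that strmqm can race a queue manager that is still stopping. A small polling helper (my addition; names and usage are assumptions) that could gate each step:

```shell
# Retry a command up to N times, one second apart; return 0 on the first success.
wait_until() {
  tries=$1; shift
  i=0
  while [ "$i" -lt "$tries" ]; do
    if "$@" >/dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# e.g. after strmqm: wait_until 30 sh -c "dspmq -m $QMGR | grep -q Running"
wait_until 3 true && echo "ready"
```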

amqiclen -x -m QMGR

runmqsc QMGR
 --> display channel(*)
end to quit runmqsc
crtmqm -lc -lp 5 -ls 3 -u DEAD.LETTER.QUEUE -ld /var/mqm/share/log -md /var/mqm/share/qmgrs WRAP.DEV.QMGR01
addmqinf -s QueueManager -v Name=WRAP.DEV.QMGR01 -v Directory=WRAP\!DEV\!QMGR01 -v Prefix=/var/mqm -v DataPath=/var/mqm/share/qmgrs/WRAP\!DEV\!QMGR01
strmqm -x WRAP.DEV.QMGR01

cat /usr/bin/runmqm
cat /var/run/.run_mq

gsk7cmd -cert -list -db key.kdb -pw passw0rd
gsk7cmd -cert -add -label WSMQM -file /tmp/cert.pem -db key.kdb -pw <your password> -trust
gsk7cmd -cert -add -label WSMQM -file /tmp/cert.pem -db key.kdb -pw <your password> -trust yes
gsk7cmd -cert -add -label WSMQM -file /tmp/cert.pem -db key.kdb -pw <your password> -trust true
gsk7cmd
gsk7cmd -cert -add
vi /tmp/ca.pem
gsk7cmd -cert -add -label FRB -file /tmp/ca.pem -db key.kdb -pw <your password> -trust true
gsk7cmd -cert -add -label FRB -file /tmp/ca.pem -db key.kdb -pw <your password>
vi /tmp/ca.pem
gsk7cmd -cert -add -label FRB -file /tmp/ca.pem -db key.kdb -pw <your password>
gsk7cmd -cert -add -label WSMQM -file /tmp/cert.pem -db key.kdb -pw <your password>
gsk7cmd -cert -list -db key.kdb -pw password
gsk7cmd -cert -list -db key.kdb -pw passw0rd
gsk7cmd -cert -details -label ibmwebspheremqQMANAGER1 -db key.kdb -pw passw0rd
endmqm -i QMANAGER1
endmqm -i QMANAGER1
export JAVA_HOME=/opt/mqm/ssl
cd /var/mqm/share/qmgrs/
keytool
keytool -list -keystore key.kdb -storpass passw0rd
keytool -list -keystore key.kdb -storepass passw0rd
keytool -list -keystore key.kdb -storepass <your password> -storetype CMS
keytool -list -keystore key.kdb -storepass <your password> -storetype JMS
exit

/etc/inittab
mq0:3:respawn:/usr/bin/runmqm > /dev/null 2>&1 #Autostart MQ Multi-Instance Monitor
#mq2:3:respawn:/usr/bin/runmqlsr -t tcp -p 1415 -m QMANAGER1 >/dev/null 2>&1  #Autostart MQ Listener


Below is the Veritas Cluster (VCS) configuration for NFS, used for MQ multi-instance failover.
group MQnfsSG (
        SystemList = { cipgwuat2ap1 = 0, cipgwuat2ap2 = 1 }
        AutoStartList = { cipgwuat2ap1 }
        )
        Application NFSApp (
                StartProgram = "/apps/cluster/NFSOnline"
                StopProgram = "/apps/cluster/NFSOffline"
                MonitorProgram = "/apps/cluster/NFSmonitor"
                )
        IP MQnfsIP (
                Device = ce0
                Address = "10.115.199.42"
                NetMask = "255.255.255.0"
                )
        requires group CIPG_DATA online local firm
        MQnfsIP requires NFSApp

Solaris: starting/controlling services

SMF: starting/controlling services. In most Unix systems, startup scripts in /etc/rc3.d, etc., are used to start and stop services. Solaris 10 uses a different approach, which has two advantages: the system can come up faster, because startup of the various services can be done in parallel, and the system knows more about what is going on, so it can monitor processes and restart them.

Services are managed by svcadm. The most common commands are:
svcadm enable SERVICE
svcadm disable SERVICE
Note that enable and disable are persistent: if you enable a service it will be brought back up after a reboot, and similarly with disabling. If you want to stop a service but have it come back up after the next reboot, use "svcadm disable -t SERVICE", which stops it temporarily.
To look at services, two common commands are:
svcs              (lists a summary of all services)
svcs -l SERVICE   (details on one service)
Solaris 10 still pays attention to /etc/rcN.d, but services defined there are "legacy" and can't be fully monitored and controlled.

To define a service, you create an XML file that specifies its dependencies and its start and stop methods, then run "svccfg import FOO.xml". Normally the XML file creates the instance but does not enable it, so after a successful import you still need "svcadm enable SERVICE" to start it. A good way to start writing the manifest is to look at existing ones in the subdirectories of /var/svc/manifest; Sun suggests system/utmp.xml as a simple example. Since many of your services may be network services, look at what is in network: there are two types there, standard daemons (e.g. http_apache2) and services run by inetd (e.g. telnet).

If you add services, you probably want to put your .xml files in /var/svc/manifest and your method scripts in /lib/svc/method, so anyone working on the system can find them, just as they now know to look in /etc/init.d for startup scripts. However, I suggest making those symbolic links to files that actually live in /usr/local/svc/manifest and /usr/local/svc/method, so you won't lose your information in a system reinstallation.

I suggest two pages on Sun's BigAdmin site: "Predictive Self-Healing", the best brief introduction to the SMF system, and "Configuring JBoss to use with SMF", step-by-step instructions to configure a typical daemon. Note there is one possible error in the latter's XML: one dependency is given the name "ssh_multi-user-server". I had problems until I changed the name; I suggest SERVICE_multi-user-server, where SERVICE is your service name.
Most of the XML files refer to scripts that do start, stop and restart. The Sun scripts all reside in /lib/svc/method, and it's worth looking at some of the examples. There are two standard approaches:
1) A script that just starts the service. You then use :kill in the XML file to stop it, which kills all processes started by the start script.
2) A script that looks a lot like a traditional init.d script, called as "SCRIPT start" and "SCRIPT stop". Use that if you need to do something beyond just killing the process.

For a quick conversion you can use the example above and call the init.d script with start and stop. However, you may want to change the script slightly:
- Be careful about return values. You should return 0 if the action is taken (with stop, return 0 even if the process was already stopped); otherwise you'll need to return a value defined by SMF.
- When starting and stopping, don't return until the process is running and ready to respond to user requests, or until it's really dead. If you need to wait, make sure you use a timeout.

The second item is critical if other processes depend upon this one, since they'll go on to the dependent services as soon as the start method returns. If there are no dependencies, you can get away with a simple init.d script, as long as it returns 0.
Note that if all processes started by a service die, the system will try to restart the service by doing a stop and then a start. You can also define a "refresh" action, which prods a service if a configuration file changes.
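The init.d-style method script described above can be sketched like this (the service and daemon names are hypothetical; on Solaris you would also source /lib/svc/share/smf_include.sh to pick up the SMF exit codes):

```shell
# Hypothetical SMF method in the init.d style ("mydaemon" is an assumed name).
method() {
  case "$1" in
  start)
      echo "starting mydaemon"      # real script: launch the daemon, wait until ready
      ;;
  stop)
      echo "stopping mydaemon"
      pkill -x mydaemon 2>/dev/null
      return 0                      # return 0 even if it was already stopped
      ;;
  *)
      echo "Usage: method {start|stop}" >&2
      return 1
      ;;
  esac
}

method start
```

The explicit `return 0` in the stop branch is the point made above: SMF treats a nonzero exit as a failed stop, even when the process was simply not running.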

Thursday, August 23, 2012

FormLogin is configured for web application isclite but SSO is not enabled in the global security settings


FormLoginExte E   SECJ0154E: SSO Configuration error. FormLogin is configured for web application isclite but SSO is not enabled in the global security settings.  SSO must be enabled to use FormLogin.
Check security.xml; it contained:

<singleSignon xmi:id="SingleSignon_1" requiresSSL="false" domainName="" enabled="false"/>

Change enabled from "false" to "true", then restart the Deployment Manager.
<singleSignon xmi:id="SingleSignon_1" requiresSSL="false" domainName="" enabled="true"/>

WebSphere Jython Programing

http://pic.dhe.ibm.com/infocenter/wasinfo/v7r0/topic/com.ibm.websphere.express.doc/info/exp/ae/cxml_jython.html


The script below adds destinations to the bus "ServiceBus":

=================================================================
import sys
for dest in ["ActionServiceBusQueue", "AuditServiceBusQueue", "EngineErrorServiceQueue", "DaemonErrorServiceBusQueue", "ActionErrorServiceBusQueue", "AuditErrorServiceBusQueue", "LGBusQueue"]:
    print dest
    AdminTask.createSIBDestination('[-bus ServiceBus -name ' + dest + ' -type Queue -reliability ASSURED_PERSISTENT -description ' + dest + ' -node LTSLUAT2AppNode1 -server LTSLUAT2AppServer1 ]')
    AdminConfig.save()
# End for

Jython 2-D Array
=================================================================
from java.lang.reflect import Array
import java

rows = 3
cols = 3
str2d = java.lang.reflect.Array.newInstance(java.lang.String, [rows, cols])
str2d[0][0] = "python"
str2d[1][0] = "jython"
str2d[2][0] = "java"
str2d[0][1] = "syntax "
str2d[1][1] = "strength"
str2d[2][1] = "libraries"
str2d[0][2] = "unclutter"
str2d[1][2] = "combine"
str2d[2][2] = "graphics"
print str2d
print "printing multidimensional array"
for i in range(len(str2d)):
    for j in range(len(str2d[i])):
        print str2d[i][j] + "\t",
    print
print

Source -> Destination
==================================================================

import sys
for dest in ["DaemonQueue DaemonServiceBusQueue", "ActionQueue ActionServiceBusQueue", "AuditQueue AuditServiceBusQueue", "EngineErrorQueue EngineErrorServiceQueue", "DaemonErrorQueue DaemonErrorServiceBusQueue", "AuditErrorQueue AuditErrorServiceBusQueue", "ActionErrorQueue ActionErrorServiceBusQueue", "LG_Queue1 LGBusQueue"]:
    print dest
    entry=dest.split(' ')
    s=entry[0]
    d=entry[1]
    print "========"
    print s; print "->"; print d


Check Server Status
====================================================================
Usage: /opt/IBM/WebSphere/AppServer/bin/wsadmin.sh -lang jython -profile serverStatus.py -c "serverStatus()"


serverStatus.py

import re;
def serverStatus() :
    pat = re.compile(r'^(\w+)\(cells/(\w+)/nodes/(\w+)/servers/\1.*\)$');
    info   = [];
    maxLen = [ 0 ] * 3;
    servers = AdminConfig.list('Server').splitlines();
    #print servers;
    for server in servers :
       #print server;
       oName = AdminConfig.getObjectName(server);
       #print oName;
       if oName != '' :
         status = 'running';
       else :
         status = 'stopped';
       #print status
       mObj = pat.match(server);
       if mObj :
           (sName, cName, nName) = mObj.groups();
           info.append((sName, cName, nName, status));
           for i in range(3) :
               L = len(mObj.group(i + 1));
               if L > maxLen[ i ] : maxLen[ i ] = L;
           print '%(sName)s | %(cName)s | %(nName)s | %(status)s' % locals();
       else:
           print "no Matching."

Tuesday, August 21, 2012

WAS ND 8 - Installation

./installc -acceptLicense  -silent -input goweekend_install.xml
./IBMIM -skipInstall /var/tmp/imRegistry -record /var/tmp/aniu.xml
./imcl -input /var/tmp/aniu.xml -log /var/tmp/aniu.log

Linux VM Provisioning


cat ~/.ssh/id_dsa.pub
vi ~/.ssh/authorized_keys
echo pass\&w0rd | passwd root --stdin
ssh-keygen -t rsa -f ~/.ssh/id_rsa -N "" -b 2048

lvresize -L +2G /dev/mapper/rootvg-varlv; resize2fs /dev/mapper/rootvg-varlv
lvresize -L +2G /dev/mapper/rootvg-tmplv; resize2fs /dev/mapper/rootvg-tmplv

Partition the new disk non-interactively by piping the answers to fdisk (newdisk.sh contains the responses shown below):

fdisk /dev/sdc < /home/aniu01/LinuxProvisioning/newdisk.sh

n
p
1
t
8e
w

Rescan the SCSI bus to pick up newly presented disks:
echo "- - -" > /sys/class/scsi_host/host0/scan

fdisk -l
fdisk /dev/sdb
pvcreate /dev/sdb1
vgcreate APPVG /dev/sdb1
lvcreate -l 100%FREE -n APPLV APPVG
mkfs.ext3 /dev/APPVG/APPLV
fdisk /dev/sdd
fdisk -l
pvcreate /dev/sdd1
vgcreate MIDWAREVG /dev/sdd1
lvcreate -l 100%FREE -n  MIDWARELV  MIDWAREVG
mkfs.ext3 /dev/MIDWAREVG/MIDWARELV
mkfs -t ext3 /dev/MIDWAREVG/MIDWARELV
fdisk /dev/sdc
fdisk -l
pvcreate /dev/sdc1
vgcreate APPLOGVG /dev/sdc1
lvcreate -l 100%FREE -n  APPLOGLV APPLOGVG
mkfs -t ext3 /dev/APPLOGVG/APPLOGLV
fdisk /dev/sdc
fdisk -l
pvcreate /dev/sdc1
vgcreate DATAVG /dev/sdc1
lvcreate -l 100%FREE -n  DATALV DATAVG
mkfs -t ext3 /dev/DATAVG/DATALV
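The same four-step sequence (pvcreate, vgcreate, lvcreate, mkfs) repeats for each disk above, so a small wrapper (my addition; the function name and DRYRUN switch are assumptions, not from the notes) reduces it to one call per disk. DRYRUN=1 just prints the commands, which lets you sanity-check the plan without real disks:

```shell
# Wrap the repeated partition -> VG -> LV -> filesystem sequence.
# Usage: make_lv <partition> <vg> <lv>     e.g. make_lv /dev/sdb1 APPVG APPLV
make_lv() {
  run() { if [ "${DRYRUN:-0}" = "1" ]; then echo "$@"; else "$@"; fi; }
  run pvcreate "$1" || return 1
  run vgcreate "$2" "$1" || return 1
  run lvcreate -l 100%FREE -n "$3" "$2" || return 1
  run mkfs -t ext3 "/dev/$2/$3"
}

DRYRUN=1 make_lv /dev/sdb1 APPVG APPLV
```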

chkconfig portmap on;chkconfig ypbind on;chkconfig autofs on
nisdomainname ts.bo.com; domainname ts.bo.com
echo NISDOMAIN=goweekend.ca >> /etc/sysconfig/network
cd /etc; mv auto.home auto.home.orig
cd LinuxProvisioning/; scp auto.* yp.conf resolv.conf nsswitch.conf root@cmsfbccluacog01:/etc; cd ..
cd LinuxProvisioning/; scp auto.* yp.conf resolv.conf nsswitch.conf root@cmsfbccluaapp02:/etc; cd ..
cd LinuxProvisioning/; scp auto.* yp.conf resolv.conf nsswitch.conf root@cmsfbccluaweb01:/etc; cd ..
cd LinuxProvisioning/; scp auto.* yp.conf resolv.conf nsswitch.conf root@cmsfbccluaweb02:/etc; cd ..

  systool -c fc_host -v
lspci
dmidecode -t system

============================================================================================
Oracle requirements
groupadd dba; useradd -d /export/home/oracle -g dba oracle; echo dba4client |passwd oracle --stdin
cd /opt; mkdir oracle oraInventory; chown -R oracle:dba oracle oraInventory

yum install binutils* compat-libstdc++* elfutils-libelf* gcc-* glibc* ksh* libaio* libgcc* libgomp* libstdc* make* sysstat* unixODBC-2.2.11 unixODBC-devel-2.2.11 -y


echo "kernel.shmall = 4294967296
kernel.shmmax = 68719476736
kernel.shmmni = 4096
kernel.sem = 250 32000 100 1024
fs.file-max = 6815744
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default=4194304
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=1048576
" >> /etc/sysctl.conf
sysctl -p
unzip /net/todnfs03/vol/archive/sw_archive/Oracle_Client/linux.x64_11gR2_client.zip
./runInstaller
runInstaller -silent -responseFile /tmp/oracle-admin.rsp
32 bits client
tar -xf /net/todnfs03/vol/archive/sw_archive/Oracle_Client/Oracle_Client_Linux_32bits_11g202.tar
./runInstaller -silent -nowelcome -responsefile /home/aniu01/cmsf/oracle_client.rsp
Oracle base: /opt/oracle
Software Location: /opt/oracle/product/11.2.0.2/client
============================================================================================
sendmail
vi /etc/mail/sendmail.cf
# "Smart" relay host (may be null)
DSmailhost.goweekend.ca

echo "Let me know if you have got this email" |mailx -s "test from `hostname`" arthur.niu@gmail.com

/opt/quest/bin/vastool -u adm-ytan03 join -c "OU=CTD CM & RMG UNIX,OU=UnixServers,OU=Servers,DC=adroot,DC=goweekend,DC=ca" -n `hostname`  office.goweekend.ca

Friday, August 17, 2012

Linux Cached Memory

In Linux, reading from a disk is very slow compared to accessing real memory. In addition, it is common to read the same part of a disk several times during relatively short periods of time. For example, one might first read an e-mail message, then read the letter into an editor when replying to it, then make the mail program read it again when copying it to a folder. Or, consider how often the command ls might be run on a system with many users. By reading the information from disk only once and then keeping it in memory until no longer needed, one can speed up all but the first read. This is called disk buffering, and the memory used for the purpose is called the buffer cache.
Unlike Windows and some other operating systems, Linux administers memory the smartest way it can. Since unused memory is next to worthless, the kernel takes whatever memory is left over and uses it to cache disk data in order to speed up access. When the cache fills up, the data that has been unused the longest is discarded, and the memory thus freed is used for the new data.
Whenever an application needs memory, the kernel makes the cache smaller; you do not need to do anything to make use of the cache, it happens completely automatically.
Freeing the buffer cache does not make your programs faster; it actually makes disk access slower.
BUT if for some reason (kernel debugging, for example) you want to force the cache to be freed, write to drop_caches as root (1 frees the page cache, 2 frees dentries and inodes, 3 frees both):
# sync; echo 1 > /proc/sys/vm/drop_caches
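To see how much memory the cache is currently using, look at the Cached line of /proc/meminfo. A small sketch (the helper name is mine) that pulls the figure out of meminfo-format text:

```shell
# Print the Cached value (in kB) from /proc/meminfo-format input on stdin.
cached_kb() {
  awk '/^Cached:/ {print $2}'
}

# Live usage would be: cached_kb < /proc/meminfo
printf 'MemTotal:  2048000 kB\nMemFree:   123456 kB\nCached:    654321 kB\n' | cached_kb
```

Running it before and after writing to drop_caches makes the effect of the flush directly visible.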

Thursday, August 16, 2012

Solaris: NIS installation and configuration


Thursday, August 9, 2012

Removed the MSI-X driver option from the Broadcom bnx2 interface card


From that I would conclude that this is most likely the cause of the previous issues with this system being intermittently disconnected from the network during periods of high load.

The fix will therefore stay in place, until TEM patch the system (this issue is fixed in RHEL 5.6). I don’t believe TEM have a timescale for patching Linux systems.

Fix:
In /etc/modprobe.conf
# Added option to disable msi-x as suspected it causing network failure.
# should be fixed in RHEL5.6 and above
options bnx2 disable_msi=1

Thursday, August 2, 2012

Autosys: disable auto load in jobscape


On Solaris in file: /usr/openwin/lib/app-defaults/Xpert

! reload job definitions from the database automatically
! (1 = yes, 0 = no)
Xpert.autoReloadJobDefs: 0

Wednesday, August 1, 2012

Autosys Can Only Run 10 Jobs

On Autosys Server:
# inetadm -p
NAME=VALUE
bind_addr=""
bind_fail_max=-1
bind_fail_interval=-1
max_con_rate=-1
max_copies=-1
con_rate_offline=-1
failrate_cnt=40
failrate_interval=60
inherit_env=TRUE
tcp_trace=TRUE
tcp_wrappers=TRUE
connection_backlog=10
# inetadm -l svc:/network/auto_remote/tcp:default
SCOPE    NAME=VALUE
         name="auto_remote"
         endpoint_type="stream"
         proto="tcp"
         isrpc=FALSE
         wait=FALSE
         exec="/opt/CA/autosys/bin/auto_remote"
         user="root"
default  bind_addr=""
default  bind_fail_max=-1
default  bind_fail_interval=-1
default  max_con_rate=-1
default  max_copies=-1
default  con_rate_offline=-1
default  failrate_cnt=40
default  failrate_interval=60
default  inherit_env=TRUE
default  tcp_trace=TRUE
default  tcp_wrappers=TRUE
default  connection_backlog=10

Solution: raise the backlog (the default of 10 caps the number of pending auto_remote connections):
inetadm -M connection_backlog=30
Note that -M changes the default for all inetd-managed services; to change only auto_remote, modify the instance instead:
inetadm -m svc:/network/auto_remote/tcp:default connection_backlog=30