Installing Oracle 11.2.0.1 RAC and 11.2.0.4 RAC on AIX 7.1

Contact: QQ (5163721)

Author: Lunar © All rights reserved. [This article may be reposted, but the source URL must be credited with a link; otherwise legal action may be taken.]


lunardb2/#oslevel -r

7100-02
lunardb2/#

/usr/sbin/lsattr -E -l sys0 -a realmem
/usr/sbin/lsps -a

/usr/bin/df -g

/usr/bin/df -g /tmp

bootinfo -K

lsattr -El rhdiskpower0 -a size_mb
lsattr -El hdiskpower0

/usr/sbin/no -a | fgrep ephemeral
# /usr/sbin/no -p -o tcp_ephemeral_low=9000 -o tcp_ephemeral_high=65500
# /usr/sbin/no -p -o udp_ephemeral_low=9000 -o udp_ephemeral_high=65500
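The two `no` commands above differ only in the protocol prefix; a small helper can emit both. This is a sketch (`emit_ephemeral_cmds` is a hypothetical name) and a dry run: it prints the commands rather than executing them.

```shell
# Print the AIX `no -p` commands for a desired ephemeral port range,
# covering both tcp and udp in one loop (dry run: nothing is changed).
emit_ephemeral_cmds() {
  low=$1; high=$2
  for proto in tcp udp; do
    printf '/usr/sbin/no -p -o %s_ephemeral_low=%s -o %s_ephemeral_high=%s\n' \
      "$proto" "$low" "$proto" "$high"
  done
}

emit_ephemeral_cmds 9000 65500
```

Once the output looks right, pipe it through `sh` as root on the AIX host to apply it.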

lunardb2/#/usr/sbin/no -a | fgrep ephemeral
tcp_ephemeral_high = 65500
tcp_ephemeral_low = 9000
udp_ephemeral_high = 65500
udp_ephemeral_low = 9000
lunardb2/#

ioo -o aio_maxreqs

ps -ek|grep -v grep|grep -v posix_aioserver|grep -c aioserver
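The pipeline above counts running legacy `aioserver` kernel processes while excluding the POSIX variant and the grep itself. Here the same filter chain is exercised against canned `ps` output, since `ps -ek` only exists on AIX:

```shell
# Sample ps output: two legacy aioservers, one posix_aioserver, one grep.
sample_ps='  512 - 0:00 aioserver
  513 - 0:00 aioserver
  600 - 0:00 posix_aioserver
  700 - 0:00 grep aioserver'

# Same filter chain as on AIX: drop the grep line and the POSIX servers,
# then count what is left.
printf '%s\n' "$sample_ps" | grep -v grep | grep -v posix_aioserver | grep -c aioserver
# prints 2
```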

From AIX 6.1 onward, the following values appear to be the defaults and already match the Oracle install guide, so no changes should be needed:
vmo -p -o minperm%=3
vmo -p -o maxperm%=90
vmo -p -o maxclient%=90
vmo -p -o lru_file_repage=0
vmo -p -o strict_maxclient=1
vmo -p -o strict_maxperm=0
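Rather than eyeballing the full `vmo -a` listing below, each expected value can be checked mechanically. `vmo_ok` is a hypothetical helper that greps a captured listing for an exact `name = value` pair:

```shell
# Succeed only if the listing on stdin contains "name = value" exactly.
vmo_ok() { grep -q "^ *$1 = $2\$"; }

# A fragment of a captured `vmo -a` listing:
listing='minperm% = 3
maxclient% = 90
lru_file_repage = 0'

printf '%s\n' "$listing" | vmo_ok 'minperm%' 3 && echo "minperm% ok"
```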
Verify the vmo settings:
lunardb2/#vmo -a
ame_cpus_per_pool = n/a
ame_maxfree_mem = n/a
ame_min_ucpool_size = n/a
ame_minfree_mem = n/a
ams_loan_policy = n/a
enhanced_affinity_affin_time = 1
enhanced_affinity_vmpool_limit = 10
esid_allocator = 1
force_relalias_lite = 0
kernel_heap_psize = 65536
lgpg_regions = 0
lgpg_size = 0
low_ps_handling = 1
maxfree = 1088
maxperm = 21912435
maxpin = 22729495
maxpin% = 90
memory_frames = 25165824
memplace_data = 0
memplace_mapped_file = 0
memplace_shm_anonymous = 0
memplace_shm_named = 0
memplace_stack = 0
memplace_text = 0
memplace_unmapped_file = 0
minfree = 960
minperm = 730410
minperm% = 3
nokilluid = 0
npskill = 131072
npswarn = 524288
num_locks_per_semid = 1
numpsblks = 16777216
pinnable_frames = 23496532
relalias_percentage = 0
scrub = 0
v_pinshm = 0
vmm_default_pspa = 0
vmm_klock_mode = 2
wlm_memlimit_nonpg = 1
lunardb2/#

lunardb2/#vmstat -v
25165824 memory pages
24347152 lruable pages
22149779 free pages
10 memory pools
1669293 pinned pages
90.0 maxpin percentage
3.0 minperm percentage
90.0 maxperm percentage
5.0 numperm percentage
1233770 file pages
0.0 compressed percentage
0 compressed pages
5.0 numclient percentage
90.0 maxclient percentage
1233770 client pages
0 remote pageouts scheduled
31 pending disk I/Os blocked with no pbuf
0 paging space I/Os blocked with no psbuf
2228 filesystem I/Os blocked with no fsbuf
4192 client filesystem I/Os blocked with no fsbuf
6 external pager filesystem I/Os blocked with no fsbuf
7.1 percentage of memory used for computational pages
lunardb2/#
lunardb2/#/usr/sbin/lsattr -E -l sys0 -a maxuproc
maxuproc 16384 Maximum number of PROCESSES allowed per user True
lunardb2/#

Edit /etc/rc.net so the settings persist across reboots:
if [ -f /usr/sbin/no ] ; then
/usr/sbin/no -o udp_sendspace=131072
/usr/sbin/no -o udp_recvspace=1310720
/usr/sbin/no -o tcp_sendspace=65536
/usr/sbin/no -o tcp_recvspace=65536
/usr/sbin/no -o rfc1323=1
/usr/sbin/no -o sb_max=4194304
/usr/sbin/no -o ipqmaxlen=512
fi
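The same parameters appear both in the /etc/rc.net block and in the interactive `no -p` commands below; generating the command lines from one table keeps the two in sync. A sketch (dry run, printing the commands), with the table mirroring the values above:

```shell
# One source of truth for the network tunables.
NO_PARAMS='udp_sendspace=131072
udp_recvspace=1310720
tcp_sendspace=65536
tcp_recvspace=65536
rfc1323=1
sb_max=4194304'

# Dry run: print the commands; pipe through `sh` on the host to apply.
printf '%s\n' "$NO_PARAMS" | while IFS= read -r p; do
  printf '/usr/sbin/no -p -o %s\n' "$p"
done
```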

-r applies the change at the next reboot (for reboot-only tunables); -p applies it immediately and makes it permanent:
/usr/sbin/no -r -o ipqmaxlen=512
/usr/sbin/no -p -o sb_max=4194304
/usr/sbin/no -p -o udp_sendspace=131072
/usr/sbin/no -p -o udp_recvspace=1310720
/usr/sbin/no -p -o tcp_sendspace=65536
/usr/sbin/no -p -o tcp_recvspace=65536
/usr/sbin/no -p -o rfc1323=1
lunardb2/#no -a|grep space
tcp_recvspace = 65536
tcp_sendspace = 65536
udp_recvspace = 1310720
udp_sendspace = 131072
lunardb2/#

Check the other kernel settings:
lunardb2/#lsattr -El sys0
SW_dist_intr false Enable SW distribution of interrupts True
autorestart true Automatically REBOOT OS after a crash True
boottype disk N/A False
capacity_inc 1.00 Processor capacity increment False
capped true Partition is capped False
chown_restrict true Chown Restriction Mode True
conslogin enable System Console Login False
cpuguard enable CPU Guard True
dedicated true Partition is dedicated False
enhanced_RBAC true Enhanced RBAC Mode True
ent_capacity 18.00 Entitled processor capacity False
frequency 6400000000 System Bus Frequency False
fullcore false Enable full CORE dump True
fwversion IBM,AM770_048 Firmware version and revision levels False
ghostdev 0 Recreate ODM devices on system change / modify PVID True
id_to_partition 0X800009C30A300001 Partition ID False
id_to_system 0X800009C30A300000 System ID False
iostat false Continuously maintain DISK I/O history True
keylock normal State of system keylock at boot time False
log_pg_dealloc true Log predictive memory page deallocation events True
max_capacity 24.00 Maximum potential processor capacity False
max_logname 9 Maximum login name length at boot time True
maxbuf 20 Maximum number of pages in block I/O BUFFER CACHE True
maxmbuf 0 Maximum Kbytes of real memory allowed for MBUFS True
maxpout 8193 HIGH water mark for pending write I/Os per file True
maxuproc 16384 Maximum number of PROCESSES allowed per user True (changed manually)
min_capacity 12.00 Minimum potential processor capacity False
minpout 4096 LOW water mark for pending write I/Os per file True
modelname IBM,9117-MMC Machine name False
ncargs 256 ARG/ENV list size in 4K byte blocks True (already larger than the documented requirement)
nfs4_acl_compat secure NFS4 ACL Compatibility Mode True
ngroups_allowed 128 Number of Groups Allowed True
os_uuid 15398297-d4a2-464a-986e-756d892be8e7 N/A True
pre430core false Use pre-430 style CORE dump True
pre520tune disable Pre-520 tuning compatibility mode True
realmem 100663296 Amount of usable physical memory in Kbytes False
rtasversion 1 Open Firmware RTAS version False
sed_config select Stack Execution Disable (SED) Mode True
systemid IBM,021063E77 Hardware system identifier False
variable_weight 0 Variable processor capacity weight False
lunardb2/#

Check the disk attributes:
lunardb2/#lsattr -El hdiskpower0
PR_key_value none Reserve Key. True
clr_q yes Clear Queue (RS/6000) True
location Location True
lun_id 0x1000000000000 LUN ID False
lun_reset_spt yes FC Forced Open LUN True
max_coalesce 0x100000 Maximum coalesce size True
max_transfer 0x100000 Maximum transfer size True
pvid none Physical volume identifier False
pvid_takeover yes Takeover PVIDs from hdisks True
q_err no Use QERR bit True
q_type simple Queue TYPE False
queue_depth 32 Queue DEPTH True
reassign_to 120 REASSIGN time out value True
reserve_policy no_reserve Reserve Policy used to reserve device on open. True
rw_timeout 40 READ/WRITE time out True
scsi_id 0x20400 SCSI ID False
start_timeout 180 START unit time out True
ww_name 0x5000097408413d24 World Wide Name False
lunardb2/#

lunardb2/#lsattr -El hdiskpower1
PR_key_value none Reserve Key. True
clr_q yes Clear Queue (RS/6000) True
location Location True
lun_id 0x2000000000000 LUN ID False
lun_reset_spt yes FC Forced Open LUN True
max_coalesce 0x100000 Maximum coalesce size True
max_transfer 0x100000 Maximum transfer size True
pvid none Physical volume identifier False
pvid_takeover yes Takeover PVIDs from hdisks True
q_err no Use QERR bit True
q_type simple Queue TYPE False
queue_depth 32 Queue DEPTH True
reassign_to 120 REASSIGN time out value True
reserve_policy no_reserve Reserve Policy used to reserve device on open. True
rw_timeout 40 READ/WRITE time out True
scsi_id 0x20400 SCSI ID False
start_timeout 180 START unit time out True
ww_name 0x5000097408413d24 World Wide Name False
lunardb2/#lsattr -El hdiskpower2
PR_key_value none Reserve Key. True
clr_q yes Clear Queue (RS/6000) True
location Location True
lun_id 0x3000000000000 LUN ID False
lun_reset_spt yes FC Forced Open LUN True
max_coalesce 0x100000 Maximum coalesce size True
max_transfer 0x100000 Maximum transfer size True
pvid none Physical volume identifier False
pvid_takeover yes Takeover PVIDs from hdisks True
q_err no Use QERR bit True
q_type simple Queue TYPE False
queue_depth 32 Queue DEPTH True
reassign_to 120 REASSIGN time out value True
reserve_policy no_reserve Reserve Policy used to reserve device on open. True
rw_timeout 40 READ/WRITE time out True
scsi_id 0x20400 SCSI ID False
start_timeout 180 START unit time out True
ww_name 0x5000097408413d24 World Wide Name False
lunardb2/#
Create the groups, users, and directories (simplified; on 11.2.0.4 and later, rootpre.sh expects a finer-grained group layout, e.g. asmadmin; see the documentation for details):
mkgroup -'A' id='1000' adms='root' oinstall
mkgroup -'A' id='1031' adms='root' dba
mkuser id='1100' pgrp='oinstall' groups='dba' home='/home/grid' grid
mkuser id='1101' pgrp='oinstall' groups='dba' home='/home/oracle' oracle
mkdir -p /u01/app/11.2.0/grid
chown -R grid:oinstall /u01
mkdir /u01/app/oracle
chown oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/

# mkdir -p /u01/app/11.2.0/grid
# chown grid:oinstall /u01/app/11.2.0/grid
# chmod -R 775 /u01/app/11.2.0/grid
# mkdir -p /u01/app/oracle
# chown -R oracle:oinstall /u01/app/oracle
# chmod -R 775 /u01/app/oracle
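The layout above can be rehearsed against a scratch root before touching the real /u01. The chown calls are omitted here because they need root and the grid/oracle users to exist:

```shell
# Build the OFA-style tree under a temporary root.
BASE=$(mktemp -d)
mkdir -p "$BASE/u01/app/11.2.0/grid"   # Grid Infrastructure home
mkdir -p "$BASE/u01/app/oracle"        # RDBMS Oracle base
chmod -R 775 "$BASE/u01"
ls -ld "$BASE/u01/app/11.2.0/grid" "$BASE/u01/app/oracle"
```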

Grant the grid and oracle users the required capabilities:
/usr/bin/lsuser -a capabilities grid
# /usr/bin/chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE grid

/usr/bin/lsuser -a capabilities oracle
# /usr/bin/chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE oracle
Change the ownership of the ASM candidate disks to grid:oinstall (on 11.2.0.4 and later, depending on the group layout chosen, grid:dba may be required instead):
lunardb2/#chown grid:oinstall /dev/rhdiskpower[0-9]
lunardb2/#ls -lrt /dev/rhdiskpower[0-9]
crw-rw---- 1 grid oinstall 43, 1 Oct 12 17:00 /dev/rhdiskpower1
crw-rw---- 1 grid oinstall 43, 0 Oct 12 17:00 /dev/rhdiskpower0
crw-rw---- 1 grid oinstall 43, 9 Oct 12 17:00 /dev/rhdiskpower9
crw-rw---- 1 grid oinstall 43, 8 Oct 12 17:00 /dev/rhdiskpower8
crw-rw---- 1 grid oinstall 43, 7 Oct 12 17:00 /dev/rhdiskpower7
crw-rw---- 1 grid oinstall 43, 6 Oct 12 17:00 /dev/rhdiskpower6
crw-rw---- 1 grid oinstall 43, 5 Oct 12 17:00 /dev/rhdiskpower5
crw-rw---- 1 grid oinstall 43, 4 Oct 12 17:00 /dev/rhdiskpower4
crw-rw---- 1 grid oinstall 43, 3 Oct 12 17:00 /dev/rhdiskpower3
crw-rw---- 1 grid oinstall 43, 2 Oct 12 17:00 /dev/rhdiskpower2
lunardb2/#
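The single chown above relies on shell globbing; an explicit loop makes the intent clearer and also pairs each chown with the 660 mode set further below. A dry run, printing the commands instead of running them as root:

```shell
# Emit chown/chmod pairs for the ten candidate ASM devices.
for i in 0 1 2 3 4 5 6 7 8 9; do
  printf 'chown grid:oinstall /dev/rhdiskpower%s\n' "$i"
  printf 'chmod 660 /dev/rhdiskpower%s\n' "$i"
done
```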

Check the disk sizes:
lunardb2/#
lunardb2/#bootinfo -s hdiskpower0
5118
lunardb2/#bootinfo -s hdiskpower1
5118
lunardb2/#bootinfo -s hdiskpower2
5118
lunardb2/#

bootinfo -s hdiskpower3
bootinfo -s hdiskpower4
bootinfo -s hdiskpower5
bootinfo -s hdiskpower6
bootinfo -s hdiskpower7
bootinfo -s hdiskpower8
bootinfo -s hdiskpower9

lunardb2/#bootinfo -s hdiskpower9
147457
lunardb2/#
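`bootinfo -s` reports the size in MB; summing the captured values gives the total raw capacity. Here, the three 5118 MB disks plus the 147457 MB disk from the output above:

```shell
# Sizes in MB as reported by bootinfo -s.
sizes='5118
5118
5118
147457'
total=$(printf '%s\n' "$sizes" | awk '{s += $1} END {print s}')
echo "$total MB"   # 162811 MB, roughly 159 GB
```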
The data is split into two disk groups: DATA_DG holds the data, and RECO_DG holds the archived logs and flashback data.
The voting files and OCR use SYS_DG on /dev/rhdiskpower[0-2], three disks of 5 GB each:
sys_dg: /dev/rhdiskpower[0-2]
data_dg: /dev/rhdiskpower[3-6]
reco_dg: /dev/rhdiskpower[7-9]
In practice ASMCA could not see the three 5 GB disks (rhdiskpower0-2), and per best practice there is no need for three disk groups anyway.
Outstanding work:
1. Search MOS for related bugs or limitations (the install guide only gives a sizing formula and recommends 1 GB or more; it says nothing about 5 GB disks not being recognized).
2. Try creating a disk group manually on these three small disks and see what error is raised.
3. Delete the three small disks, rebuild them as a single 15 GB disk, and add it to one of the other disk groups.

# chown grid:oinstall /dev/rhdiskpower0
# chmod 660 /dev/rhdiskpower0
# chown root:oinstall /dev/rora_ocr_raw_280m
# chmod 640 /dev/rora_ocr_raw_280m

Check the reserve attribute on each disk:
lsattr -E -l hdiskpower0| grep reserve
lsattr -E -l hdiskpower1| grep reserve
lsattr -E -l hdiskpower2| grep reserve
lsattr -E -l hdiskpower3| grep reserve
lsattr -E -l hdiskpower4| grep reserve
lsattr -E -l hdiskpower5| grep reserve
lsattr -E -l hdiskpower6| grep reserve
lsattr -E -l hdiskpower7| grep reserve
lsattr -E -l hdiskpower8| grep reserve
lsattr -E -l hdiskpower9| grep reserve

lunardb2/#lsattr -E -l hdiskpower0| grep reserve
reserve_policy no_reserve Reserve Policy used to reserve device on open. True
lunardb2/#lsattr -E -l hdiskpower1| grep reserve
reserve_policy no_reserve Reserve Policy used to reserve device on open. True
lunardb2/#lsattr -E -l hdiskpower2| grep reserve
reserve_policy no_reserve Reserve Policy used to reserve device on open. True
lunardb2/#lsattr -E -l hdiskpower3| grep reserve
reserve_policy no_reserve Reserve Policy used to reserve device on open. True
lunardb2/#lsattr -E -l hdiskpower4| grep reserve
reserve_policy no_reserve Reserve Policy used to reserve device on open. True
lunardb2/#lsattr -E -l hdiskpower5| grep reserve
reserve_policy no_reserve Reserve Policy used to reserve device on open. True
lunardb2/#lsattr -E -l hdiskpower6| grep reserve
reserve_policy no_reserve Reserve Policy used to reserve device on open. True
lunardb2/#lsattr -E -l hdiskpower7| grep reserve
reserve_policy no_reserve Reserve Policy used to reserve device on open. True
lunardb2/#lsattr -E -l hdiskpower8| grep reserve
reserve_policy no_reserve Reserve Policy used to reserve device on open. True
lunardb2/#lsattr -E -l hdiskpower9| grep reserve
reserve_policy no_reserve Reserve Policy used to reserve device on open. True
lunardb2/#
lunardb2/#

The response is either a reserve_lock setting, or a reserve_policy setting. If
the attribute is reserve_lock, then ensure that the setting is reserve_lock =
no. If the attribute is reserve_policy, then ensure that the setting is reserve_
policy = no_reserve.
If necessary, change the setting with the chdev command using the following
syntax, where n is the hdisk device number:
chdev -l hdiskn -a [ reserve_lock=no | reserve_policy=no_reserve ]
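That check-and-fix logic can be scripted: feed in `disk attribute value` triples (as captured from lsattr) and emit a chdev only for disks whose setting is wrong. `fix_reserve` is a hypothetical name, and the output is a dry run:

```shell
# Read "disk attribute value" lines; print a chdev for each bad setting.
fix_reserve() {
  awk '$2=="reserve_policy" && $3!="no_reserve" { printf "chdev -l %s -a reserve_policy=no_reserve\n", $1 }
       $2=="reserve_lock"   && $3!="no"         { printf "chdev -l %s -a reserve_lock=no\n", $1 }'
}

printf 'hdisk3 reserve_policy single_path\nhdisk4 reserve_policy no_reserve\n' | fix_reserve
# prints: chdev -l hdisk3 -a reserve_policy=no_reserve
```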
Set the user limits:
Edit /etc/security/limits:
default:
fsize = -1
core = 2097151
cpu = -1
data = -1
rss = -1
stack = -1
nofiles = -1

oracle:
fsize = -1
core = -1
cpu = -1
data = -1
rss = -1
stack = -1
nofiles = -1

grid:
fsize = -1
core = -1
cpu = -1
data = -1
rss = -1
stack = -1
nofiles = -1
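A stanza value can be read back programmatically to verify the edit took effect. `limits_value` is a hypothetical awk helper, run here against a sample stanza rather than the real /etc/security/limits:

```shell
# Print the value of <key> inside the <user>: stanza of a limits file.
limits_value() {  # usage: limits_value user key < file
  awk -v u="$1:" -v k="$2" '
    $1 == u         { in_s = 1; next }  # entered the wanted stanza
    /^[^ \t]/       { in_s = 0 }        # any unindented line ends it
    in_s && $1 == k { print $3 }'
}

sample='default:
  fsize = -1
  core = 2097151

oracle:
  fsize = -1
  core = -1'

printf '%s\n' "$sample" | limits_value oracle core   # prints -1
```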
Enable NTP:
Edit /etc/rc.tcpip; note that the line below is the modified version (the -x flag has been added):
start /usr/sbin/xntpd "$src_running" "-x"

Start the NTP service:
startsrc -s xntpd -a "-x"
In 11.2, configuring SSH requires the following setup:
By default, OUI searches for SSH public keys in the directory /usr/local/etc/, and
ssh-keygen binaries in /usr/local/bin. However, on AIX, SSH public keys
typically are located in the path /etc/ssh, and ssh-keygen binaries are located in
the path /usr/bin. To ensure that OUI can set up SSH, use the following command to
create soft links:
# ln -s /etc/ssh /usr/local/etc
# ln -s /usr/bin /usr/local/bin
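The links can be created idempotently (skipped if something is already there), rehearsed here under a scratch root so nothing outside it is touched:

```shell
ROOT=$(mktemp -d)                 # stand-in for / on the AIX host
mkdir -p "$ROOT/etc/ssh" "$ROOT/usr/bin" "$ROOT/usr/local"

# Only create each link if nothing exists at the target path yet.
[ -e "$ROOT/usr/local/etc" ] || ln -s "$ROOT/etc/ssh" "$ROOT/usr/local/etc"
[ -e "$ROOT/usr/local/bin" ] || ln -s "$ROOT/usr/bin" "$ROOT/usr/local/bin"
```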

Configure root's environment variables:
====================================================================
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export PATH=$ORACLE_HOME/OPatch:$ORACLE_HOME/bin:$PATH

if [ -t 0 ]; then
stty intr ^C
fi

export AIXTHREAD_SCOPE=S

set -o vi
alias ll="ls -lrt"

Configure the environment variables for the grid and oracle users:
asm1
=======================================
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export ORACLE_SID=+ASM1
export PATH=$ORACLE_HOME/OPatch:$ORACLE_HOME/bin:/bin:/usr/bin:/usr/sbin:$PATH

if [ -t 0 ]; then
stty intr ^C
fi

export PS1=`hostname`":"\$PWD">"

export AIXTHREAD_SCOPE=S

set -o vi
alias ll="ls -lrt"
alias ss="sqlplus / as sysasm"
asm2
=======================================
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export ORACLE_SID=+ASM2
export PATH=$ORACLE_HOME/OPatch:$ORACLE_HOME/bin:/bin:/usr/bin:/usr/sbin:$PATH

if [ -t 0 ]; then
stty intr ^C
fi

export AIXTHREAD_SCOPE=S

export PS1=`hostname`":"\$PWD">"

set -o vi
alias ll="ls -lrt"
alias ss="sqlplus / as sysasm"
qgdb1
=======================================
export ORACLE_BASE=/u01/app
export ORACLE_HOME=/u01/app/oracle
export ORACLE_SID=qgdb1
export PATH=$ORACLE_HOME/OPatch:$ORACLE_HOME/bin:/bin:/usr/bin:/usr/sbin:$PATH

if [ -t 0 ]; then
stty intr ^C
fi

export AIXTHREAD_SCOPE=S

export PS1=`hostname`":"\$PWD">"
set -o vi
alias ll="ls -lrt"
alias ss="sqlplus / as sysdba"
qgdb2
=======================================
export ORACLE_BASE=/u01/app
export ORACLE_HOME=/u01/app/oracle
export ORACLE_SID=qgdb2
export PATH=$ORACLE_HOME/OPatch:$ORACLE_HOME/bin:/bin:/usr/bin:/usr/sbin:$PATH

if [ -t 0 ]; then
stty intr ^C
fi

export AIXTHREAD_SCOPE=S

export PS1=`hostname`":"\$PWD">"
set -o vi
alias ll="ls -lrt"
alias ss="sqlplus / as sysdba"

bjdb1
=======================================
export ORACLE_BASE=/u01/app
export ORACLE_HOME=/u01/app/oracle
export ORACLE_SID=bjdb1
export PATH=$ORACLE_HOME/OPatch:$ORACLE_HOME/bin:/bin:/usr/bin:/usr/sbin:$PATH

if [ -t 0 ]; then
stty intr ^C
fi

export AIXTHREAD_SCOPE=S

export PS1=`hostname`":"\$PWD">"
set -o vi
alias ll="ls -lrt"
alias ss="sqlplus / as sysdba"
bjdb2
=======================================
export ORACLE_BASE=/u01/app
export ORACLE_HOME=/u01/app/oracle
export ORACLE_SID=bjdb2
export PATH=$ORACLE_HOME/OPatch:$ORACLE_HOME/bin:/bin:/usr/bin:/usr/sbin:$PATH

if [ -t 0 ]; then
stty intr ^C
fi

export AIXTHREAD_SCOPE=S

export PS1=`hostname`":"\$PWD">"
set -o vi
alias ll="ls -lrt"
alias ss="sqlplus / as sysdba"
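The six profiles above differ only in ORACLE_BASE, ORACLE_HOME, the SID, and the sqlplus role; a generator (hypothetical `gen_profile`) keeps them consistent and avoids the copy-paste quoting mistakes that hand-editing invites:

```shell
# Print a profile fragment for one instance.
gen_profile() {  # usage: gen_profile base home sid role(sysasm|sysdba)
cat <<EOF
export ORACLE_BASE=$1
export ORACLE_HOME=$2
export ORACLE_SID=$3
export PATH=\$ORACLE_HOME/OPatch:\$ORACLE_HOME/bin:/bin:/usr/bin:/usr/sbin:\$PATH
export AIXTHREAD_SCOPE=S
set -o vi
alias ll="ls -lrt"
alias ss="sqlplus / as $4"
EOF
}

gen_profile /u01/app/grid /u01/app/11.2.0/grid +ASM1 sysasm
```

Redirect the output into the user's profile on each node, changing only the SID argument per node.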

Before starting the installer, run rootpre.sh as root on both nodes:
# ./rootpre.sh
Note that the 11.2.0.4 checks are much stricter than in earlier releases: the SSH cipher algorithms are also checked, and the group layout, especially for the grid user, must be more fine-grained. There are two ways to proceed:
1. Fix everything the 11.2.0.4 rootpre.sh reports in one pass, then run the installer.
2. Run the 11.2.0.1 rootpre.sh on every node of the cluster instead.

lunardb1:/tmp/oracle/database#./rootpre.sh
./rootpre.sh output will be logged in /tmp/rootpre.out_13-10-15.15:58:08
Saving the original files in /etc/ora_save_13-10-15.15:58:08....
Copying new kernel extension to /etc....
Loading the kernel extension from /etc

Oracle Kernel Extension Loader for AIX
Copyright (c) 1998,1999 Oracle Corporation
Kernel Extension /etc/pw-syscall.64bit_kernel already loaded, unloading it
Unconfigured the kernel extension successfully
Unloaded the kernel extension successfully
Successfully loaded /etc/pw-syscall.64bit_kernel with kmid: 0x50ef5000
Successfully configured /etc/pw-syscall.64bit_kernel with kmid: 0x50ef5000
The kernel extension was successfuly loaded.

Checking if group services should be configured….
Nothing to configure.
lunardb1:/tmp/oracle/database#
*******************************************************************************
Run $ORACLE_BASE/oraInventory/orainstRoot.sh on the first node and then on the second, to fix the group ownership and permissions of the inventory:
lunardb1/#/u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
lunardb1/#

lunardb1/#
Run root.sh on the first node:
lunardb1/#/u01/app/11.2.0/grid/root.sh
Running Oracle 11g root.sh script…

The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin …
Copying oraenv to /usr/local/bin …
Copying coraenv to /usr/local/bin …
Creating /etc/oratab file…
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2013-10-15 15:30:14: Parsing the host name
2013-10-15 15:30:14: Checking for super user privileges
2013-10-15 15:30:14: User has super user privileges
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User grid has the required capabilities to run CSSD in realtime mode
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'system'..
Operation successful.
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-2672: Attempting to start 'ora.gipcd' on 'lunardb1'
CRS-2672: Attempting to start 'ora.mdnsd' on 'lunardb1'
CRS-2676: Start of 'ora.gipcd' on 'lunardb1' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'lunardb1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'lunardb1'
CRS-2676: Start of 'ora.gpnpd' on 'lunardb1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'lunardb1'
CRS-2676: Start of 'ora.cssdmonitor' on 'lunardb1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'lunardb1'
CRS-2672: Attempting to start 'ora.diskmon' on 'lunardb1'
CRS-2676: Start of 'ora.diskmon' on 'lunardb1' succeeded
CRS-2676: Start of 'ora.cssd' on 'lunardb1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'lunardb1'
CRS-2676: Start of 'ora.ctssd' on 'lunardb1' succeeded

ASM created and started successfully.

DiskGroup DATA_DG created successfully.

clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'system'..
Operation successful.
CRS-2672: Attempting to start 'ora.crsd' on 'lunardb1'
CRS-2676: Start of 'ora.crsd' on 'lunardb1' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk 94838a7fc3ec4f3abf5b67c086006d16.
Successfully replaced voting disk group with +DATA_DG.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 94838a7fc3ec4f3abf5b67c086006d16 (/dev/rhdiskpower3) [DATA_DG]
Located 1 voting disk(s).
CRS-2673: Attempting to stop 'ora.crsd' on 'lunardb1'
CRS-2677: Stop of 'ora.crsd' on 'lunardb1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'lunardb1'
CRS-2677: Stop of 'ora.asm' on 'lunardb1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'lunardb1'
CRS-2677: Stop of 'ora.ctssd' on 'lunardb1' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'lunardb1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'lunardb1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'lunardb1'
CRS-2677: Stop of 'ora.cssd' on 'lunardb1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'lunardb1'
CRS-2677: Stop of 'ora.gpnpd' on 'lunardb1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'lunardb1'
CRS-2677: Stop of 'ora.gipcd' on 'lunardb1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'lunardb1'
CRS-2677: Stop of 'ora.mdnsd' on 'lunardb1' succeeded
CRS-2672: Attempting to start 'ora.mdnsd' on 'lunardb1'
CRS-2676: Start of 'ora.mdnsd' on 'lunardb1' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'lunardb1'
CRS-2676: Start of 'ora.gipcd' on 'lunardb1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'lunardb1'
CRS-2676: Start of 'ora.gpnpd' on 'lunardb1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'lunardb1'
CRS-2676: Start of 'ora.cssdmonitor' on 'lunardb1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'lunardb1'
CRS-2672: Attempting to start 'ora.diskmon' on 'lunardb1'
CRS-2676: Start of 'ora.diskmon' on 'lunardb1' succeeded
CRS-2676: Start of 'ora.cssd' on 'lunardb1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'lunardb1'
CRS-2676: Start of 'ora.ctssd' on 'lunardb1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'lunardb1'
CRS-2676: Start of 'ora.asm' on 'lunardb1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'lunardb1'
CRS-2676: Start of 'ora.crsd' on 'lunardb1' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'lunardb1'
CRS-2676: Start of 'ora.evmd' on 'lunardb1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'lunardb1'
CRS-2676: Start of 'ora.asm' on 'lunardb1' succeeded
CRS-2672: Attempting to start 'ora.DATA_DG.dg' on 'lunardb1'
CRS-2676: Start of 'ora.DATA_DG.dg' on 'lunardb1' succeeded

lunardb1 2013/10/15 15:37:24 /u01/app/11.2.0/grid/cdata/lunardb1/backup_20131015_153724.olr
Configure Oracle Grid Infrastructure for a Cluster … succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer…

Checking swap space: must be greater than 500 MB. Actual 65536 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
lunardb1/#

lunardb1/#ps -ef|grep d.bin
root 4456530 1 0 15:32:27 - 0:04 /u01/app/11.2.0/grid/bin/ohasd.bin reboot
grid 4522138 1 0 15:35:22 - 0:00 /u01/app/11.2.0/grid/bin/gpnpd.bin
root 5046320 1 0 15:36:36 - 0:00 /u01/app/11.2.0/grid/bin/oclskd.bin
root 5242960 1 0 15:37:04 - 0:01 /u01/app/11.2.0/grid/bin/orarootagent.bin
root 2228696 1 0 15:35:26 - 0:00 /bin/sh /u01/app/11.2.0/grid/bin/ocssd
grid 3080542 1 0 15:35:18 - 0:00 /u01/app/11.2.0/grid/bin/oraagent.bin
root 3146224 1 0 15:35:26 - 0:00 /u01/app/11.2.0/grid/bin/cssdagent
grid 3211576 1 0 15:35:18 - 0:00 /u01/app/11.2.0/grid/bin/mdnsd.bin
grid 3408338 1 0 15:35:20 - 0:00 /u01/app/11.2.0/grid/bin/gipcd.bin
grid 3604800 2228696 1 15:35:26 - 0:01 /u01/app/11.2.0/grid/bin/ocssd.bin
grid 3998018 1 0 15:37:21 - 0:00 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER_SCAN1 -inherit
root 2556504 1 0 15:36:34 - 0:01 /u01/app/11.2.0/grid/bin/crsd.bin reboot
root 3015312 1 0 15:35:24 - 0:00 /u01/app/11.2.0/grid/bin/cssdmonitor
grid 3670680 1 0 15:35:26 - 0:00 /u01/app/11.2.0/grid/bin/diskmon.bin -d -f
grid 3998212 1 0 15:36:56 - 0:00 /u01/app/11.2.0/grid/bin/oraagent.bin
grid 4129292 1 0 15:36:36 - 0:00 /u01/app/11.2.0/grid/bin/evmd.bin
root 4325942 4128828 0 15:40:11 pts/0 0:00 grep d.bin
root 3605266 1 0 15:36:13 - 0:00 /u01/app/11.2.0/grid/bin/octssd.bin
grid 3802090 4129292 0 15:36:38 - 0:00 /u01/app/11.2.0/grid/bin/evmlogger.bin -o /u01/app/11.2.0/grid/evm/log/evmlogger.info -l /u01/app/11.2.0/grid/evm/log/evmlogger.log
root 3867418 1 0 15:35:26 - 0:00 /u01/app/11.2.0/grid/bin/orarootagent.bin
grid 4260618 1 0 15:36:22 - 0:00 /u01/app/11.2.0/grid/bin/oclskd.bin
lunardb1/#
Run root.sh on the second node:
lunardb2/#/u01/app/11.2.0/grid/root.sh
Running Oracle 11g root.sh script…

The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin …
Copying oraenv to /usr/local/bin …
Copying coraenv to /usr/local/bin …
Creating /etc/oratab file…
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2013-10-15 15:39:01: Parsing the host name
2013-10-15 15:39:01: Checking for super user privileges
2013-10-15 15:39:01: User has super user privileges
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User grid has the required capabilities to run CSSD in realtime mode
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'system'..
Operation successful.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node lunardb1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
CRS-2672: Attempting to start 'ora.mdnsd' on 'lunardb2'
CRS-2676: Start of 'ora.mdnsd' on 'lunardb2' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'lunardb2'
CRS-2676: Start of 'ora.gipcd' on 'lunardb2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'lunardb2'
CRS-2676: Start of 'ora.gpnpd' on 'lunardb2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'lunardb2'
CRS-2676: Start of 'ora.cssdmonitor' on 'lunardb2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'lunardb2'
CRS-2672: Attempting to start 'ora.diskmon' on 'lunardb2'
CRS-2676: Start of 'ora.diskmon' on 'lunardb2' succeeded
CRS-2676: Start of 'ora.cssd' on 'lunardb2' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'lunardb2'
CRS-2676: Start of 'ora.ctssd' on 'lunardb2' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'lunardb2'
CRS-2676: Start of 'ora.asm' on 'lunardb2' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'lunardb2'
CRS-2676: Start of 'ora.crsd' on 'lunardb2' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'lunardb2'
CRS-2676: Start of 'ora.evmd' on 'lunardb2' succeeded

lunardb2 2013/10/15 15:42:03 /u01/app/11.2.0/grid/cdata/lunardb2/backup_20131015_154203.olr
Configure Oracle Grid Infrastructure for a Cluster … succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer…

Checking swap space: must be greater than 500 MB. Actual 65536 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
lunardb2/#
lunardb1:/#crsctl status res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA_DG.dg
ONLINE ONLINE lunardb1
ONLINE ONLINE lunardb2
ora.asm
ONLINE ONLINE lunardb1 Started
ONLINE ONLINE lunardb2 Started
ora.eons
ONLINE ONLINE lunardb1
ONLINE ONLINE lunardb2
ora.gsd
OFFLINE OFFLINE lunardb1
OFFLINE OFFLINE lunardb2
ora.net1.network
ONLINE ONLINE lunardb1
ONLINE ONLINE lunardb2
ora.ons
ONLINE ONLINE lunardb1
ONLINE ONLINE lunardb2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE lunardb1
ora.oc4j
1 OFFLINE OFFLINE
ora.lunardb1.vip
1 ONLINE ONLINE lunardb1
ora.lunardb2.vip
1 ONLINE ONLINE lunardb2
ora.scan1.vip
1 ONLINE ONLINE lunardb1
lunardb1:/#
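A quick sanity check on `crsctl status res -t` output is to flag any line where TARGET and STATE disagree (ora.gsd, and in 11.2.0.1 also ora.oc4j, being OFFLINE/OFFLINE is expected). Shown against a canned sample rather than live crsctl output:

```shell
# Sample state lines in "TARGET STATE SERVER" order.
sample='ONLINE ONLINE lunardb1
ONLINE ONLINE lunardb2
OFFLINE OFFLINE lunardb1
ONLINE OFFLINE lunardb2'

# Print server and transition for every mismatched line.
printf '%s\n' "$sample" | awk '$1 != $2 { print $3, $1 "->" $2 }'
# prints: lunardb2 ONLINE->OFFLINE
```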

During installation, the part that really tests your patience is the remote file copy: grid takes roughly 15-30 minutes, and oracle roughly 30-50 minutes.

While installing grid, the remote file copy goes over the public network:
+-topas_nmon--q=Quit-------------Host=lunardb2------Refresh=2 secs---13:55.43------------------------------------------------------------+
| Network -------------------------------------------------------------------------------------------------------------------------------|
|I/F Name Recv=KB/s Trans=KB/s packin packout insize outsize Peak->Recv TransKB |
| en4 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 |
| en10 64651.8 901.2 46026.3 15377.6 1438.4 60.0 64651.8 901.2 |
| en11 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 |
| lo0 0.1 0.1 1.0 1.0 52.0 52.0 0.3 0.3 |
| Total 63.1 0.9 in Mbytes/second Overflow=0 |
|I/F Name MTU ierror oerror collision Mbits/s Description |
| en4 1500 0 0 0 1024 Standard Ethernet Network Interface |
| en10 1500 0 0 0 1024 Standard Ethernet Network Interface |
| en11 1500 0 0 0 1024 Standard Ethernet Network Interface |
| lo0 16896 0 0 0 0 Loopback Network Interface |
|----------------------------------------------------------------------------------------------------------------------------------------
While installing oracle, the remote file copy uses both networks:
+-topas_nmon--d=Disk-Graph-------Host=lunardb2------Refresh=2 secs---14:58.48------------------------------------------------------------+
| Network -------------------------------------------------------------------------------------------------------------------------------|
|I/F Name Recv=KB/s Trans=KB/s packin packout insize outsize Peak->Recv TransKB |
| en4 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 |
| en10 0.7 0.9 10.5 10.5 69.1 89.3 1999.3 10872.3 |
| en11 68.2 99.9 65.0 88.5 1073.8 1155.7 95969.5 138985.1 |
| lo0 4.8 4.8 3.0 3.0 1627.3 1627.3 6760.4 6760.4 |
| Total 0.1 0.1 in Mbytes/second Overflow=0 |
|I/F Name MTU ierror oerror collision Mbits/s Description |
| en4 1500 0 0 0 1024 Standard Ethernet Network Interface |
| en10 1500 0 0 0 1024 Standard Ethernet Network Interface |
| en11 1500 0 0 0 1024 Standard Ethernet Network Interface |
| lo0 16896 0 0 0 0 Loopback Network Interface |
|----------------------------------------------------------------------------------------------------------------------------------------
+-topas_nmon--l=LongTerm-CPU-----Host=lunardb2------Refresh=2 secs---15:23.34------------------------------------------------------------+
| Network -------------------------------------------------------------------------------------------------------------------------------|
|I/F Name Recv=KB/s Trans=KB/s packin packout insize outsize Peak->Recv TransKB |
| en4 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.2 |
| en10 18291.5 257.9 13041.3 4391.8 1436.2 60.1 103965.8 10872.3 |
| en11 2.0 0.8 5.0 4.0 417.1 198.2 95969.5 138985.1 |
| lo0 0.2 0.2 3.0 3.0 52.7 52.7 6760.4 6760.4 |
| Total 17.9 0.3 in Mbytes/second Overflow=0 |
|I/F Name MTU ierror oerror collision Mbits/s Description |
| en4 1500 0 0 0 1024 Standard Ethernet Network Interface |
| en10 1500 0 0 0 1024 Standard Ethernet Network Interface |
| en11 1500 0 0 0 1024 Standard Ethernet Network Interface |
| lo0 16896 0 0 0 0 Loopback Network Interface |
|----------------------------------------------------------------------------------------------------------------------------------------
When preparing the environment, the 11.2.0.4 checks are strict; the sound approach is to standardize the environment and clear the findings one by one (mostly group membership, permissions, and various security limits).
If the environment is truly unusual, installing 11.2.0.1 instead can serve as a workaround.

lunardb1:/tmp/oracle#/u01/app/oracle/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
lunardb1:/tmp/oracle#

lunardb2:/u01#du -ks
18895048 .
lunardb2:/u01#/u01/app/oracle/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
lunardb2:/u01#
lunardb1:/tmp/oracle#crsctl status res -t
——————————————————————————–
NAME TARGET STATE SERVER STATE_DETAILS
——————————————————————————–
Local Resources
——————————————————————————–
ora.DATA_DG.dg
ONLINE ONLINE lunardb1
ONLINE ONLINE lunardb2
ora.LISTENER.lsnr
ONLINE ONLINE lunardb1
ONLINE ONLINE lunardb2
ora.asm
ONLINE ONLINE lunardb1 Started
ONLINE ONLINE lunardb2 Started
ora.gsd
OFFLINE OFFLINE lunardb1
OFFLINE OFFLINE lunardb2
ora.net1.network
ONLINE ONLINE lunardb1
ONLINE ONLINE lunardb2
ora.ons
ONLINE ONLINE lunardb1
ONLINE ONLINE lunardb2
ora.registry.acfs
ONLINE ONLINE lunardb1
ONLINE ONLINE lunardb2
——————————————————————————–
Cluster Resources
——————————————————————————–
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE lunardb1
ora.cvu
1 ONLINE ONLINE lunardb1
ora.oc4j
1 ONLINE ONLINE lunardb1
ora.lunardb1.vip
1 ONLINE ONLINE lunardb1
ora.lunardb2.vip
1 ONLINE ONLINE lunardb2
ora.scan1.vip
1 ONLINE ONLINE lunardb1
lunardb1:/tmp/oracle#
lunardb1/home/grid>lsnrctl status

LSNRCTL for IBM/AIX RISC System/6000: Version 11.2.0.4.0 - Production on 16-OCT-2013 16:14:57

Copyright (c) 1991, 2013, Oracle. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
————————
Alias LISTENER
Version TNSLSNR for IBM/AIX RISC System/6000: Version 11.2.0.4.0 - Production
Start Date 16-OCT-2013 14:28:11
Uptime 0 days 1 hr. 46 min. 48 sec
Trace Level off
Security ON: Local OS Authentication
SNMP ON
Listener Parameter File /u01/app/11.2.0/grid/network/admin/listener.ora
Listener Log File /u01/app/grid/diag/tnslsnr/lunardb1/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.1.12.110)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.1.12.112)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "qgdb" has 1 instance(s).
Instance "qgdb1", status READY, has 1 handler(s) for this service...
Service "qgdbXDB" has 1 instance(s).
Instance "qgdb1", status READY, has 1 handler(s) for this service...
The command completed successfully
lunardb1/home/grid>
lunardb1/home/grid>crsctl status res -t
——————————————————————————–
NAME TARGET STATE SERVER STATE_DETAILS
——————————————————————————–
Local Resources
——————————————————————————–
ora.DATA_DG.dg
ONLINE ONLINE lunardb1
ONLINE ONLINE lunardb2
ora.LISTENER.lsnr
ONLINE ONLINE lunardb1
ONLINE ONLINE lunardb2
ora.RECO_DG.dg
ONLINE ONLINE lunardb1
ONLINE ONLINE lunardb2
ora.asm
ONLINE ONLINE lunardb1 Started
ONLINE ONLINE lunardb2 Started
ora.gsd
OFFLINE OFFLINE lunardb1
OFFLINE OFFLINE lunardb2
ora.net1.network
ONLINE ONLINE lunardb1
ONLINE ONLINE lunardb2
ora.ons
ONLINE ONLINE lunardb1
ONLINE ONLINE lunardb2
ora.registry.acfs
ONLINE ONLINE lunardb1
ONLINE ONLINE lunardb2
——————————————————————————–
Cluster Resources
——————————————————————————–
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE lunardb1
ora.cvu
1 ONLINE ONLINE lunardb1
ora.oc4j
1 ONLINE ONLINE lunardb1
ora.lunardb1.vip
1 ONLINE ONLINE lunardb1
ora.lunardb2.vip
1 ONLINE ONLINE lunardb2
ora.qgdb.db
1 OFFLINE OFFLINE
2 OFFLINE OFFLINE
ora.scan1.vip
1 ONLINE ONLINE lunardb1
lunardb1/home/grid>
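Note that ora.qgdb.db shows OFFLINE on both nodes here, while a later listing shows it ONLINE and Open; it was presumably started in between (the usual command would be `srvctl start database -d qgdb`, though the transcript does not show it). When scanning long `crsctl status res -t` listings, a small awk filter helps spot offline state lines. This is a generic text filter run against a captured sample of the output above, not live cluster state:

```shell
# Generic filter over a captured 'crsctl status res -t' listing; the heredoc
# is a trimmed sample of the transcript above, not a live query.
cat <<'EOF' > /tmp/crs_sample.txt
ora.qgdb.db
      1        OFFLINE OFFLINE
      2        OFFLINE OFFLINE
ora.scan1.vip
      1        ONLINE  ONLINE       lunardb1
EOF
awk '/OFFLINE/ {off++} END {printf "%d offline resource lines\n", off+0}' /tmp/crs_sample.txt
```

Against the sample it reports `2 offline resource lines`.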
lunardb2/home/grid>lsnrctl status

LSNRCTL for IBM/AIX RISC System/6000: Version 11.2.0.4.0 - Production on 16-OCT-2013 16:15:58

Copyright (c) 1991, 2013, Oracle. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
————————
Alias LISTENER
Version TNSLSNR for IBM/AIX RISC System/6000: Version 11.2.0.4.0 - Production
Start Date 16-OCT-2013 14:28:10
Uptime 0 days 1 hr. 47 min. 49 sec
Trace Level off
Security ON: Local OS Authentication
SNMP ON
Listener Parameter File /u01/app/11.2.0/grid/network/admin/listener.ora
Listener Log File /u01/app/grid/diag/tnslsnr/lunardb2/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.1.12.111)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.1.12.113)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM2", status READY, has 1 handler(s) for this service...
The command completed successfully
lunardb2/home/grid>
lunardb1:/#crsctl status res -t
——————————————————————————–
NAME TARGET STATE SERVER STATE_DETAILS
——————————————————————————–
Local Resources
——————————————————————————–
ora.DATA_DG.dg
ONLINE ONLINE lunardb1
ONLINE ONLINE lunardb2
ora.LISTENER.lsnr
ONLINE ONLINE lunardb1
ONLINE ONLINE lunardb2
ora.RECO_DG.dg
ONLINE ONLINE lunardb1
ONLINE ONLINE lunardb2
ora.asm
ONLINE ONLINE lunardb1 Started
ONLINE ONLINE lunardb2 Started
ora.gsd
OFFLINE OFFLINE lunardb1
OFFLINE OFFLINE lunardb2
ora.net1.network
ONLINE ONLINE lunardb1
ONLINE ONLINE lunardb2
ora.ons
ONLINE ONLINE lunardb1
ONLINE ONLINE lunardb2
ora.registry.acfs
ONLINE ONLINE lunardb1
ONLINE ONLINE lunardb2
——————————————————————————–
Cluster Resources
——————————————————————————–
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE lunardb1
ora.cvu
1 ONLINE ONLINE lunardb1
ora.oc4j
1 ONLINE ONLINE lunardb1
ora.lunardb1.vip
1 ONLINE ONLINE lunardb1
ora.lunardb2.vip
1 ONLINE ONLINE lunardb2
ora.qgdb.db
1 ONLINE ONLINE lunardb1 Open
2 ONLINE ONLINE lunardb2 Open
ora.scan1.vip
1 ONLINE ONLINE lunardb1
lunardb1:/#
lunardb1/home/grid>lsnrctl

LSNRCTL for IBM/AIX RISC System/6000: Version 11.2.0.4.0 - Production on 17-OCT-2013 10:09:08

Copyright (c) 1991, 2013, Oracle. All rights reserved.

Welcome to LSNRCTL, type "help" for information.

LSNRCTL> status
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
————————
Alias LISTENER
Version TNSLSNR for IBM/AIX RISC System/6000: Version 11.2.0.4.0 - Production
Start Date 16-OCT-2013 14:28:11
Uptime 0 days 19 hr. 41 min. 1 sec
Trace Level off
Security ON: Local OS Authentication
SNMP ON
Listener Parameter File /u01/app/11.2.0/grid/network/admin/listener.ora
Listener Log File /u01/app/grid/diag/tnslsnr/lunardb1/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.1.12.110)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.1.12.112)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "qgdb" has 1 instance(s).
Instance "qgdb1", status READY, has 1 handler(s) for this service...
Service "qgdbXDB" has 1 instance(s).
Instance "qgdb1", status READY, has 1 handler(s) for this service...
The command completed successfully
LSNRCTL> service
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:1304 refused:0 state:ready
LOCAL SERVER
Service "qgdb" has 1 instance(s).
Instance "qgdb1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:1240 refused:0 state:ready
LOCAL SERVER
Service "qgdbXDB" has 1 instance(s).
Instance "qgdb1", status READY, has 1 handler(s) for this service...
Handler(s):
"D000" established:0 refused:0 current:0 max:1022 state:ready
DISPATCHER <machine: lunardb1, pid: 6029852>
(ADDRESS=(PROTOCOL=tcp)(HOST=lunardb1)(PORT=13791))
The command completed successfully
LSNRCTL>
gisqgdb1:/#su - oracle
lunardb1/home/oracle>cd $ORACLE_HOME
lunardb1/u01/app/oracle>cd network
lunardb1/u01/app/oracle/network>cd admin
lunardb1/u01/app/oracle/network/admin>ls
samples shrept.lst tnsnames.ora
lunardb1/u01/app/oracle/network/admin>cat tn*
# tnsnames.ora Network Configuration File: /u01/app/oracle/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.

QGDB =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = lunarqg-scan)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = qgdb)
)
)

lunardb1/u01/app/oracle/network/admin>sqlplus system/oracle@qgdb

SQL*Plus: Release 11.2.0.4.0 Production on Thu Oct 17 10:31:23 2013

Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
lunardb1/u01/app/oracle/network/admin>

Common problems:
SSH problem 1:
====================
INFO: Unable to find /usr/local/bin/ssh-keygen on node: pgisqgdb1
INFO: Home Dir /home/grid
INFO: Lock Location : /home/grid/.ssh/lock
INFO: Releasing Lock…
INFO: Lock Released
Fix: create directory symlinks. The documentation describes it as follows:
By default, OUI searches for SSH public keys in the directory /usr/local/etc/, and
ssh-keygen binaries in /usr/local/bin. However, on AIX, SSH public keys
typically are located in the path /etc/ssh, and ssh-keygen binaries are located in
the path /usr/bin. To ensure that OUI can set up SSH, use the following command to
create soft links:
# ln -s /etc/ssh /usr/local/etc
# ln -s /usr/bin /usr/local/bin
pgisqgdb2/#which ssh
/usr/bin/ssh
pgisqgdb2/#which scp
/usr/bin/scp
pgisqgdb2/#ls /usr/bin/ssh-keygen
/usr/bin/ssh-keygen
pgisqgdb2/#

pgisqgdb1/#mkdir -p /usr/local
pgisqgdb1/#ln -s /etc/ssh /usr/local/etc
pgisqgdb1/#ln -s /usr/bin /usr/local/bin
pgisqgdb1/#cd /usr/local
pgisqgdb1/usr/local#ls
bin etc
pgisqgdb1/usr/local#
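The two `ln -s` commands above fail if run a second time (the link already exists). A slightly more careful, idempotent variant is sketched below; it defaults to a scratch prefix purely so it can be dry-run safely. Set PREFIX=/usr/local and run as root to apply the real fix.

```shell
# Idempotent version of the OUI ssh-path fix from the text above.
# PREFIX=/usr/local is the real target; the scratch default allows a dry run.
PREFIX=${PREFIX:-/tmp/oui_ssh_fix_demo}
mkdir -p "$PREFIX"
[ -L "$PREFIX/etc" ] || ln -s /etc/ssh "$PREFIX/etc"
[ -L "$PREFIX/bin" ] || ln -s /usr/bin "$PREFIX/bin"
ls -l "$PREFIX"
```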
SSH problem 2:
==========================================
INFO: Lock Retry Count 120
INFO: Lock Sleep Time 30000
INFO: Home Dir /home/grid
INFO: Lock Location : /home/grid/.ssh/lock
INFO: Trying to get Lock ….
INFO: Lock Acquired
INFO: LIBRARY_LOC = /tmp/OraInstall2013-10-15_02-07-32PM/oui/lib/aix
INFO: Validating remote binaries..
INFO: [pgisqgdb1]
INFO: /bin/bash -c '/bin/true'
INFO: Exit-status: 0
INFO: Error:
INFO:
INFO:
INFO: [pgisqgdb1]
INFO: /bin/bash -c 'if [[ -f /etc/ssh/ssh_host_rsa_key.pub ]] ; then exit 0; else exit 1; fi'
INFO: Exit-status: 0
INFO: Error:
INFO:
INFO:
INFO: [pgisqgdb1]
INFO: /bin/bash -c 'if [[ -f /usr/local/bin/ssh-keygen ]] ; then exit 0; else exit 1; fi'
INFO: Exit-status: 0
INFO: Error:
INFO:
INFO:
INFO: [pgisqgdb2]
INFO: /bin/bash -c '/bin/true'
INFO: Exit-status: 127
INFO: Error: ksh: /bin/bash: not found.

INFO:
INFO:
INFO: Existence check failed for /bin/bash on node: pgisqgdb2
INFO: Home Dir /home/grid
INFO: Lock Location : /home/grid/.ssh/lock
INFO: Releasing Lock…
INFO: Lock Released

Fix:
Install the bash package (on AIX, typically from the AIX Toolbox for Linux Applications).
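After installing bash, the probe that OUI performs remotely can be repeated by hand. This sketch mirrors the `/bin/bash -c '/bin/true'` call shown in the log above (run it on each node, or over ssh):

```shell
# Reproduce OUI's remote probe: it runs /bin/bash -c /bin/true and expects
# exit status 0. On AIX, bash is typically installed from the AIX Toolbox.
if [ -x /bin/bash ] && /bin/bash -c /bin/true; then
    echo "bash probe OK"
else
    echo "bash probe FAILED - install the bash package"
fi
```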
With 11.2.0.4 you may also see some rather pointless warnings. For example, if your UDP buffer settings are larger than what the documentation asks for (the guide says 65535 and you set 128K), you still get a warning; Oracle's check script appears to be at fault here (11.2.0.1 does not have this problem). These warnings can be ignored.
