High CPU Usage on CentOS 7.2 (RHEL 7.2): %system Pegged High

After booting, node 1's CPU usage was very high:

[root@lunar1 ~]# sar -u 1 10
Linux 3.10.0-327.el7.x86_64 (lunar1.oracle.com)         03/22/2016      _x86_64_        (2 CPU)

06:16:57 PM     CPU     %user     %nice   %system   %iowait    %steal     %idle
06:16:58 PM     all      0.55      0.00     78.14      0.00      0.00     21.31
06:16:59 PM     all      0.00      0.00     73.91      0.00      0.00     26.09
06:17:00 PM     all      0.52      0.00     81.68      0.00      0.00     17.80
06:17:01 PM     all      0.00      0.00     82.81      0.00      0.00     17.19
06:17:02 PM     all      0.00      0.00     84.13      0.00      0.00     15.87
06:17:03 PM     all      0.55      0.00     70.72      0.00      0.00     28.73
^C

06:17:03 PM     all      0.00      0.00     57.14      0.00      0.00     42.86
Average:        all      0.26      0.00     77.88      0.00      0.00     21.86
[root@lunar1 ~]# 

Node 2's was fairly high as well:

[root@lunar2 ~]# sar -u 1 10
Linux 3.10.0-327.el7.x86_64 (lunar2.oracle.com)         03/22/2016      _x86_64_        (2 CPU)

06:17:20 PM     CPU     %user     %nice   %system   %iowait    %steal     %idle
06:17:21 PM     all      0.50      0.00      0.50      0.00      0.00     99.00
06:17:22 PM     all      0.00      0.00      0.00      0.00      0.00    100.00
06:17:23 PM     all     55.44      0.00      6.74      0.00      0.00     37.82
06:17:24 PM     all     59.79      0.00     11.34      0.00      0.00     28.87
06:17:25 PM     all     38.38      0.00     28.11      0.54      0.00     32.97
06:17:26 PM     all      8.67      0.00      8.67      0.00      0.00     82.65
06:17:27 PM     all      0.00      0.00      0.00      0.00      0.00    100.00
06:17:28 PM     all      0.00      0.00      0.00      0.00      0.00    100.00
06:17:29 PM     all      0.00      0.00      0.00      0.00      0.00    100.00
06:17:30 PM     all      0.50      0.00      1.00      0.00      0.00     98.51
Average:        all     15.90      0.00      5.43      0.05      0.00     78.62
[root@lunar2 ~]# 

Check which processes on node 1 are consuming the most CPU:

Tasks: 287 total,   4 running, 283 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.4 us, 80.8 sy,  0.0 ni, 18.8 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem :  3145636 total,  1630448 free,  1073504 used,   441684 buff/cache
KiB Swap:  4190204 total,  4190204 free,        0 used.  2000056 avail Mem 

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                                                                                                                     
 3205 oracle    20   0  214120   1828   1328 R  79.1  0.1   3:40.27 /usr/bin/VBoxClient --clipboard                                                                                             
 3208 root      20   0  222392   1844   1312 R  78.7  0.1   3:40.10 /usr/bin/VBoxClient --clipboard                                                                                             
 3279 oracle    20   0  147044   1292    884 S   0.3  0.0   0:00.26 /usr/bin/VBoxClient --draganddrop                                                                                           
 3850 grid      20   0  322112   6092   4604 S   0.3  0.2   0:00.04 /usr/libexec/goa-identity-service                                                                                           
 7179 root      20   0  146308   2336   1520 R   0.3  0.1   0:00.02 top -c                                                                                                                      
    1 root      20   0  189244   4316   2380 S   0.0  0.1   0:01.54 /usr/lib/systemd/systemd --switched-root --system --deserialize 22                                                          
    2 root      20   0       0      0      0 S   0.0  0.0   0:00.00 [kthreadd]                                                                                                                  
    3 root      20   0       0      0      0 S   0.0  0.0   0:00.15 [ksoftirqd/0]                                                                                                               
    5 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 [kworker/0:0H]                                                                                                              
    7 root      rt   0       0      0      0 S   0.0  0.0   0:00.38 [migration/0]                                                                                                               
    8 root      20   0       0      0      0 S   0.0  0.0   0:00.00 [rcu_bh]                                  
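Besides interactive top -c, a one-shot per-process CPU ranking is handy when you want to script the same check; plain ps can do it:

```shell
# One-shot equivalent of the interactive check: list the top CPU
# consumers with full command lines, highest first.
ps -eo pid,pcpu,pmem,args --sort=-pcpu | head -n 10
```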

Sure enough, the VNC services were all starting automatically at boot. Each VNC display brings up its own X session, and in a VirtualBox guest each X session spawns its own VBoxClient helpers, which are exactly the processes burning CPU above:

[root@lunar1 ~]# systemctl status vncserver@:1.service
● vncserver@:1.service - Remote desktop service (VNC)
   Loaded: loaded (/usr/lib/systemd/system/vncserver@:1.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2016-03-22 18:12:24 CST; 6min ago
  Process: 1420 ExecStartPre=/bin/sh -c /usr/bin/vncserver -kill :1 > /dev/null 2>&1 || : (code=exited, status=0/SUCCESS)
 Main PID: 1832 (Xvnc)
   CGroup: /system.slice/system-vncserver.slice/vncserver@:1.service
           ‣ 1832 /usr/bin/Xvnc :1 -desktop lunar1.oracle.com:1 (root) -auth /root/.Xauthority -geometry 1024x768 -rfbwait 30000 -rfbauth /root/.vnc/passwd -rfbport 5901 -fp catalogue:/etc/...

Mar 22 18:12:24 lunar1.oracle.com systemd[1]: Starting Remote desktop service (VNC)...
Mar 22 18:12:24 lunar1.oracle.com systemd[1]: Started Remote desktop service (VNC).
[root@lunar1 ~]# 
[root@lunar1 ~]# systemctl status vncserver@:2.service
● vncserver@:2.service - Remote desktop service (VNC)
   Loaded: loaded (/usr/lib/systemd/system/vncserver@:2.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2016-03-22 18:12:24 CST; 6min ago
  Process: 1425 ExecStartPre=/bin/sh -c /usr/bin/vncserver -kill :2 > /dev/null 2>&1 || : (code=exited, status=0/SUCCESS)
 Main PID: 1833 (Xvnc)
   CGroup: /system.slice/system-vncserver.slice/vncserver@:2.service
           ‣ 1833 /usr/bin/Xvnc :2 -desktop lunar1.oracle.com:2 (grid) -auth /home/grid/.Xauthority -geometry 1024x768 -rfbwait 30000 -rfbauth /home/grid/.vnc/passwd -rfbport 5902 -fp catal...

Mar 22 18:12:24 lunar1.oracle.com systemd[1]: Starting Remote desktop service (VNC)...
Mar 22 18:12:24 lunar1.oracle.com systemd[1]: Started Remote desktop service (VNC).
[root@lunar1 ~]# 
[root@lunar1 ~]# systemctl status vncserver@:3.service
● vncserver@:3.service - Remote desktop service (VNC)
   Loaded: loaded (/usr/lib/systemd/system/vncserver@:3.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2016-03-22 18:12:24 CST; 6min ago
  Process: 1423 ExecStartPre=/bin/sh -c /usr/bin/vncserver -kill :3 > /dev/null 2>&1 || : (code=exited, status=0/SUCCESS)
 Main PID: 1828 (Xvnc)
   CGroup: /system.slice/system-vncserver.slice/vncserver@:3.service
           ‣ 1828 /usr/bin/Xvnc :3 -desktop lunar1.oracle.com:3 (oracle) -auth /home/oracle/.Xauthority -geometry 1024x768 -rfbwait 30000 -rfbauth /home/oracle/.vnc/passwd -rfbport 5903 -fp...

Mar 22 18:12:24 lunar1.oracle.com systemd[1]: Starting Remote desktop service (VNC)...
Mar 22 18:12:24 lunar1.oracle.com systemd[1]: Started Remote desktop service (VNC).
[root@lunar1 ~]# 

Stop the VNC services and disable them at boot:

[root@lunar1 ~]# systemctl stop  vncserver@:1.service
[root@lunar1 ~]# systemctl stop  vncserver@:2.service
[root@lunar1 ~]# systemctl stop  vncserver@:3.service
[root@lunar1 ~]# systemctl disable vncserver@:1.service
Removed symlink /etc/systemd/system/multi-user.target.wants/vncserver@:1.service.
[root@lunar1 ~]# systemctl disable vncserver@:2.service
Removed symlink /etc/systemd/system/multi-user.target.wants/vncserver@:2.service.
[root@lunar1 ~]# systemctl disable vncserver@:3.service
Removed symlink /etc/systemd/system/multi-user.target.wants/vncserver@:3.service.
[root@lunar1 ~]# systemctl status vncserver@:1.service
● vncserver@:1.service - Remote desktop service (VNC)
   Loaded: loaded (/usr/lib/systemd/system/vncserver@:1.service; disabled; vendor preset: disabled)
   Active: inactive (dead)

Mar 22 18:12:24 lunar1.oracle.com systemd[1]: Starting Remote desktop service (VNC)...
Mar 22 18:12:24 lunar1.oracle.com systemd[1]: Started Remote desktop service (VNC).
Mar 22 18:20:48 lunar1.oracle.com systemd[1]: Stopping Remote desktop service (VNC)...
Mar 22 18:20:49 lunar1.oracle.com systemd[1]: Stopped Remote desktop service (VNC).
[root@lunar1 ~]# systemctl status vncserver@:2.service
● vncserver@:2.service - Remote desktop service (VNC)
   Loaded: loaded (/usr/lib/systemd/system/vncserver@:2.service; disabled; vendor preset: disabled)
   Active: inactive (dead)

Mar 22 18:12:24 lunar1.oracle.com systemd[1]: Starting Remote desktop service (VNC)...
Mar 22 18:12:24 lunar1.oracle.com systemd[1]: Started Remote desktop service (VNC).
Mar 22 18:20:53 lunar1.oracle.com systemd[1]: Stopping Remote desktop service (VNC)...
Mar 22 18:20:53 lunar1.oracle.com systemd[1]: Stopped Remote desktop service (VNC).
[root@lunar1 ~]# systemctl status vncserver@:3.service
● vncserver@:3.service - Remote desktop service (VNC)
   Loaded: loaded (/usr/lib/systemd/system/vncserver@:3.service; disabled; vendor preset: disabled)
   Active: inactive (dead)

Mar 22 18:12:24 lunar1.oracle.com systemd[1]: Starting Remote desktop service (VNC)...
Mar 22 18:12:24 lunar1.oracle.com systemd[1]: Started Remote desktop service (VNC).
Mar 22 18:20:57 lunar1.oracle.com systemd[1]: Stopping Remote desktop service (VNC)...
Mar 22 18:20:57 lunar1.oracle.com systemd[1]: Stopped Remote desktop service (VNC).
[root@lunar1 ~]# 
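The three stop/disable pairs above can be collapsed into one loop. A minimal sketch, assuming the displays in use are :1 through :3 (adjust the list to match your host); it echoes the commands as a dry run so you can review them first, then remove the echo to execute:

```shell
# Stop and disable every configured VNC display unit in one pass.
# "echo" makes this a dry run; drop it to actually run the commands.
for d in 1 2 3; do
  echo systemctl stop "vncserver@:${d}.service"
  echo systemctl disable "vncserver@:${d}.service"
done
```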

The improvement is dramatic: the CPU is now essentially idle, which is what we expect at this point (Oracle CRS has not even been started yet):

[root@lunar1 ~]# sar -u 1 10
Linux 3.10.0-327.el7.x86_64 (lunar1.oracle.com)         03/22/2016      _x86_64_        (2 CPU)

06:21:47 PM     CPU     %user     %nice   %system   %iowait    %steal     %idle
06:21:48 PM     all      0.00      0.00      0.00      0.00      0.00    100.00
06:21:49 PM     all      0.00      0.00      0.50      0.00      0.00     99.50
06:21:50 PM     all      1.01      0.00      1.52      0.00      0.00     97.47
06:21:51 PM     all      0.00      0.00      0.00      0.00      0.00    100.00
06:21:53 PM     all      0.50      0.00      0.50      0.00      0.00     99.00
06:21:54 PM     all      0.00      0.00      0.00      0.00      0.00    100.00
06:21:55 PM     all      0.00      0.00      0.00      0.00      0.00    100.00
06:21:56 PM     all      1.51      0.00      3.52      0.00      0.00     94.97
06:21:57 PM     all      0.00      0.00      0.00      0.00      0.00    100.00
06:21:58 PM     all      0.00      0.00      0.00      0.00      0.00    100.00
Average:        all      0.30      0.00      0.60      0.00      0.00     99.10
[root@lunar1 ~]# 

Now look at node 2:

Tasks: 284 total,   2 running, 282 sleeping,   0 stopped,   0 zombie
%Cpu(s):  4.3 us,  0.7 sy,  0.0 ni, 95.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem :  3145636 total,  1645972 free,  1070152 used,   429512 buff/cache
KiB Swap:  4190204 total,  4190204 free,        0 used.  2003876 avail Mem 

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                                                                                                                     
 3643 grid      20   0 1442740 154600  44292 S   3.3  4.9   0:06.48 /usr/bin/gnome-shell                                                                                                        
 3651 oracle    20   0 1593832 154216  44296 S   3.0  4.9   0:06.94 /usr/bin/gnome-shell                                                                                                        
 3520 root      20   0 1580060 154488  43804 S   2.7  4.9   0:06.18 /usr/bin/gnome-shell                                                                                                        
   12 root      20   0       0      0      0 S   0.3  0.0   0:00.30 [rcuos/0]                                                                                                                   
 1023 root      20   0 1721880 153028   9952 S   0.3  4.9   0:09.38 /u01/app/12.1.0.2/grid/jdk/jre/bin/java -Xms128m -Xmx512m -classpath /u01/app/12.1.0.2/grid/tfa/lunar2/tfa_home/jlib/RATFA+ 
 1873 oracle    20   0  222688  24820   8192 S   0.3  0.8   0:00.22 /usr/bin/Xvnc :3 -desktop lunar2.oracle.com:3 (oracle) -auth /home/oracle/.Xauthority -geometry 1024x768 -rfbwait 30000 -r+ 
 3299 root      20   0  220832   1340    892 S   0.3  0.0   0:00.31 /usr/bin/VBoxClient --draganddrop                                                                                           
 7208 root      20   0  146308   2316   1512 R   0.3  0.1   0:00.11 top -c                                                                                                                      
    1 root      20   0  189236   4292   2380 S   0.0  0.1   0:01.56 /usr/lib/systemd/systemd --switched-root --system --deserialize 21                                                          
    2 root      20   0       0      0      0 S   0.0  0.0   0:00.00 [kthreadd]                                                                                                                  
    3 root      20   0       0      0      0 S   0.0  0.0   0:00.02 [ksoftirqd/0]                                                                                                               
    5 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 [kworker/0:0H]                                                                                                              
    7 root      rt   0       0      0      0 S   0.0  0.0   0:00.67 [migration/0]                                                                                                               
    8 root      20   0       0      0      0 S   0.0  0.0   0:00.00 [rcu_bh]                                                                                                                    
    9 root      20   0       0      0      0 S   0.0  0.0   0:00.00 [rcuob/0]                                                                                                                   
   10 root      20   0       0      0      0 S   0.0  0.0   0:00.00 [rcuob/1]                                                                                                                   
   11 root      20   0       0      0      0 R   0.0  0.0   0:00.47 [rcu_sched]                         

The same approach applies on node 2. First confirm that the VNC services are running:

[root@lunar2 ~]# systemctl status vncserver@:1.service
● vncserver@:1.service - Remote desktop service (VNC)
   Loaded: loaded (/usr/lib/systemd/system/vncserver@:1.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2016-03-22 18:12:23 CST; 10min ago
  Process: 1470 ExecStartPre=/bin/sh -c /usr/bin/vncserver -kill :1 > /dev/null 2>&1 || : (code=exited, status=0/SUCCESS)
 Main PID: 1872 (Xvnc)
   CGroup: /system.slice/system-vncserver.slice/vncserver@:1.service
           ‣ 1872 /usr/bin/Xvnc :1 -desktop lunar2.oracle.com:1 (root) -auth /root/.Xauthority -geometry 1024x768 -rfbwait 30000 -rfbauth /root/.vnc/passwd -rfbport 5901 -fp catalogue:/etc/...

Mar 22 18:12:23 lunar2.oracle.com systemd[1]: Starting Remote desktop service (VNC)...
Mar 22 18:12:23 lunar2.oracle.com systemd[1]: Started Remote desktop service (VNC).
[root@lunar2 ~]# systemctl status vncserver@:2.service
● vncserver@:2.service - Remote desktop service (VNC)
   Loaded: loaded (/usr/lib/systemd/system/vncserver@:2.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2016-03-22 18:12:23 CST; 10min ago
  Process: 1469 ExecStartPre=/bin/sh -c /usr/bin/vncserver -kill :2 > /dev/null 2>&1 || : (code=exited, status=0/SUCCESS)
 Main PID: 1869 (Xvnc)
   CGroup: /system.slice/system-vncserver.slice/vncserver@:2.service
           ‣ 1869 /usr/bin/Xvnc :2 -desktop lunar2.oracle.com:2 (grid) -auth /home/grid/.Xauthority -geometry 1024x768 -rfbwait 30000 -rfbauth /home/grid/.vnc/passwd -rfbport 5902 -fp catal...

Mar 22 18:12:23 lunar2.oracle.com systemd[1]: Starting Remote desktop service (VNC)...
Mar 22 18:12:23 lunar2.oracle.com systemd[1]: Started Remote desktop service (VNC).
[root@lunar2 ~]# systemctl status vncserver@:3.service
● vncserver@:3.service - Remote desktop service (VNC)
   Loaded: loaded (/usr/lib/systemd/system/vncserver@:3.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2016-03-22 18:12:23 CST; 10min ago
  Process: 1474 ExecStartPre=/bin/sh -c /usr/bin/vncserver -kill :3 > /dev/null 2>&1 || : (code=exited, status=0/SUCCESS)
 Main PID: 1873 (Xvnc)
   CGroup: /system.slice/system-vncserver.slice/vncserver@:3.service
           ‣ 1873 /usr/bin/Xvnc :3 -desktop lunar2.oracle.com:3 (oracle) -auth /home/oracle/.Xauthority -geometry 1024x768 -rfbwait 30000 -rfbauth /home/oracle/.vnc/passwd -rfbport 5903 -fp...

Mar 22 18:12:23 lunar2.oracle.com systemd[1]: Starting Remote desktop service (VNC)...
Mar 22 18:12:23 lunar2.oracle.com systemd[1]: Started Remote desktop service (VNC).
[root@lunar2 ~]# 

Then stop the VNC services and disable them at boot:

[root@lunar2 ~]# systemctl disable vncserver@:1.service
systemctl stop  vncserver@:1.service
systemctl status vncserver@:1.service
Removed symlink /etc/systemd/system/multi-user.target.wants/vncserver@:1.service.
[root@lunar2 ~]# systemctl stop  vncserver@:1.service
[root@lunar2 ~]# systemctl status vncserver@:1.service
● vncserver@:1.service - Remote desktop service (VNC)
   Loaded: loaded (/usr/lib/systemd/system/vncserver@:1.service; disabled; vendor preset: disabled)
   Active: inactive (dead)

Mar 22 18:12:23 lunar2.oracle.com systemd[1]: Starting Remote desktop service (VNC)...
Mar 22 18:12:23 lunar2.oracle.com systemd[1]: Started Remote desktop service (VNC).
Mar 22 18:23:23 lunar2.oracle.com systemd[1]: Stopping Remote desktop service (VNC)...
Mar 22 18:23:23 lunar2.oracle.com systemd[1]: Stopped Remote desktop service (VNC).
[root@lunar2 ~]# systemctl disable vncserver@:2.service
systemctl stop  vncserver@:2.service
systemctl status vncserver@:2.service
Removed symlink /etc/systemd/system/multi-user.target.wants/vncserver@:2.service.
[root@lunar2 ~]# systemctl stop  vncserver@:2.service
[root@lunar2 ~]# systemctl status vncserver@:2.service
● vncserver@:2.service - Remote desktop service (VNC)
   Loaded: loaded (/usr/lib/systemd/system/vncserver@:2.service; disabled; vendor preset: disabled)
   Active: inactive (dead)

Mar 22 18:12:23 lunar2.oracle.com systemd[1]: Starting Remote desktop service (VNC)...
Mar 22 18:12:23 lunar2.oracle.com systemd[1]: Started Remote desktop service (VNC).
Mar 22 18:23:28 lunar2.oracle.com systemd[1]: Stopping Remote desktop service (VNC)...
Mar 22 18:23:28 lunar2.oracle.com systemd[1]: Stopped Remote desktop service (VNC).
[root@lunar2 ~]# 
[root@lunar2 ~]# systemctl disable vncserver@:3.service
systemctl stop  vncserver@:3.service
systemctl status vncserver@:3.service

Removed symlink /etc/systemd/system/multi-user.target.wants/vncserver@:3.service.
[root@lunar2 ~]# systemctl stop  vncserver@:3.service
[root@lunar2 ~]# systemctl status vncserver@:3.service
● vncserver@:3.service - Remote desktop service (VNC)
   Loaded: loaded (/usr/lib/systemd/system/vncserver@:3.service; disabled; vendor preset: disabled)
   Active: inactive (dead)

Mar 22 18:12:23 lunar2.oracle.com systemd[1]: Starting Remote desktop service (VNC)...
Mar 22 18:12:23 lunar2.oracle.com systemd[1]: Started Remote desktop service (VNC).
Mar 22 18:23:32 lunar2.oracle.com systemd[1]: Stopping Remote desktop service (VNC)...
Mar 22 18:23:32 lunar2.oracle.com systemd[1]: Stopped Remote desktop service (VNC).
[root@lunar2 ~]# 
[root@lunar2 ~]# 

Check the result on node 2:

[root@lunar2 ~]# sar -u 1 10
Linux 3.10.0-327.el7.x86_64 (lunar2.oracle.com)         03/22/2016      _x86_64_        (2 CPU)

06:24:01 PM     CPU     %user     %nice   %system   %iowait    %steal     %idle
06:24:02 PM     all      0.51      0.00      0.51      0.00      0.00     98.99
06:24:03 PM     all      0.50      0.00      1.01      0.00      0.00     98.49
06:24:04 PM     all      0.00      0.00      0.50      0.00      0.00     99.50
06:24:05 PM     all      0.44      0.00      0.00      0.00      0.00     99.56
06:24:06 PM     all      0.00      0.00      0.50      0.00      0.00     99.50
06:24:07 PM     all      0.50      0.00      1.51      0.00      0.00     97.99
06:24:08 PM     all      0.50      0.00      0.50      0.00      0.00     99.00
06:24:09 PM     all      0.00      0.00      0.50      0.00      0.00     99.50
06:24:10 PM     all      0.00      0.00      0.00      0.00      0.00    100.00
06:24:11 PM     all      0.00      0.00      0.00      0.00      0.00    100.00
Average:        all      0.25      0.00      0.50      0.00      0.00     99.26
[root@lunar2 ~]# 

Oracle 12.1 RAC Series – Configuring a Second Network and a Corresponding SCAN2

When configuring ADG (Active Data Guard), or when using Oracle Clusterware to provide HA for applications (OGG, for example), we may want to use a separate network so that ADG redo transport and similar traffic does not affect the primary production network.
Starting with 11.2, CRS can manage multiple network resources (by default there is only network1), but the SCAN can be active on only one of them (network1 by default, though it can later be moved to a different network).
Consequently, while we would typically configure a dedicated network2 for ADG redo transport, connect strings on that network could only use VIPs, because the SCAN is normally reserved for network1 on the production side.
Starting with 12.1, we can configure a SCAN on each of several networks; for example, when setting up ADG we can configure SCAN2 on network2.
The configuration steps are as follows:
–Check the IP addresses bound to each network interface:

[root@rac1 ~]# ifconfig | egrep 'eth0' -A 1 | grep 'inet ' | cut -d : -f2 | cut -d ' ' -f1
192.168.56.61
192.168.56.63
[root@rac1 ~]# ifconfig | egrep 'eth2' -A 1 | grep 'inet ' | cut -d : -f2 | cut -d ' ' -f1
192.168.209.61
[root@rac1 ~]# 

[root@rac2 ~]# ifconfig | egrep 'eth0' -A 1 | grep 'inet ' | cut -d : -f2 | cut -d ' ' -f1
192.168.56.62
192.168.56.64
192.168.56.65
[root@rac2 ~]# ifconfig | egrep 'eth2' -A 1 | grep 'inet ' | cut -d : -f2 | cut -d ' ' -f1
192.168.209.62
[root@rac2 ~]# 
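The ifconfig parsing above relies on the old "inet addr:" output format; ifconfig's format changed in later releases, while ip(8) works everywhere. A sketch of the same check using ip, shown against lo since that interface exists on any Linux host (substitute eth0 or eth2):

```shell
# List each IPv4 address bound to an interface, one per line.
# -o gives one-line-per-address output; field 4 is ADDR/PREFIX.
ip -4 -o addr show lo | awk '{print $4}' | cut -d/ -f1
```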

–Add the new public network interface

[grid@rac1 ~]$ oifcfg getif
eth0  192.168.56.0  global  public
eth1  192.168.60.0  global  cluster_interconnect
[grid@rac1 ~]$ 
[grid@rac1 ~]$ oifcfg setif -global eth2/192.168.209.0:public
[grid@rac1 ~]$ oifcfg getif
eth0  192.168.56.0  global  public
eth1  192.168.60.0  global  cluster_interconnect
eth2  192.168.209.0  global  public
[grid@rac1 ~]$ 

–Check the network definitions; by default there is only one: network1

[grid@rac1 ~]$ srvctl config network
Network 1 exists
Subnet IPv4: 192.168.56.0/255.255.255.0/eth0, static
Subnet IPv6: 
Ping Targets: 
Network is enabled
Network is individually enabled on nodes: 
Network is individually disabled on nodes: 
[grid@rac1 ~]$ 

–Add a new network cluster resource

[root@rac1 ~]# ipcalc -bnm 192.168.209.61 255.255.255.0
NETMASK=255.255.255.0
BROADCAST=192.168.209.255
NETWORK=192.168.209.0
[root@rac1 ~]# 
[root@rac1 ~]# srvctl add network -netnum 2 -subnet 192.168.209.0/255.255.255.0/eth2
[root@rac1 ~]# 
[root@rac1 ~]# srvctl config network
Network 1 exists
Subnet IPv4: 192.168.56.0/255.255.255.0/eth0, static
Subnet IPv6: 
Ping Targets: 
Network is enabled
Network is individually enabled on nodes: 
Network is individually disabled on nodes: 
Network 2 exists
Subnet IPv4: 192.168.209.0/255.255.255.0/eth2, static
Subnet IPv6: 
Ping Targets: 
Network is enabled
Network is individually enabled on nodes: 
Network is individually disabled on nodes: 
[root@rac1 ~]# 

At this point the cluster has two network resources configured (including the newly added one); "crsctl status res -t" shows:

[root@rac1 ~]# crsctl status res -t |grep -A 2 network
ora.net1.network
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.net2.network
               OFFLINE OFFLINE      rac1                     STABLE
               OFFLINE OFFLINE      rac2                     STABLE
[root@rac1 ~]# 

–Add the VIPs

[root@rac1 ~]# srvctl add vip -node rac1 -netnum 2 -address rac1-adg-vip/255.255.255.0/eth2
[root@rac1 ~]# srvctl add vip -node rac2 -netnum 2 -address rac2-adg-vip/255.255.255.0/eth2
[root@rac1 ~]# srvctl config vip -n rac1
VIP exists: network number 1, hosting node rac1
VIP Name: rac1-vip
VIP IPv4 Address: 192.168.56.63
VIP IPv6 Address: 
VIP is enabled.
VIP is individually enabled on nodes: 
VIP is individually disabled on nodes: 
VIP exists: network number 2, hosting node rac1
VIP Name: rac1-adg-vip
VIP IPv4 Address: 192.168.209.63
VIP IPv6 Address: 
VIP is enabled.
VIP is individually enabled on nodes: 
VIP is individually disabled on nodes: 
[root@rac1 ~]# srvctl config vip -n rac2
VIP exists: network number 1, hosting node rac2
VIP Name: rac2-vip
VIP IPv4 Address: 192.168.56.64
VIP IPv6 Address: 
VIP is enabled.
VIP is individually enabled on nodes: 
VIP is individually disabled on nodes: 
VIP exists: network number 2, hosting node rac2
VIP Name: rac2-adg-vip
VIP IPv4 Address: 192.168.209.64
VIP IPv6 Address: 
VIP is enabled.
VIP is individually enabled on nodes: 
VIP is individually disabled on nodes: 
[root@rac1 ~]# crsctl status res -t |grep -A 2 network
ora.net1.network
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.net2.network
               OFFLINE OFFLINE      rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
[root@rac1 ~]# crsctl status res -t |grep -A 2 vip
ora.rac1.vip
      1        ONLINE  ONLINE       rac1                     STABLE
ora.rac1_2.vip
      1        OFFLINE OFFLINE                               STABLE
ora.rac2.vip
      1        ONLINE  ONLINE       rac2                     STABLE
ora.rac2_2.vip
      1        OFFLINE OFFLINE                               STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac2                     STABLE
--------------------------------------------------------------------------------
[root@rac1 ~]# 

–Start the VIPs and check the VIP resources

[root@rac1 ~]# su - grid
[grid@rac1 ~]$ srvctl start vip -vip rac1-adg-vip
[grid@rac1 ~]$ srvctl start vip -vip rac2-adg-vip
[grid@rac1 ~]$ srvctl status vip -n rac1
VIP rac1-vip is enabled
VIP rac1-vip is running on node: rac1
VIP rac1-adg-vip is enabled
VIP rac1-adg-vip is running on node: rac1
[grid@rac1 ~]$ srvctl status vip -n rac2
VIP rac2-vip is enabled
VIP rac2-vip is running on node: rac2
VIP rac2-adg-vip is enabled
VIP rac2-adg-vip is running on node: rac2
[grid@rac1 ~]$ 
[grid@rac1 ~]$ crsctl status res -t |grep -A 2 network
ora.net1.network
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.net2.network
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
[grid@rac1 ~]$ crsctl status res -t |grep -A 2 vip
ora.rac1.vip
      1        ONLINE  ONLINE       rac1                     STABLE
ora.rac1_2.vip
      1        ONLINE  ONLINE       rac1                     STABLE
ora.rac2.vip
      1        ONLINE  ONLINE       rac2                     STABLE
ora.rac2_2.vip
      1        ONLINE  ONLINE       rac2                     STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac2                     STABLE
--------------------------------------------------------------------------------
[grid@rac1 ~]$ 

–Verify that the newly created VIPs are actually up at the OS level:

[grid@rac1 ~]$ /sbin/ifconfig eth2:1
eth2:1    Link encap:Ethernet  HWaddr 08:00:27:17:79:3C  
          inet addr:192.168.209.63  Bcast:192.168.209.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

[grid@rac1 ~]$ 
[root@rac2 ~]# /sbin/ifconfig eth2:1
eth2:1    Link encap:Ethernet  HWaddr 08:00:27:74:C7:94  
          inet addr:192.168.209.64  Bcast:192.168.209.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

[root@rac2 ~]# 

–Add a listener on network2:

[grid@rac1 ~]$ srvctl add listener -listener LISTENER_ADG -netnum 2 -endpoints "TCP:1522"
[grid@rac1 ~]$ crsctl status res -t |grep -A 2 LISTENER
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.LISTENER_ADG.lsnr
               OFFLINE OFFLINE      rac1                     STABLE
               OFFLINE OFFLINE      rac2                     STABLE
--
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac2                     STABLE
ora.MGMTLSNR
[grid@rac1 ~]$ 

Configure a SCAN on network2:

[root@rac1 ~]# srvctl add scan -scanname racadg-scan -netnum 2
[root@rac1 ~]# srvctl config scan -netnum 2
SCAN name: racadg-scan, Network: 2
Subnet IPv4: 192.168.209.0/255.255.255.0/eth2, static
Subnet IPv6: 
SCAN 0 IPv4 VIP: 192.168.209.65
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes: 
SCAN VIP is individually disabled on nodes: 
[root@rac1 ~]# srvctl status scan -netnum 2
SCAN VIP scan1_net2 is enabled
SCAN VIP scan1_net2 is not running
[root@rac1 ~]# crsctl status res -t |grep -A 2 LISTENER
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.LISTENER_ADG.lsnr
               OFFLINE OFFLINE      rac1                     STABLE
               OFFLINE OFFLINE      rac2                     STABLE
--
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac2                     STABLE
ora.MGMTLSNR
[root@rac1 ~]# crsctl status res -t |grep -A 1 SCAN
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac2                     STABLE
[root@rac1 ~]#  

–Start the listener on network2

[grid@rac1 ~]$ srvctl start listener -listener LISTENER_ADG
[grid@rac1 ~]$ srvctl status listener -listener LISTENER_ADG
Listener LISTENER_ADG is enabled
Listener LISTENER_ADG is running on node(s): rac1,rac2
[grid@rac1 ~]$ srvctl status listener -listener LISTENER
Listener LISTENER is enabled
Listener LISTENER is running on node(s): rac1,rac2
[grid@rac1 ~]$ 
[grid@rac1 ~]$ crsctl status res -t |grep -A 2 LISTENER
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.LISTENER_ADG.lsnr
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
--
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac2                     STABLE
ora.MGMTLSNR
[grid@rac1 ~]$ crsctl status res -t |grep -A 2 SCAN
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac2                     STABLE
ora.MGMTLSNR
[grid@rac1 ~]$ 

–Start the SCAN on network2

[root@rac1 ~]# srvctl start scan -netnum 2
[root@rac1 ~]# srvctl config scan -netnum 2
SCAN name: racadg-scan, Network: 2
Subnet IPv4: 192.168.209.0/255.255.255.0/eth2, static
Subnet IPv6: 
SCAN 0 IPv4 VIP: 192.168.209.65
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes: 
SCAN VIP is individually disabled on nodes: 
[root@rac1 ~]# srvctl status scan -netnum 2
SCAN VIP scan1_net2 is enabled
SCAN VIP scan1_net2 is running on node rac1
[root@rac1 ~]# crsctl status res -t |grep -A 2 LISTENER
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.LISTENER_ADG.lsnr
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
--
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac2                     STABLE
ora.MGMTLSNR
[root@rac1 ~]# crsctl status res -t |grep -A 2 SCAN
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac2                     STABLE
ora.MGMTLSNR
[root@rac1 ~]# 

–Add the SCAN listener on network2

[grid@rac1 ~]$ srvctl add scan_listener -netnum 2 -listener racadg-scanlsnr -endpoints "TCP:1522"
[grid@rac1 ~]$ srvctl start scan_listener -netnum 2
[grid@rac1 ~]$ srvctl status scan_listener -netnum 2
SCAN Listener RACADG-SCANLSNR_SCAN1_NET2 is enabled
SCAN listener RACADG-SCANLSNR_SCAN1_NET2 is running on node rac1
[grid@rac1 ~]$ srvctl config scan_listener -netnum 2
SCAN Listener RACADG-SCANLSNR_SCAN1_NET2 exists. Port: TCP:1522
Registration invited nodes: 
Registration invited subnets: 
SCAN Listener is enabled.
SCAN Listener is individually enabled on nodes: 
SCAN Listener is individually disabled on nodes: 
[grid@rac1 ~]$ crsctl status res -t |grep -A 2 LISTENER
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.LISTENER_ADG.lsnr
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
--
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac2                     STABLE
ora.MGMTLSNR
[grid@rac1 ~]$ crsctl status res -t |grep -A 2 SCAN
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac2                     STABLE
ora.MGMTLSNR
--
ora.RACADG-SCANLSNR_SCAN1_NET2.lsnr
      1        ONLINE  ONLINE       rac1                     STABLE
ora.cvu
[grid@rac1 ~]$ 

-- Check the listener status

[grid@rac1 ~]$ netstat -an |grep 1521|grep LISTEN
tcp        0      0 169.254.242.79:1521         0.0.0.0:*                   LISTEN      
tcp        0      0 192.168.60.61:1521          0.0.0.0:*                   LISTEN      -----rac1-priv
tcp        0      0 192.168.56.63:1521          0.0.0.0:*                   LISTEN      -----rac1-vip
tcp        0      0 192.168.56.61:1521          0.0.0.0:*                   LISTEN      -----rac1
[grid@rac1 ~]$ netstat -an |grep 1522|grep LISTEN
tcp        0      0 192.168.209.65:1522         0.0.0.0:*                   LISTEN      -----rac-scan
tcp        0      0 192.168.209.63:1522         0.0.0.0:*                   LISTEN      -----rac1-adg-vip
[grid@rac1 ~]$ lsnrctl status LISTENER

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 16-FEB-2016 21:41:54

Copyright (c) 1991, 2014, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date                16-FEB-2016 17:59:39
Uptime                    0 days 3 hr. 42 min. 14 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/12.1.0.2/grid/network/admin/listener.ora
Listener Log File         /u01/app/grid/diag/tnslsnr/rac1/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.61)(PORT=1521)))      -----rac1
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.63)(PORT=1521)))      -----rac1-vip
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=rac1)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/product/12.1.0.2/dbhome_1/admin/lunar/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "lunar" has 1 instance(s).
  Instance "lunar1", status READY, has 1 handler(s) for this service...
Service "lunarXDB" has 1 instance(s).
  Instance "lunar1", status READY, has 1 handler(s) for this service...
The command completed successfully
[grid@rac1 ~]$ 
[grid@rac1 ~]$ lsnrctl status LISTENER_ADG

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 16-FEB-2016 21:41:59

Copyright (c) 1991, 2014, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_ADG)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_ADG
Version                   TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date                16-FEB-2016 21:13:48
Uptime                    0 days 0 hr. 28 min. 11 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/12.1.0.2/grid/network/admin/listener.ora
Listener Log File         /u01/app/grid/diag/tnslsnr/rac1/listener_adg/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_ADG)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.209.63)(PORT=1522)))     
The listener supports no services
The command completed successfully
[grid@rac1 ~]$ 

Configure the Oracle database instances to support multiple networks:

LUNARRAC1_LOCAL_NET1 =(DESCRIPTION =(ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521)))
LUNARRAC1_LOCAL_NET2 =(DESCRIPTION =(ADDRESS = (PROTOCOL = TCP)(HOST = rac1-adg-vip)(PORT = 1522)))

LUNARRAC2_LOCAL_NET1 =(DESCRIPTION =(ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip)(PORT = 1521)))
LUNARRAC2_LOCAL_NET2 =(DESCRIPTION =(ADDRESS = (PROTOCOL = TCP)(HOST = rac2-adg-vip)(PORT = 1522)))

LUNARRAC_REMOTE_NET1 =(DESCRIPTION_LIST =(DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan)(PORT = 1521))))
LUNARRAC_REMOTE_NET2 =(DESCRIPTION_LIST =(DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = racadg-scan)(PORT = 1522))))
[oracle@rac1 admin]$ ss

SQL*Plus: Release 12.1.0.2.0 Production on Thu Feb 18 21:58:50 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SYS@lunar1>
SYS@lunar1>alter system set listener_networks='((NAME=network1)(LOCAL_LISTENER=LUNARRAC1_LOCAL_NET1)(REMOTE_LISTENER=LUNARRAC_REMOTE_NET1))','((NAME=network2)(LOCAL_LISTENER=LUNARRAC1_LOCAL_NET2)(REMOTE_LISTENER=LUNARRAC_REMOTE_NET2))' sid='lunar1';

System altered.

Elapsed: 00:00:00.36
SYS@lunar1>alter system set listener_networks='((NAME=network1)(LOCAL_LISTENER=LUNARRAC2_LOCAL_NET1)(REMOTE_LISTENER=LUNARRAC_REMOTE_NET1))','((NAME=network2)(LOCAL_LISTENER=LUNARRAC2_LOCAL_NET2)(REMOTE_LISTENER=LUNARRAC_REMOTE_NET2))' sid='lunar2';

System altered.

Elapsed: 00:00:00.19
SYS@lunar1>alter system set local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=rac1-vip)(PORT=1521))))' sid='lunar1';

System altered.

Elapsed: 00:00:00.03
SYS@lunar1>alter system set local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=rac2-vip)(PORT=1521))))' sid='lunar2';

System altered.

Elapsed: 00:00:00.80
SYS@lunar1>

Configure the client connect strings
-- Verify that the database accepts logins:

[oracle@lunarrac ~]$ ss

SQL*Plus: Release 12.1.0.2.0 Production on Fri Feb 19 08:57:30 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SYS@lunar1>conn sys/oracle@//lunarrac-scan:1521/lunar as sysdba
Connected.
SYS@//lunarrac-scan:1521/lunar>conn sys/oracle@//lunaradg-scan:1522/lunar as sysdba
Connected.
SYS@//lunaradg-scan:1522/lunar>conn system/oracle@//lunarrac-scan:1521/lunar
Connected.
SYSTEM@//lunarrac-scan:1521/lunar>conn system/oracle@//lunaradg-scan:1522/lunar
Connected.
SYSTEM@//lunaradg-scan:1522/lunar>conn sys/oracle@lunarrac.vip as sysdba
Connected.
SYS@lunarrac.vip>conn sys/oracle@lunarrac.scanip as sysdba
Connected.
SYS@lunarrac.scanip>conn sys/oracle@lunaradg.vip as sysdba
Connected.
SYS@lunaradg.vip>conn sys/oracle@lunaradg.scanip as sysdba
Connected.
SYS@lunaradg.scanip>conn sys/oracle@rac1adg as sysdba
Connected.
SYS@rac1adg>conn sys/oracle@rac2adg as sysdba
Connected.
SYS@rac2adg>conn sys/oracle@racadg.vip as sysdba
Connected.
SYS@racadg.vip>conn sys/oracle@racadg.scanip as sysdba
Connected.
SYS@racadg.scanip>conn sys/oracle@rac1 as sysdba
Connected.
SYS@rac1>conn sys/oracle@rac2 as sysdba
Connected.
SYS@rac2>conn sys/oracle@racdb.vip as sysdba
Connected.
SYS@racdb.vip>conn sys/oracle@racdb.scanip as sysdba
Connected.
SYS@racdb.scanip>
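The aliases used above (lunarrac.vip, lunaradg.scanip, racadg.vip, and so on) resolve through the client's tnsnames.ora, whose entries are not shown in the transcript. A hypothetical entry for the network-2 SCAN alias might look like the following; the host, port, and service name come from this setup, but the alias definition itself is an assumption:

```
LUNARADG.SCANIP =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = lunaradg-scan)(PORT = 1522))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = lunar))
  )
```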

This completes the configuration.
Oracle 12.1 RAC series:
Oracle 12.1 RAC series - Install a new host, recognize the old storage, and recover the database
Oracle 12.1 RAC series - Configure a second network and the corresponding SCAN2


Oracle 12.1 RAC series - Install a new host, recognize the old storage, and recover the database

In 11.2 we simulated a host failure and recovered the database by installing a new host and recognizing the old storage:
11.2 RAC series - Install a new host, recognize the old storage - 1 - Identify the ASM disks
11.2 RAC series - Install a new host, recognize the old storage - 2 - Prepare to recognize the database
11.2 RAC series - Install a new host, recognize the old storage - 3 - Configure the database on the old storage
This test likewise simulates a host failure and recovers the database by installing a new host that recognizes the old storage. The difference is that here the old storage kept the OCR and voting files in a dedicated CRSDG, and the customer has no new disks on which to create a new CRSDG. We therefore use dd to wipe the first 50 MB of the first three disks (sdb through sdd, after excluding sda), and then use those three disks to build the CRSDG during the subsequent GI installation.
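The wipe described above can be sketched as follows. This is a minimal illustration run against a scratch file rather than a real device; on the real system the targets would be /dev/sdb, /dev/sdc, and /dev/sdd, and the command is destructive:

```shell
# Scratch file stands in for a real disk such as /dev/sdb -- zeroing a
# device destroys its ASM header (and any data in the zeroed region),
# so we demonstrate on a temp file.
target=$(mktemp)

# Zero the first 50 MB, as done before reusing the disks for the CRSDG.
dd if=/dev/zero of="$target" bs=1M count=50 2>/dev/null

size=$(wc -c < "$target")
echo "$size"    # 52428800 bytes = 50 MB written
rm -f "$target"
```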
The rest of the process is almost identical. When adding the database resource, note that the 12.1 command differs from the 11.2 one: the 11.2 add-database command does run in 12.1 without any error message (it appears compatible), but problems can surface later, e.g. the db_unique_name recorded in the OCR is incorrect.
.
The detailed steps follow (the test was run on 12.1 first and on 11.2 afterwards, so this record includes the problems found and how they were handled; the 11.2 host-failure test reused this experience directly and therefore hit no errors):
1. Install the 12.1.0.2 GI software, apply the latest PSU if needed, then examine the disks and disk groups:

[grid@lunarrac ~]$ kfod disks=all ds=true cluster=true status=true
--------------------------------------------------------------------------------
 Disk          Size Header    Path                                     Disk Group   User     Group   
================================================================================
   1:       2047 Mb MEMBER    /dev/asm-diskb                           CRSDG        grid     asmadmin
   2:       2047 Mb MEMBER    /dev/asm-diskc                           CRSDG        grid     asmadmin
   3:       2047 Mb MEMBER    /dev/asm-diskd                           CRSDG        grid     asmadmin
   4:       2048 Mb MEMBER    /dev/asm-diske                           DATADG       grid     asmadmin
   5:       2048 Mb MEMBER    /dev/asm-diskf                           DATADG       grid     asmadmin
   6:       2048 Mb MEMBER    /dev/asm-diskg                           DATADG       grid     asmadmin
   7:       3072 Mb MEMBER    /dev/asm-diskh                           DATADG2      grid     asmadmin
   8:      10240 Mb MEMBER    /dev/asm-diski                           DATADG2      grid     asmadmin
--------------------------------------------------------------------------------
ORACLE_SID ORACLE_HOME                                      HOST_NAME           
================================================================================
     +ASM1 /u01/app/12.1.0.2/grid                           lunarrac                       
[grid@lunarrac ~]$ 

Back up the ASM spfile to a pfile:

[grid@lunarrac ~]$ ss
SQL*Plus: Release 12.1.0.2.0 Production on Sun Feb 14 19:08:33 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL> create pfile='/tmp/asm.pfile' from spfile;

File created.

SQL>
Add the disk groups 'CRSDG', 'DATADG', 'DATADG2' to asm_diskgroups:
SQL> alter system set asm_diskgroups='CRSDG', 'DATADG', 'DATADG2' scope=both;

System altered.

SQL> 

Locate the database spfile in ASM:

SYS@+ASM1>SELECT f.group_number, f.file_number, a.name name, f.type type
  2  FROM v$asm_file f, v$asm_alias a
  3  WHERE f.group_number=a.group_number and f.file_number=a.file_number
  4   and f.type='PARAMETERFILE'
  5  ORDER BY 1, 2;

    GROUP_NUMBER      FILE_NUMBER NAME                                                    TYPE
---------------- ---------------- ------------------------------------------------------- -------------------------
               1              268 spfile.268.903782013                                    PARAMETERFILE
               2              273 spfile.273.892298577                                    PARAMETERFILE

Elapsed: 00:00:00.16
SYS@+ASM1>

Two spfiles show up here; which one do we need?
Put differently, when several databases share this storage, how do we tell which spfile belongs to which database?
We know ASM manages its files internally as OMF, so the naming convention is:
+group/DB_UNIQUE_NAME/file_type/file_type_tag.file#.incarnation#
where file_type is datafile, controlfile, onlinelog, and so on.
The DB_UNIQUE_NAME component therefore tells us which database uses which spfile.
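The naming rule can be applied mechanically. A small sketch follows; the helper name is ours for illustration, not an Oracle utility:

```shell
# Hypothetical helper (not an Oracle tool): split an ASM OMF path of the
# form +group/DB_UNIQUE_NAME/file_type/tag.file#.incarnation# and print
# the DB_UNIQUE_NAME component, which identifies the owning database.
omf_db_unique_name() {
  path=${1#+}                        # drop the leading '+'
  printf '%s\n' "$path" | cut -d/ -f2
}

# The spfile found above belongs to database "lunar":
omf_db_unique_name '+datadg2/lunar/PARAMETERFILE/spfile.273.892298577'
```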

ASMCMD> ls -l 
Type           Redund  Striped  Time             Sys  Name
PARAMETERFILE  UNPROT  COARSE   DEC 25 10:00:00  Y    spfile.273.892298577
ASMCMD> pwd
+datadg2/lunar/PARAMETERFILE
ASMCMD> 

Copy the spfile from ASM to the file system and verify that its contents are correct:

ASMCMD> cp spfile.273.892298577 /tmp/spfile.273.892298577
copying +datadg2/lunar/PARAMETERFILE/spfile.273.892298577 -> /tmp/spfile.273.892298577
ASMCMD> ls -l
Type           Redund  Striped  Time             Sys  Name
PARAMETERFILE  UNPROT  COARSE   FEB 14 20:00:00  Y    spfile.273.892298577
ASMCMD> 

Inspect the spfile:

[grid@lunarrac ~]$ strings /tmp/spfile.273.892298577 
lunar1.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
lunar2.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
*._gc_policy_time=0
*._gc_undo_affinity=FALSE
*._index_partition_large_extents='FALSE'
*._optimizer_adaptive_cursor_sharing=FALSE
*._optimizer_extended_cursor_sharing='NONE'
*._optimizer_extended_cursor_sharing_rel='NONE'
*._optimizer_null_aware_antijoin=FALSE
*._optimizer_use_feedback=FALSE
*._partition_large_extents='FALSE'
*._PX_use_large_
pool=TRUE
*._undo_autotune=FALSE
*.audit_file_dest='/u01/app/oracle/admin/lunar/adump'
*.audit_trail='NONE'
*.cluster_database=true
*.compatible='12.1.0.2.0'
*.control_files='+DATADG2/LUNAR/CONTROLFILE/current.257.892295167','+DATADG2/LUNAR/CONTROLFILE/current.258.892295171'
*.db_block_size=8192
*.db_cache_size=52428800
*.db_create_file_dest='+DATADG2'
*.db_domain=''
*.db_name='lunar'
*.db_recovery_file_dest='+DATADG2'
*.db_recovery_file_dest_size=5565m
*.db_writer_processes=2
*.defe
rred_segment_creation=FALSE
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=lunarXDB)'
*.event='28401 trace name context forever,level 1','10949 trace name context forever,level 1'
lunar2.instance_number=2
lunar1.instance_number=1
*.java_pool_size=52428800
*.job_queue_processes=5
*.large_pool_size=10485760
*.log_archive_format='%t_%s_%r.dbf'
*.log_buffer=10305536
*.memory_target=0
*.open_cursors=300
*.parallel_force_local=TRUE
*.parallel_max_servers=30
*.pa
rallel_min_servers=30
*.parallel_servers_target=30
*.pga_aggregate_limit=629145600
*.pga_aggregate_target=360m
*.processes=150
*.remote_login_passwordfile='exclusive'
*.resource_limit=TRUE
*.resource_manager_plan=''
*.session_cached_cursors=30
*.sga_target=0
*.shared_pool_size=419430400
lunar2.thread=2
lunar1.thread=1
lunar2.undo_tablespace='UNDOTBS2'
lunar1.undo_tablespace='UNDOTBS1'
[grid@lunarrac ~]$ 

At this point, starting the database raises an error:

[oracle@lunarrac ~]$ ss

SQL*Plus: Release 12.1.0.2.0 Production on Sun Feb 14 21:05:13 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to an idle instance.

SYS@lunar1>startup
ORACLE instance started.

Total System Global Area        557842432 bytes
Fixed Size                        2926520 bytes
Variable Size                   486541384 bytes
Database Buffers                 54525952 bytes
Redo Buffers                     13848576 bytes
ORA-00205: error in identifying control file, check alert log for more info
SYS@lunar1>

The alert log shows:

ORACLE_BASE from environment = /u01/app/oracle
Sun Feb 14 21:11:42 2016
ALTER DATABASE   MOUNT
Sun Feb 14 21:11:42 2016
This instance was first to mount
Sun Feb 14 21:11:42 2016
NOTE: ASMB mounting group 2 (DATADG2)
Sun Feb 14 21:11:42 2016
ORA-15025: could not open disk "/dev/asm-diskh"
ORA-27041: unable to open file
Linux-x86_64 Error: 13: Permission denied
Additional information: 9
Sun Feb 14 21:11:42 2016
ORA-15025: could not open disk "/dev/asm-diski"
ORA-27041: unable to open file
Linux-x86_64 Error: 13: Permission denied
Additional information: 9
NOTE: Disk 0 in group 2 could not be opened.
WARNING: Failed to complete group 2
WARNING: group 2 is being dismounted.
WARNING: ASMB force dismounting group 2 (DATADG2) due to failed mount
SUCCESS: diskgroup DATADG2 was dismounted
NOTE: ASMB mounting group 2 (DATADG2)
Sun Feb 14 21:11:43 2016
ORA-15025: could not open disk "/dev/asm-diskh"
ORA-27041: unable to open file
Linux-x86_64 Error: 13: Permission denied
Additional information: 9
ORA-15040: diskgroup is incomplete
ORA-15040: diskgroup is incomplete
Sun Feb 14 21:11:43 2016
ORA-15025: could not open disk "/dev/asm-diski"
ORA-27041: unable to open file
Linux-x86_64 Error: 13: Permission denied
Additional information: 9
NOTE: Disk 0 in group 2 could not be opened.
WARNING: Failed to complete group 2
WARNING: group 2 is being dismounted.
WARNING: ASMB force dismounting group 2 (DATADG2) due to failed mount
SUCCESS: diskgroup DATADG2 was dismounted
Sun Feb 14 21:11:43 2016
ORA-00210: cannot open the specified control file
ORA-00202: control file: '+DATADG2/LUNAR/CONTROLFILE/current.258.892295171'
ORA-17503: ksfdopn:2 Failed to open file +DATADG2/LUNAR/CONTROLFILE/current.258.892295171
ORA-15001: diskgroup "DATADG2" does not exist or is not mounted
ORA-15040: diskgroup is incomplete
ORA-00210: cannot open the specified control file
ORA-00202: control file: '+DATADG2/LUNAR/CONTROLFILE/current.257.892295167'
ORA-17503: ksfdopn:2 Failed to open file +DATADG2/LUNAR/CONTROLFILE/current.257.892295167
ORA-15001: diskgroup "DATADG2" does not exist or is not mounted
ORA-15040: diskgroup is incomplete
ORA-205 signalled during: ALTER DATABASE   MOUNT...
Sun Feb 14 21:12:07 2016
Decreasing number of real time LMS from 1 to 0

The errors show that the oracle binary does not have permission to access the ASM disks, so we fix its permissions:

[grid@lunarrac ~]$ $ORACLE_HOME/bin/setasmgidwrap o=/u01/app/oracle/product/12.1.0.2/dbhome_1/bin/oracle
[grid@lunarrac ~]$ ls -lrt /u01/app/oracle/product/12.1.0.2/dbhome_1/bin/oracle
-rwsr-s--x 1 oracle asmadmin 323762276 Feb 14 18:15 /u01/app/oracle/product/12.1.0.2/dbhome_1/bin/oracle
[grid@lunarrac ~]$ 
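setasmgidwrap leaves the binary setuid to oracle and setgid to asmadmin, i.e. mode 6751, shown as -rwsr-s--x above. The permission pattern can be sketched as follows, demonstrated on a scratch file rather than the real $ORACLE_HOME/bin/oracle:

```shell
# Scratch file stands in for $ORACLE_HOME/bin/oracle.
f=$(mktemp)

# Mode 6751 = setuid + setgid + rwxr-x--x, the pattern setasmgidwrap leaves.
chmod 6751 "$f"

perms=$(ls -l "$f" | cut -c1-10)
echo "$perms"    # -rwsr-s--x
rm -f "$f"
```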

Mounting the database again still fails:

SYS@lunar1>alter database mount;
alter database mount
*
ERROR at line 1:
ORA-00205: error in identifying control file, check alert log for more info
Elapsed: 00:00:17.22
SYS@lunar1>

The alert log reports:

alter database mount
Sun Feb 14 21:18:10 2016
This instance was first to mount
Process O000 died, see its trace file
Sun Feb 14 21:18:12 2016
NOTE: ASMB mounting group 2 (DATADG2)
Process O000 died, see its trace file
Sun Feb 14 21:18:16 2016
ORA-15025: could not open disk "/dev/asm-diskh"
ORA-27041: unable to open file
Linux-x86_64 Error: 13: Permission denied
Additional information: 9
ORA-15040: diskgroup is incomplete
ORA-15040: diskgroup is incomplete
Sun Feb 14 21:18:19 2016
ORA-15025: could not open disk "/dev/asm-diski"
ORA-27041: unable to open file
Linux-x86_64 Error: 13: Permission denied
Additional information: 9
NOTE: Disk 0 in group 2 could not be opened.
WARNING: Failed to complete group 2
Sun Feb 14 21:18:24 2016
WARNING: group 2 is being dismounted.
WARNING: ASMB force dismounting group 2 (DATADG2) due to failed mount
SUCCESS: diskgroup DATADG2 was dismounted
NOTE: ASMB mounting group 2 (DATADG2)
Sun Feb 14 21:18:26 2016
ORA-15025: could not open disk "/dev/asm-diskh"
ORA-27041: unable to open file
Linux-x86_64 Error: 13: Permission denied
Additional information: 9
ORA-15040: diskgroup is incomplete
ORA-15040: diskgroup is incomplete
Sun Feb 14 21:18:26 2016
ORA-15025: could not open disk "/dev/asm-diski"
ORA-27041: unable to open file
Linux-x86_64 Error: 13: Permission denied
Additional information: 9
NOTE: Disk 0 in group 2 could not be opened.
WARNING: Failed to complete group 2
WARNING: group 2 is being dismounted.
WARNING: ASMB force dismounting group 2 (DATADG2) due to failed mount
SUCCESS: diskgroup DATADG2 was dismounted
Sun Feb 14 21:18:26 2016
ORA-00210: cannot open the specified control file
ORA-00202: control file: '+DATADG2/LUNAR/CONTROLFILE/current.258.892295171'
ORA-17503: ksfdopn:2 Failed to open file +DATADG2/LUNAR/CONTROLFILE/current.258.892295171
ORA-15001: diskgroup "DATADG2" does not exist or is not mounted
ORA-15040: diskgroup is incomplete
ORA-00210: cannot open the specified control file
ORA-00202: control file: '+DATADG2/LUNAR/CONTROLFILE/current.257.892295167'
ORA-17503: ksfdopn:2 Failed to open file +DATADG2/LUNAR/CONTROLFILE/current.257.892295167
ORA-15001: diskgroup "DATADG2" does not exist or is not mounted
ORA-15040: diskgroup is incomplete
ORA-205 signalled during: alter database mount
...
Process m000 died, see its trace file
Process m001 died, see its trace file
Process m000 died, see its trace file
Process m000 died, see its trace file
Process m001 died, see its trace file
Process m000 died, see its trace file

The corresponding trace file reads:

Trace file /u01/app/oracle/diag/rdbms/lunar/lunar1/trace/lunar1_m000_14358.trc
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
ORACLE_HOME = /u01/app/oracle/product/12.1.0.2/dbhome_1
System name:    Linux
Node name:      lunarrac
Release:        3.8.13-44.1.1.el6uek.x86_64
Version:        #2 SMP Wed Sep 10 06:10:25 PDT 2014
Machine:        x86_64
Instance name: lunar1
Redo thread mounted by this instance: 0 <none>
Oracle process number: 0
Unix process pid: 14358, image:
*** 2016-02-14 21:20:01.641
Died during process startup with error 27140 (seq=94)
OPIRIP: Uncaught error 27140. Error stack:
ORA-27140: attach to post/wait facility failed
ORA-27300: OS system dependent operation:invalid_egid failed with status: 1
ORA-27301: OS failure message: Operation not permitted
ORA-27302: failure occurred at: skgpwinit6
ORA-27303: additional information: startup egid = 6000 (oinstall), current egid = 5000 (asmadmin)

This shows the database still cannot access the disk groups (per the ORA-27303 above, the instance was started with egid oinstall but now runs with asmadmin). Mounting the disk groups in ASM registers them in the OCR:

SQL> alter diskgroup datadg mount 
Sun Feb 14 21:42:56 2016
NOTE: cache registered group DATADG 2/0x4BA26F20
NOTE: cache began mount (first) of group DATADG 2/0x4BA26F20
NOTE: Assigning number (2,1) to disk (/dev/asm-diskf)
NOTE: Assigning number (2,2) to disk (/dev/asm-diskg)
NOTE: Assigning number (2,0) to disk (/dev/asm-diske)
Sun Feb 14 21:43:02 2016
NOTE: GMON heartbeating for grp 2 (DATADG)
GMON querying group 2 at 17 for pid 30, osid 19564
Sun Feb 14 21:43:02 2016
NOTE: cache is mounting group DATADG created on 2015/10/05 07:49:39
NOTE: cache opening disk 0 of grp 2: DATADG_0000 path:/dev/asm-diske
NOTE: 02/14/16 21:43:02 DATADG.F1X0 found on disk 0 au 10 fcn 0.0 datfmt 1
NOTE: cache opening disk 1 of grp 2: DATADG_0001 path:/dev/asm-diskf
NOTE: 02/14/16 21:43:02 DATADG.F1X0 found on disk 1 au 10 fcn 0.0 datfmt 1
NOTE: cache opening disk 2 of grp 2: DATADG_0002 path:/dev/asm-diskg
NOTE: 02/14/16 21:43:02 DATADG.F1X0 found on disk 2 au 10 fcn 0.0 datfmt 1
Sun Feb 14 21:43:02 2016
NOTE: cache mounting (first) normal redundancy group 2/0x4BA26F20 (DATADG)
Sun Feb 14 21:43:02 2016
* allocate domain 2, invalid = TRUE 
Sun Feb 14 21:43:03 2016
NOTE: attached to recovery domain 2
Sun Feb 14 21:43:03 2016
* validated domain 2, flags = 0x0
NOTE: cache recovered group 2 to fcn 0.73
NOTE: redo buffer size is 256 blocks (1056768 bytes)
Sun Feb 14 21:43:03 2016
NOTE: LGWR attempting to mount thread 1 for diskgroup 2 (DATADG)
NOTE: LGWR found thread 1 closed at ABA 11.22 lock domain=0 inc#=0 instnum=1
NOTE: LGWR mounted thread 1 for diskgroup 2 (DATADG)
Sun Feb 14 21:43:03 2016
NOTE: LGWR opened thread 1 (DATADG) at fcn 0.73 ABA 12.23 lock domain=2 inc#=2 instnum=1 gx.incarn=1268936480 mntstmp=2016/02/14 21:43:03.557000
Sun Feb 14 21:43:03 2016
NOTE: cache mounting group 2/0x4BA26F20 (DATADG) succeeded
NOTE: cache ending mount (success) of group DATADG number=2 incarn=0x4ba26f20
Sun Feb 14 21:43:03 2016
NOTE: Instance updated compatible.asm to 12.1.0.0.0 for grp 2
Sun Feb 14 21:43:03 2016
SUCCESS: diskgroup DATADG was mounted
Sun Feb 14 21:43:03 2016
SUCCESS: alter diskgroup datadg mount
Sun Feb 14 21:43:04 2016
NOTE: diskgroup resource ora.DATADG.dg is online
WARNING: unknown state for diskgroup resource ora.DATADG.dg, Return Value: 3

Checking again, the OCR now contains these disk groups:

[root@lunarrac ~]# crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRSDG.dg
               ONLINE  ONLINE       lunarrac                 STABLE
ora.DATADG.dg
               ONLINE  ONLINE       lunarrac                 STABLE
ora.DATADG2.dg
               ONLINE  ONLINE       lunarrac                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       lunarrac                 STABLE
ora.asm
               ONLINE  ONLINE       lunarrac                 Started,STABLE
ora.net1.network
               ONLINE  ONLINE       lunarrac                 STABLE
ora.ons
               ONLINE  ONLINE       lunarrac                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       lunarrac                 STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       lunarrac                 169.254.52.98 192.16
                                                             8.60.56,STABLE
ora.cvu
      1        ONLINE  ONLINE       lunarrac                 STABLE
ora.lunarrac.vip
      1        ONLINE  ONLINE       lunarrac                 STABLE
ora.mgmtdb
      1        OFFLINE OFFLINE                               STABLE
ora.oc4j
      1        ONLINE  ONLINE       lunarrac                 STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       lunarrac                 STABLE
--------------------------------------------------------------------------------
[root@lunarrac ~]# 

Register the database in the OCR:
If you reuse the 11.2-style srvctl syntax in 12.1, the database can still start, but the resulting configuration is visibly wrong.
For example, "Database name: lunarrac" below shows my host name, while the database name is actually lunar; registering the database in the OCR with the 11.2 command can therefore cause other, unforeseen problems.
Recommendation: on each version, register the database with that version's own command syntax.

[oracle@lunarrac ~]$ srvctl add database -d lunar -n lunarrac -o $ORACLE_HOME -p +datadg2/lunar/PARAMETERFILE/spfile.273.892298577 -s OPEN -y AUTOMATIC -a "CRSDG,DATADG,DATADG2" -t IMMEDIATE
[oracle@lunarrac ~]$ srvctl config database -d lunar
Database unique name: lunar
Database name: lunarrac
Oracle home: /u01/app/oracle/product/12.1.0.2/dbhome_1
Oracle user: oracle
Spfile: +datadg2/lunar/PARAMETERFILE/spfile.273.892298577
Password file: 
Domain: 
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: 
Disk Groups: CRSDG,DATADG,DATADG2
Mount point paths: 
Services: 
Type: RAC
Start concurrency: 
Stop concurrency: 
OSDBA group: dba
OSOPER group: oinstall
Database instances: 
Configured nodes: 
Database is administrator managed
[oracle@lunarrac ~]$ 

Check the CRS status; the database is now registered:

[root@lunarrac ~]# crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRSDG.dg
               ONLINE  ONLINE       lunarrac                 STABLE
ora.DATADG.dg
               ONLINE  ONLINE       lunarrac                 STABLE
ora.DATADG2.dg
               ONLINE  ONLINE       lunarrac                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       lunarrac                 STABLE
ora.asm
               ONLINE  ONLINE       lunarrac                 Started,STABLE
ora.net1.network
               ONLINE  ONLINE       lunarrac                 STABLE
ora.ons
               ONLINE  ONLINE       lunarrac                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       lunarrac                 STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       lunarrac                 169.254.52.98 192.16
                                                             8.60.56,STABLE
ora.cvu
      1        ONLINE  ONLINE       lunarrac                 STABLE
ora.lunar.db
      1        OFFLINE OFFLINE                               STABLE
ora.lunarrac.vip
      1        ONLINE  ONLINE       lunarrac                 STABLE
ora.mgmtdb
      1        OFFLINE OFFLINE                               STABLE
ora.oc4j
      1        ONLINE  ONLINE       lunarrac                 STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       lunarrac                 STABLE
--------------------------------------------------------------------------------
[root@lunarrac ~]# 

Start the database:

[oracle@lunarrac ~]$ srvctl start database -d lunar
[oracle@lunarrac ~]$ 
.
[root@lunarrac ~]# crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRSDG.dg
               ONLINE  ONLINE       lunarrac                 STABLE
ora.DATADG.dg
               ONLINE  ONLINE       lunarrac                 STABLE
ora.DATADG2.dg
               ONLINE  ONLINE       lunarrac                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       lunarrac                 STABLE
ora.asm
               ONLINE  ONLINE       lunarrac                 Started,STABLE
ora.net1.network
               ONLINE  ONLINE       lunarrac                 STABLE
ora.ons
               ONLINE  ONLINE       lunarrac                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       lunarrac                 STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       lunarrac                 169.254.52.98 192.16
                                                             8.60.56,STABLE
ora.cvu
      1        ONLINE  ONLINE       lunarrac                 STABLE
ora.lunar.db
      1        ONLINE  ONLINE       lunarrac                 Open,STABLE
ora.lunarrac.vip
      1        ONLINE  ONLINE       lunarrac                 STABLE
ora.mgmtdb
      1        OFFLINE OFFLINE                               STABLE
ora.oc4j
      1        ONLINE  ONLINE       lunarrac                 STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       lunarrac                 STABLE
--------------------------------------------------------------------------------
[root@lunarrac ~]# 

After attaching the old storage, also check the password file; if necessary, re-grant the sysdba and sysoper privileges to sys and system so that you can log in to the database with OS authentication.


Oracle 12.2 Sharding - 1 - Basic concepts

Oracle 12.2 beta was released internally in August 2015 (the latest internal release, from February 2016, covers both Windows and Linux). According to the 12.2 beta documentation, Oracle now offers a sharding feature; compared with NoSQL-style sharding architectures, Oracle Sharding provides enterprise-grade sharding of a relational database.
.
Advantages of Oracle Sharding:
• Relational schemas
• Database partitioning
• ACID properties and read consistency
• SQL and other programmatic interfaces
• Complex data types
• Online schema changes
• Multi-core scalability
• Advanced security
• Compression
• High Availability features
• Enterprise-scale backup and recovery
.
Oracle RDBMS 12.2.0.1 supports at most 1000 shards.
.
Oracle Sharding uses the GDS (Global Data Services) architecture to automatically deploy and manage sharding and replication.
GDS (new in Oracle RDBMS 12.1) also provides load balancing and location-based routing within the SDB (sharded database).
.
Shard directors use the global service manager component of the GDS framework to route application-tier requests directly to the shards. The shard catalog is a separate database that stores the SDB (sharded database) configuration data and provides related functionality such as cross-shard queries and centralized management. The SDB is configured with the GDSCTL utility of GDS.

Oracle Sharding's Partitioning Infrastructure
Partitions are distributed across shards at the tablespace level: each partition of a sharded table is placed in its own tablespace, and each tablespace is associated with a specific shard. Depending on the sharding method, this association is established automatically or by explicit definition. Although the partitions of a sharded table live in databases on separate hosts (fully independent, sharing no CPU, memory, or other software and hardware), the application accesses the table as if it were a single partitioned table in one database, and its SQL statements do not depend on the shard number or the physical shard configuration.
.
Oracle Sharding uses familiar SQL syntax to create table partitions and to specify how each row of the partitioned table is sharded.
The partitioning key of a sharded table is called the sharding key. For example, the following is typical syntax for creating a sharded table:
CREATE SHARDED TABLE customers
( cust_id NUMBER NOT NULL
, name VARCHAR2(50)
, address VARCHAR2(250)
, region VARCHAR2(20)
, class VARCHAR2(3)
, signup DATE
, CONSTRAINT cust_pk PRIMARY KEY(cust_id)
)
PARTITION BY CONSISTENT HASH (cust_id)
TABLESPACE SET ts1
PARTITIONS AUTO;
The data is sharded on the key cust_id, and the partitioning method is CONSISTENT HASH, a hash partitioning type commonly used in distributed systems.
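As an illustration only (this is the general consistent-hashing idea, not Oracle's internal algorithm), the routing behind a CONSISTENT HASH partition can be sketched in Python: each shard owns points on a hash ring, a key is routed to the first shard point at or after the key's hash, and adding a shard only remaps the keys between two adjacent ring points:

```python
import bisect
import hashlib

def ring_point(value: str) -> int:
    # Map a string to a point on a 2**32 hash ring.
    return int(hashlib.md5(value.encode()).hexdigest(), 16) % 2**32

class ConsistentHashRing:
    def __init__(self, shards, vnodes=64):
        # vnodes: virtual nodes per shard, to spread keys evenly.
        self.ring = sorted((ring_point(f"{s}#{i}"), s)
                           for s in shards for i in range(vnodes))
        self.points = [p for p, _ in self.ring]

    def shard_for(self, key) -> str:
        # First ring point clockwise from the key's hash (wrapping around).
        i = bisect.bisect(self.points, ring_point(str(key))) % len(self.ring)
        return self.ring[i][1]

ring = ConsistentHashRing(["shard1", "shard2", "shard3"])
print(ring.shard_for(123))  # every cust_id deterministically maps to one shard
```

The shard names and vnode count here are made up for the sketch; the point is only that routing is deterministic and rebalancing on shard addition is incremental.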

.
Sharding a Table Family
A table in a table family that has no parent is called the root table; each table family has exactly one root table.
All tables in a table family are sharded by the primary key of the root table, so related rows from the different levels can be stored on the same shard.
In 12.2, an SDB supports only one table family.
.
The following example uses a table family of three tables:
a customers table, an orders table, and an order line-items table.
Each customer can have multiple orders, and each order can contain multiple products, so the line-items table records the products of each order. The sample data is as follows:


[Figure 1: sample data of the Customers, Orders and LineItems tables]


Within this table family, the data for customer number 123 is as follows:


[Figure 2: data of customer 123 across the table family]


.
A sharded table family is usually created in one of two ways:
Method 1: do not specify the parent-child relationships explicitly; instead, derive the table family from the primary/foreign key relationships between the tables. A table family created this way is a multi-level tree.
The root table is the customers table:
-- The primary key of Customers is CustNo, partitioned by CONSISTENT HASH (CustNo)
-- Stored in tablespace set ts1

CREATE SHARDED TABLE Customers
( CustNo NUMBER NOT NULL
, Name VARCHAR2(50)
, Address VARCHAR2(250)
, CONSTRAINT RootPK PRIMARY KEY(CustNo)
)
PARTITION BY CONSISTENT HASH (CustNo)
PARTITIONS AUTO
TABLESPACE SET ts1
;

-- Orders is a child of Customers, joined to its parent on CustNo:
-- The primary key of Orders is (CustNo, OrderNo); the foreign key (CustNo) references Customers(CustNo)
-- Partitioned by reference to the foreign key constraint CustFK

CREATE SHARDED TABLE Orders
( OrderNo NUMBER NOT NULL
, CustNo NUMBER NOT NULL
, OrderDate DATE
, CONSTRAINT OrderPK PRIMARY KEY (CustNo, OrderNo)
, CONSTRAINT CustFK FOREIGN KEY (CustNo) REFERENCES Customers(CustNo)
)
PARTITION BY REFERENCE (CustFK)
;

-- LineItems is a child of Orders, joined to its parent on (CustNo, OrderNo):
-- The primary key of LineItems is (CustNo, OrderNo, LineNo); the foreign key (CustNo, OrderNo) references Orders(CustNo, OrderNo)
-- Partitioned by reference to the foreign key constraint LineFK

CREATE SHARDED TABLE LineItems
( CustNo NUMBER NOT NULL
, LineNo NUMBER(2) NOT NULL
, OrderNo NUMBER(5) NOT NULL
, StockNo NUMBER(4)
, Quantity NUMBER(2)
, CONSTRAINT LinePK PRIMARY KEY (CustNo, OrderNo, LineNo)
, CONSTRAINT LineFK FOREIGN KEY (CustNo, OrderNo) REFERENCES Orders(CustNo, OrderNo)
)
PARTITION BY REFERENCE (LineFK)
;

In the example above, all data of the table family is therefore stored in the same tablespace set ts1.
When a partition is added to the root table, corresponding partitions are automatically added to every associated table.

.
Method 2: create the table family by specifying the parent-child relationships explicitly in the partitioned tables.
This method supports only two-level table families: all child tables must have the same parent table, and the parent's partitioning column (CustNo below) must exist in every child table.
.
-- The table without the PARENT keyword (and without the reference constraints used above) is the root table, i.e. Customers

CREATE SHARDED TABLE Customers
( CustNo NUMBER NOT NULL
, Name VARCHAR2(50)
, Address VARCHAR2(250)
, region VARCHAR2(20)
, class VARCHAR2(3)
, signup DATE
)
PARTITION BY CONSISTENT HASH (CustNo)
TABLESPACE SET ts1
PARTITIONS AUTO
;

-- The keyword "PARENT Customers" specifies that the parent of Orders is Customers

CREATE SHARDED TABLE Orders
( OrderNo NUMBER
, CustNo NUMBER
, OrderDate DATE
)
PARENT Customers
PARTITION BY CONSISTENT HASH (CustNo)
TABLESPACE SET ts1
PARTITIONS AUTO
;

-- The keyword "PARENT Customers" specifies that the parent of LineItems is Customers

CREATE SHARDED TABLE LineItems
( LineNo NUMBER
, OrderNo NUMBER
, CustNo NUMBER
, StockNo NUMBER
, Quantity NUMBER
)
PARENT Customers
PARTITION BY CONSISTENT HASH (CustNo)
TABLESPACE SET ts1
PARTITIONS AUTO
;

Creating a Duplicated Table Using CREATE TABLE
A duplicated table is replicated to all shards; a table with identical contents on every shard is called a duplicated table.
Small tables that are frequently joined with sharded tables are good candidates, in particular:
(1) read-only tables
(2) tables with heavy cross-shard read activity
.
Oracle Sharding uses materialized view replication to synchronize the contents of duplicated tables.
On each shard, a duplicated table is a read-only materialized view.
The master tables of the materialized views are kept in a dedicated database called the shard catalog.
The materialized views on all shards are refreshed automatically at a configurable frequency.
The CREATE DUPLICATED TABLE statement automatically creates the master table, the materialized views, and the other objects required for materialized view replication.
Continuing the customer-order example above, the products table (Products) is defined as a duplicated table:


[Figure 3: the Products duplicated table in the customer-order schema]


CREATE DUPLICATED TABLE Products
( StockNo NUMBER PRIMARY KEY
, Description VARCHAR2(20)
, Price NUMBER(6,2)
)
;

Given how sharding works, the sharding design has a major impact on subsequent system performance.
Once the sharded database has been created and loaded with data, the related attributes can no longer be changed, e.g. whether a table is a duplicated table or a sharded table, the sharding key, and so on.
The design of the SDB is therefore critical. When designing the sharding, consider:
which tables should be sharded
which tables should be duplicated tables
which sharded table is the root table
how to associate a table with other tables or with the root table
which sharding method to use
which column to use as the sharding key

Posted in ORACLE 12C

11.2 RAC Series – Install a New Host, Recognize the Old Storage – 3 – Configure the Database on the Old Storage

11.2 RAC Series – Install a New Host, Recognize the Old Storage – 1 – Identify the ASM Disks
11.2 RAC Series – Install a New Host, Recognize the Old Storage – 2 – Prepare to Recognize the Database
11.2 RAC Series – Install a New Host, Recognize the Old Storage – 3 – Configure the Database on the Old Storage
Oracle 12.1 RAC Series – Install a New Host, Recognize the Old Storage and Recover the Database
Install the Oracle 11.2.0.4 database software and run root.sh; there is nothing special here, so it is omitted.
Next we need to fix the permissions on the oracle binary of the ORACLE RDBMS home so that the oracle database processes can access the ASM disk groups.

[root@lunar5 ~]# su - grid
[grid@lunar5 ~]$ $ORACLE_HOME/bin/setasmgidwrap o=/u01/app/oracle/product/11.2.0.4/dbhome_1/bin/oracle
[grid@lunar5 ~]$ ll /u01/app/oracle/product/11.2.0.4/dbhome_1/bin/oracle
-rwsr-s--x 1 oracle asmadmin 239626689 Feb 15 22:09 /u01/app/oracle/product/11.2.0.4/dbhome_1/bin/oracle
[grid@lunar5 ~]$ 
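The "s" characters in the -rwsr-s--x mode shown by ls are the setuid and setgid bits: the binary runs with its owner's uid (oracle) and group (asmadmin), which is what setasmgidwrap arranges so database processes can open ASM disks. A quick illustrative Python check (on a temporary file, not the actual binary) decodes those bits:

```python
import os
import stat
import tempfile

# -rwsr-s--x corresponds to mode 6751: setuid (4000) + setgid (2000)
# on top of rwxr-x--x (751). Demonstrated on a throwaway temp file.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o6751)
mode = os.stat(path).st_mode
print(bool(mode & stat.S_ISUID), bool(mode & stat.S_ISGID))  # True True
os.remove(path)
```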

Note that /u01/app/oracle/product/11.2.0.4/dbhome_1/bin/oracle here is the oracle binary under the ORACLE_HOME where the ORACLE RDBMS software was installed.
Then add the database to CRS and start it:

[oracle@lunar5 ~]$ srvctl add database -d lunar -n lunar5 -o $ORACLE_HOME -p +DATADG2/LUNAR/PARAMETERFILE/spfile.272.892409049 -s OPEN -y AUTOMATIC -a "TEMPDG,DATADG1,DATADG2,DATADG3" -t IMMEDIATE
[oracle@lunar5 ~]$ 

Check the database configuration in the OCR:

[oracle@lunar5 ~]$ srvctl config database -d lunar
Database unique name: lunar
Database name: lunar5
Oracle home: /u01/app/oracle/product/11.2.0.4/dbhome_1
Oracle user: oracle
Spfile: +DATADG2/LUNAR/PARAMETERFILE/spfile.272.892409049
Domain: 
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: lunar
Database instances: 
Disk Groups: TEMPDG,DATADG1,DATADG2,DATADG3
Mount point paths: 
Services: 
Type: RAC
Database is administrator managed
[oracle@lunar5 ~]$ 

Start the database:

[oracle@lunar5 ~]$ srvctl start database -d lunar
[oracle@lunar5 ~]$ 

Check the CRS status:

[root@lunar5 ~]# crsctl status res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATADG1.dg
               ONLINE  ONLINE       lunar5                                       
ora.DATADG2.dg
               ONLINE  ONLINE       lunar5                                       
ora.DATADG3.dg
               ONLINE  ONLINE       lunar5                                       
ora.LISTENER.lsnr
               ONLINE  ONLINE       lunar5                                       
ora.TEMPDG.dg
               ONLINE  ONLINE       lunar5                                       
ora.asm
               ONLINE  ONLINE       lunar5                   Started             
ora.gsd
               OFFLINE OFFLINE      lunar5                                       
ora.net1.network
               ONLINE  ONLINE       lunar5                                       
ora.ons
               ONLINE  ONLINE       lunar5                                       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       lunar5                                       
ora.cvu
      1        ONLINE  ONLINE       lunar5                                       
ora.lunar.db
      1        ONLINE  ONLINE	    lunar5                   Open          
ora.lunar5.vip
      1        ONLINE  ONLINE       lunar5                                       
ora.oc4j
      1        ONLINE  ONLINE       lunar5                                       
ora.scan1.vip
      1        ONLINE  ONLINE       lunar5                                       
[root@lunar5 ~]# 

At this point the whole procedure of recognizing the old storage from a new host (mainly recognizing ASM) is complete. A file-system based environment is much simpler, since all of the ASM steps can be skipped.
.
Oracle 12.1 RAC series:
Oracle 12.1 RAC Series – Install a New Host, Recognize the Old Storage and Recover the Database
Oracle 12.1 RAC Series – Configure a Second Network and the Corresponding SCAN2

Posted in ASM, Installation and Deinstall, Linux, Oracle 11.1 & Oracle11.2, RAC

11.2 RAC Series – Install a New Host, Recognize the Old Storage – 2 – Prepare to Recognize the Database

11.2 RAC Series – Install a New Host, Recognize the Old Storage – 1 – Identify the ASM Disks
11.2 RAC Series – Install a New Host, Recognize the Old Storage – 2 – Prepare to Recognize the Database
11.2 RAC Series – Install a New Host, Recognize the Old Storage – 3 – Configure the Database on the Old Storage
Oracle 12.1 RAC Series – Install a New Host, Recognize the Old Storage and Recover the Database
Assume the original hosts can no longer boot at all (hardware failure, etc.), so the database's parameter file can only be located in ASM on the storage:

[grid@lunar5 ~]$ ss

SQL*Plus: Release 11.2.0.4.0 Production on Mon Feb 15 21:48:15 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SYS@+ASM1>col name for a55
SYS@+ASM1>col type for a25
SYS@+ASM1>SELECT f.group_number, f.file_number, a.name name, f.type type
  2  FROM v$asm_file f, v$asm_alias a
  3  WHERE f.group_number=a.group_number and f.file_number=a.file_number
  4   and f.type='PARAMETERFILE'
  5  ORDER BY 1, 2;

    GROUP_NUMBER      FILE_NUMBER NAME                                                    TYPE
---------------- ---------------- ------------------------------------------------------- -------------------------
               2              272 spfile.272.892409049                                    PARAMETERFILE
               2              272 spfilelunar.ora                                         PARAMETERFILE

Elapsed: 00:00:00.09
SYS@+ASM1>

This shows that the parameter file used by the database is spfilelunar.ora, which is an alias for spfile.272.892409049.
Let's look at it inside ASM:

ASMCMD> pwd
+datadg2/lunar
ASMCMD> ls -l
Type           Redund  Striped  Time             Sys  Name
                                                 Y    ARCHIVELOG/
                                                 Y    CONTROLFILE/
                                                 Y    DATAFILE/
                                                 Y    ONLINELOG/
                                                 Y    PARAMETERFILE/
                                                 Y    TEMPFILE/
                                                 N    spfilelunar.ora => +DATADG2/LUNAR/PARAMETERFILE/spfile.272.892409049
ASMCMD>    
Copy this file to the file system and inspect its contents to confirm it is the parameter file of the database we need:
ASMCMD> cp +DATADG2/LUNAR/PARAMETERFILE/spfile.272.892409049 /tmp/lunar.db.spfile
copying +DATADG2/LUNAR/PARAMETERFILE/spfile.272.892409049 -> /tmp/lunar.db.spfile
ASMCMD> 

Check the contents of the database spfile:

[root@lunar5 ~]# ll /tmp/lunar.db.spfile
-rw-r----- 1 grid oinstall 2560 Feb 15 21:51 /tmp/lunar.db.spfile
[root@lunar5 ~]# strings /tmp/lunar.db.spfile
*.audit_file_dest='/u01/app/oracle/admin/lunar/adump'
*.audit_trail='db'
*.cluster_database=true
*.compatible='11.2.0.4.0'
*.control_files='+DATADG2/lunar/controlfile/current.256.892408309','+DATADG2/lunar/controlfile/current.257.892408311'
*.db_block_size=8192
*.db_cache_size=157286400
*.db_create_file_dest='+DATADG2'
*.db_domain=''
*.db_name='lunar'
*.db_recovery_file_dest='+DATADG2'
*.db_recovery_file_dest_size=4194304000
*.diagnostic_dest='/u01/app/oracle'
lunar2.instance_number=
lunar1.instance_number=1
*.java_pool_size=31457280
*.large_pool_size=31457280
*.log_archive_format='%t_%s_%r.dbf'
*.open_cursors=300
*.pga_aggregate_target=228589568
*.processes=150
*.remote_listener='lunar-scan:1521'
*.remote_login_passwordfile='exclusive'
*.shared_pool_size=419430400
lunar2.thread=2
lunar1.thread=1
lunar2.undo_tablespace='UNDOTBS2'
lunar1.undo_tablespace='UNDOTBS1'
[root@lunar5 ~]# 

This confirms that +datadg2/lunar/spfilelunar.ora (i.e. +DATADG2/LUNAR/PARAMETERFILE/spfile.272.892409049) is the database parameter file we need.
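The `strings` trick above works because an spfile is mostly binary with the parameter text embedded in it. A minimal Python equivalent (illustrative only, not a real spfile parser; the sample blob below is made up) just extracts the printable runs:

```python
import re

def strings(data: bytes, min_len: int = 4):
    # Extract printable ASCII runs of at least min_len bytes,
    # like the Unix strings(1) utility does.
    return [m.group().decode()
            for m in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data)]

# Fake spfile-like blob: parameter text surrounded by binary noise.
blob = b"\x00\x01*.db_name='lunar'\x00\xff*.db_block_size=8192\x02"
for s in strings(blob):
    print(s)
```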

Posted in ASM, Installation and Deinstall, Oracle 11.1 & Oracle11.2, RAC

11.2 RAC Series – Install a New Host, Recognize the Old Storage – 1 – Identify the ASM Disks

In some scenarios, if the hosts of a RAC environment fail (e.g. hardware faults) and cannot boot, we need to bring the database on the storage back up as quickly as possible to restore service, which means migrating the old RAC environment to new hosts. I tested 11.2 and 12.1 RAC; the recovery is quite fast, essentially just a software installation. In a real recovery there are two options:
1. Reinstall the hosts as described here, then recover the RAC and the database
2. If a usable OS backup exists (e.g. the OS was backed up with NBU), simply restore it with NBU
.
I tested method 1: reinstall the 11204 GI (Grid Infrastructure) and ORACLE RDBMS software, then recognize the old storage.
Test environment: single-node RAC, OEL Linux 6.6, database version 11.2.0.4
11.2 RAC Series – Install a New Host, Recognize the Old Storage – 1 – Identify the ASM Disks
11.2 RAC Series – Install a New Host, Recognize the Old Storage – 2 – Prepare to Recognize the Database
11.2 RAC Series – Install a New Host, Recognize the Old Storage – 3 – Configure the Database on the Old Storage
Oracle 12.1 RAC Series – Install a New Host, Recognize the Old Storage and Recover the Database
.
First, since the storage uses 11204 ASM, the test only installs the 11204 GI (Grid Infrastructure) software, without configuring GI through the OUI.

[root@lunar5 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@lunar5 ~]# 

Run root.sh:

[root@lunar5 ~]# /u01/app/11.2.0.4/grid/root.sh
Performing root user operation for Oracle 11g 

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0.4/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.

Using configuration parameter file: /u01/app/11.2.0.4/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
  root wallet
  root wallet cert
  root cert export
  peer wallet
  profile reader wallet
  pa wallet
  peer wallet keys
  pa wallet keys
  peer cert request
  pa cert request
  peer cert
  pa cert
  peer root cert TP
  profile reader root cert TP
  pa root cert TP
  peer pa cert TP
  pa peer cert TP
  profile reader pa cert TP
  profile reader peer cert TP
  peer user cert
  pa user cert
Adding Clusterware entries to upstart
CRS-2672: Attempting to start 'ora.mdnsd' on 'lunar5'
CRS-2676: Start of 'ora.mdnsd' on 'lunar5' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'lunar5'
CRS-2676: Start of 'ora.gpnpd' on 'lunar5' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'lunar5'
CRS-2672: Attempting to start 'ora.gipcd' on 'lunar5'
CRS-2676: Start of 'ora.cssdmonitor' on 'lunar5' succeeded
CRS-2676: Start of 'ora.gipcd' on 'lunar5' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'lunar5'
CRS-2672: Attempting to start 'ora.diskmon' on 'lunar5'
CRS-2676: Start of 'ora.diskmon' on 'lunar5' succeeded
CRS-2676: Start of 'ora.cssd' on 'lunar5' succeeded

ASM created and started successfully.

Disk Group TEMPDG created successfully.

clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 381226346c054f62bf5f219ee16d7e25.
Successfully replaced voting disk group with +TEMPDG.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   381226346c054f62bf5f219ee16d7e25 (/dev/asm-diski) [TEMPDG]
Located 1 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'lunar5'
CRS-2676: Start of 'ora.asm' on 'lunar5' succeeded
CRS-2672: Attempting to start 'ora.TEMPDG.dg' on 'lunar5'
CRS-2676: Start of 'ora.TEMPDG.dg' on 'lunar5' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@lunar5 ~]# 

The contents of the corresponding checkpoint file:

[root@lunar5 ~]# ll /u01/app/grid/Clusterware/ckptGridHA_lunar5.xml
-rw-r--r-- 1 grid oinstall 4176 Feb 15 19:31 /u01/app/grid/Clusterware/ckptGridHA_lunar5.xml
[root@lunar5 ~]# 
[root@lunar5 ~]# tail  /u01/app/grid/Clusterware/ckptGridHA_lunar5.xml
   <CHECKPOINT LEVEL="MAJOR" NAME="ROOTCRS_OSDSETUP" DESC="ROOTCRS_OSDSETUP" STATE="SUCCESS"/>
   <CHECKPOINT LEVEL="MAJOR" NAME="ROOTCRS_ONETIME" DESC="ROOTCRS_ONETIME" STATE="SUCCESS"/>
   <CHECKPOINT LEVEL="MAJOR" NAME="ROOTCRS_OLR" DESC="ROOTCRS_OLR" STATE="SUCCESS"/>
   <CHECKPOINT LEVEL="MAJOR" NAME="ROOTCRS_GPNPSETUP" DESC="ROOTCRS_GPNPSETUP" STATE="SUCCESS"/>
   <CHECKPOINT LEVEL="MAJOR" NAME="ROOTCRS_OHASD" DESC="ROOTCRS_OHASD" STATE="SUCCESS"/>
   <CHECKPOINT LEVEL="MAJOR" NAME="ROOTCRS_INITRES" DESC="ROOTCRS_INITRES" STATE="SUCCESS"/>
   <CHECKPOINT LEVEL="MAJOR" NAME="ROOTCRS_ACFSINST" DESC="ROOTCRS_ACFSINST" STATE="SUCCESS"/>
   <CHECKPOINT LEVEL="MAJOR" NAME="ROOTCRS_BOOTCFG" DESC="ROOTCRS_BOOTCFG" STATE="SUCCESS"/>
   <CHECKPOINT LEVEL="MAJOR" NAME="ROOTCRS_NODECONFIG" DESC="ROOTCRS_NODECONFIG" STATE="SUCCESS"/>
</CHECKPOINTS>
[root@lunar5 ~]# 

The entry DESC="ROOTCRS_NODECONFIG" STATE="SUCCESS" shows that GI configuration has completed.
Click OK in the GUI to run the remaining configuration; when it finishes, check the checkpoint file again:

[root@lunar5 ~]# ll  /u01/app/grid/Clusterware/ckptGridHA_lunar5.xml
-rw-r--r-- 1 grid oinstall 4176 Feb 15 19:31 /u01/app/grid/Clusterware/ckptGridHA_lunar5.xml
[root@lunar5 ~]# cd /u01/app/grid/Clusterware/
[root@lunar5 Clusterware]# ll
total 8
-rw-r--r-- 1 grid oinstall 4176 Feb 15 19:31 ckptGridHA_lunar5.xml
[root@lunar5 Clusterware]# 

Notice that the checkpoint file's timestamp did not change, which means the file is only used while root.sh is running. This mechanism was added in 11.2 so that a failed root.sh can resume configuration instead of starting over, a very considerate feature. In 12.2 it is even more convenient: root.sh reports the current configuration checkpoint directly, and if a step fails, Oracle automatically cleans up the old configuration so that, after the error is fixed, configuration can continue without reinstalling, much like a resumable download.
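Since the checkpoint file is plain XML, its state can also be inspected without the GUI. A small sketch (assuming only the CHECKPOINT/NAME/STATE attributes visible in the fragment above) could be:

```python
import xml.etree.ElementTree as ET

# Sample modeled on the ckptGridHA_<node>.xml fragment shown above.
xml_text = """<CHECKPOINTS>
   <CHECKPOINT LEVEL="MAJOR" NAME="ROOTCRS_BOOTCFG" DESC="ROOTCRS_BOOTCFG" STATE="SUCCESS"/>
   <CHECKPOINT LEVEL="MAJOR" NAME="ROOTCRS_NODECONFIG" DESC="ROOTCRS_NODECONFIG" STATE="SUCCESS"/>
</CHECKPOINTS>"""

root = ET.fromstring(xml_text)
states = {c.get("NAME"): c.get("STATE") for c in root.iter("CHECKPOINT")}
# root.sh finished its node configuration if ROOTCRS_NODECONFIG succeeded.
print(states["ROOTCRS_NODECONFIG"])  # SUCCESS
```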
.
For example, here is the root.sh run during a RAC installation on the 12.2 beta:

[root@lunar1 ~]#  /stage/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /stage

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option

Using configuration parameter file: /stage/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/lunar1/crsconfig/rootcrs_lunar1_2016-01-24_12-44-34AM.log
2016/01/24 12:44:34 CLSRSC-46: Error: '/stage/crs/install/ParentDirPerm_lunar1.txt' does not exist

2016/01/24 12:44:34 CLSRSC-46: Error: '/stage/crs/install/ParentDirPerm_lunar1.txt' does not exist

2016/01/24 12:44:35 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.

2016/01/24 12:44:35 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.

2016/01/24 12:44:35 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.

2016/01/24 12:44:35 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.

2016/01/24 12:44:37 CLSRSC-363: User ignored prerequisites during installation

2016/01/24 12:44:37 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.

2016/01/24 12:44:38 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.

2016/01/24 12:44:39 CLSRSC-594: Executing installation step 5 of 19: 'SaveParamFile'.

2016/01/24 12:44:40 CLSRSC-594: Executing installation step 6 of 19: 'SetupOSD'.

2016/01/24 12:44:41 CLSRSC-594: Executing installation step 7 of 19: 'CheckCRSConfig'.

2016/01/24 12:44:41 CLSRSC-594: Executing installation step 8 of 19: 'SetupLocalGPNP'.

2016/01/24 12:44:42 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.

2016/01/24 12:44:43 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.

2016/01/24 12:45:30 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.

2016/01/24 12:45:31 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.

2016/01/24 12:45:46 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'

2016/01/24 12:46:08 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.

2016/01/24 12:46:11 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.

2016/01/24 12:46:11 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.

2016/01/24 12:46:14 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'lunar1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'lunar1'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'lunar1'
CRS-2677: Stop of 'ora.mdnsd' on 'lunar1' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'lunar1' succeeded
CRS-2673: Attempting to stop 'ora.evmd' on 'lunar1'
CRS-2677: Stop of 'ora.evmd' on 'lunar1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'lunar1'
CRS-2677: Stop of 'ora.gipcd' on 'lunar1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'lunar1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2016/01/24 12:46:23 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.

CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'lunar1'
CRS-2672: Attempting to start 'ora.evmd' on 'lunar1'
CRS-2676: Start of 'ora.mdnsd' on 'lunar1' succeeded
CRS-2676: Start of 'ora.evmd' on 'lunar1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'lunar1'
CRS-2676: Start of 'ora.gpnpd' on 'lunar1' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'lunar1'
CRS-2676: Start of 'ora.gipcd' on 'lunar1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'lunar1'
CRS-2676: Start of 'ora.cssdmonitor' on 'lunar1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'lunar1'
CRS-2672: Attempting to start 'ora.diskmon' on 'lunar1'
CRS-2676: Start of 'ora.diskmon' on 'lunar1' succeeded
CRS-2676: Start of 'ora.cssd' on 'lunar1' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'lunar1'
CRS-2672: Attempting to start 'ora.ctssd' on 'lunar1'
CRS-2676: Start of 'ora.ctssd' on 'lunar1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'lunar1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'lunar1'
CRS-2676: Start of 'ora.asm' on 'lunar1' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'lunar1'
CRS-2676: Start of 'ora.storage' on 'lunar1' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'lunar1'
CRS-2676: Start of 'ora.crf' on 'lunar1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'lunar1'
CRS-2676: Start of 'ora.crsd' on 'lunar1' succeeded
CRS-6023: Starting Oracle Cluster Ready Services-managed resources
CRS-6017: Processing resource auto-start for servers: lunar1
CRS-6016: Resource auto-start has completed for server lunar1
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2016/01/24 12:47:34 CLSRSC-343: Successfully started Oracle Clusterware stack

2016/01/24 12:47:34 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.

CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on 'lunar1'
CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'lunar1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'lunar1'
CRS-2676: Start of 'ora.asm' on 'lunar1' succeeded
CRS-2672: Attempting to start 'ora.CRSDG.dg' on 'lunar1'
CRS-2676: Start of 'ora.CRSDG.dg' on 'lunar1' succeeded
2016/01/24 12:48:43 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.

2016/01/24 12:49:04 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

[root@lunar1 ~]# 

As you can see, this output is much clearer than root.sh in RAC releases before 12.2.
Now back to our environment, continuing with the old ASM disk groups:

[grid@lunar5 ~]$ kfod disks=all ds=true cluster=true status=true
--------------------------------------------------------------------------------
 Disk          Size Header    Path                                     Disk Group   User     Group   
================================================================================
   1:        512 Mb MEMBER    /dev/asm-diskb                           DATADG1      grid     asmadmin
   2:        512 Mb MEMBER    /dev/asm-diskc                           DATADG1      grid     asmadmin
   3:        512 Mb MEMBER    /dev/asm-diskd                           DATADG1      grid     asmadmin
   4:       1024 Mb MEMBER    /dev/asm-diske                           CRSDG        grid     asmadmin
   5:       1024 Mb MEMBER    /dev/asm-diskf                           CRSDG        grid     asmadmin
   6:       1024 Mb MEMBER    /dev/asm-diskg                           CRSDG        grid     asmadmin
   7:       5120 Mb MEMBER    /dev/asm-diskh                           DATADG2      grid     asmadmin
   8:       3072 Mb MEMBER    /dev/asm-diski                           TEMPDG       grid     asmadmin
--------------------------------------------------------------------------------
ORACLE_SID ORACLE_HOME                                      HOST_NAME           
================================================================================
     +ASM1 /u01/app/11.2.0.4/grid                           lunar5              
[grid@lunar5 ~]$ 

Add these disk groups to the list of disk groups that ASM mounts at startup:

[grid@lunar5 ~]$ ss

SQL*Plus: Release 11.2.0.4.0 Production on Mon Feb 15 21:31:29 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL> show parameter asm

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
asm_diskgroups                       string
asm_diskstring                       string      /dev/asm*
asm_power_limit                      integer     1
asm_preferred_read_failure_groups    string
SQL> alter system set asm_diskgroups='TEMPDG','CRSDG', 'DATADG1', 'DATADG2' scope=both;

System altered.

SQL> 

After mounting and dismounting the newly added disk groups, they are automatically registered in the OCR:

[root@lunar5 ~]# crsctl status res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRSDG.dg
               ONLINE  ONLINE       lunar5                                       
ora.DATADG1.dg
               ONLINE  ONLINE       lunar5                                       
ora.DATADG2.dg
               ONLINE  ONLINE       lunar5                                       
ora.LISTENER.lsnr
               ONLINE  ONLINE       lunar5                                       
ora.TEMPDG.dg
               ONLINE  ONLINE       lunar5                                       
ora.asm
               ONLINE  ONLINE       lunar5                   Started             
ora.gsd
               OFFLINE OFFLINE      lunar5                                       
ora.net1.network
               ONLINE  ONLINE       lunar5                                       
ora.ons
               ONLINE  ONLINE       lunar5                                       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       lunar5                                       
ora.cvu
      1        ONLINE  ONLINE       lunar5                                       
ora.lunar5.vip
      1        ONLINE  ONLINE       lunar5                                       
ora.oc4j
      1        ONLINE  ONLINE       lunar5                                       
ora.scan1.vip
      1        ONLINE  ONLINE       lunar5                                       
[root@lunar5 ~]# 

Examine the contents of the ASM disk groups (e.g. which databases they contain). Since ASM uses OMF, in most cases the directory structure of a disk group reveals which databases and related files this storage previously held:

ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  NORMAL  N         512   4096  1048576      3072     2769             1024             872              0             N  CRSDG/
MOUNTED  NORMAL  N         512   4096  1048576      1536     1251              512             369              0             N  DATADG1/
MOUNTED  EXTERN  N         512   4096  1048576      5120     3386                0            3386              0             N  DATADG2/
MOUNTED  EXTERN  N         512   4096  1048576      3072     2719                0            2719              0             Y  TEMPDG/
ASMCMD> cd +crsdg
ASMCMD> ls
ASMCMD> cd +datadg1
ASMCMD> ls
ASMCMD> cd +datadg2
ASMCMD> ls
LUNAR/
ASMCMD> 

This shows the storage contains one database. I had already deleted the contents of CRSDG, so the old OCR and voting disk files are gone.

Now that datadg2 is confirmed to be the only disk group holding data, the disk groups can be reorganized to make better use of the space (I will reuse this VM for many other tests later):

ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  NORMAL  N         512   4096  4194304      1536     1296              512             392              0             N  DATADG1/
MOUNTED  EXTERN  N         512   4096  1048576      5120     3386                0            3386              0             N  DATADG2/
MOUNTED  EXTERN  N         512   4096  1048576      3072     3016                0            3016              0             N  DATADG3/
MOUNTED  EXTERN  N         512   4096  1048576      3072     2719                0            2719              0             Y  TEMPDG/
ASMCMD> 

Check with kfod:

[grid@lunar5 ~]$ kfod disks=all ds=true cluster=true status=true
--------------------------------------------------------------------------------
 Disk          Size Header    Path                                     Disk Group   User     Group   
================================================================================
   1:        512 Mb MEMBER    /dev/asm-diskb                           DATADG1      grid     asmadmin
   2:        512 Mb MEMBER    /dev/asm-diskc                           DATADG1      grid     asmadmin
   3:        512 Mb MEMBER    /dev/asm-diskd                           DATADG1      grid     asmadmin
   4:       1024 Mb MEMBER    /dev/asm-diske                           DATADG3      grid     asmadmin
   5:       1024 Mb MEMBER    /dev/asm-diskf                           DATADG3      grid     asmadmin
   6:       1024 Mb MEMBER    /dev/asm-diskg                           DATADG3      grid     asmadmin
   7:       5120 Mb MEMBER    /dev/asm-diskh                           DATADG2      grid     asmadmin
   8:       3072 Mb MEMBER    /dev/asm-diski                           TEMPDG       grid     asmadmin
--------------------------------------------------------------------------------
ORACLE_SID ORACLE_HOME                                      HOST_NAME           
================================================================================
     +ASM1 /u01/app/11.2.0.4/grid                           lunar5              
[grid@lunar5 ~]$ 

Confirm that the current CRS status is normal:

[root@lunar5 ~]# crsctl status res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATADG1.dg
               ONLINE  ONLINE       lunar5                                       
ora.DATADG2.dg
               ONLINE  ONLINE       lunar5                                       
ora.DATADG3.dg
               ONLINE  ONLINE       lunar5                                       
ora.LISTENER.lsnr
               ONLINE  ONLINE       lunar5                                       
ora.TEMPDG.dg
               ONLINE  ONLINE       lunar5                                       
ora.asm
               ONLINE  ONLINE       lunar5                   Started             
ora.gsd
               OFFLINE OFFLINE      lunar5                                       
ora.net1.network
               ONLINE  ONLINE       lunar5                                       
ora.ons
               ONLINE  ONLINE       lunar5                                       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       lunar5                                       
ora.cvu
      1        ONLINE  ONLINE       lunar5                                       
ora.lunar5.vip
      1        ONLINE  ONLINE       lunar5                                       
ora.oc4j
      1        ONLINE  ONLINE       lunar5                                       
ora.scan1.vip
      1        ONLINE  ONLINE       lunar5                                       
[root@lunar5 ~]#
At this point the old storage's ASM disk groups are recognized on the new host.

Posted in Installation and Deinstall, Linux, Oracle 11.1 & Oracle11.2, RAC

January 2016 PSU List

 

Starting with the January 2016 PSUs, the naming convention includes the PSU release date as a suffix, which is much more readable than the old sequence numbers such as PSU5 and PSU6.

PSU, Bundle Patches

12.1.0.2  ----  JAN2016

Description   PSU                          GI PSU                       Bundle Patch (Windows 32bit & 64bit)
Jan-16        21948354 (12.1.0.2.160119)   22191349 (12.1.0.2.160119)   22310559 (12.1.0.2.160119)
OCT2015       21359755 (12.1.0.2.5)        21523234 (12.1.0.2.5)        21821214 (12.1.0.2.10)
JUL2015       20831110 (12.1.0.2.4)        20996835 (12.1.0.2.4)        21126814 (12.1.0.2.7)
APR2015       20299023 (12.1.0.2.3)        20485724 (12.1.0.2.3)        20684004 (12.1.0.2.4)
JAN2015       19769480 (12.1.0.2.2)        19954978 (12.1.0.2.2)        19720843 (12.1.0.2.1)
OCT2014       19303936 (12.1.0.2.1)        19392646 (12.1.0.2.1)        N/A

PSU, Bundle Patches

12.1.0.1 ---- JAN2016
(Starting with 12.1.0.1, Oracle provides only PSUs and no longer ships separate SPUs.)

 Description | PSU | GI PSU | Bundle Patch (Windows 64bit / 32bit)
 JAN2016 | 21951844 (12.1.0.1.160119) | j22191492 / k22191511 (12.1.0.1.160119) | 22494866 (12.1.0.1.160119)
 OCT2015 | 21352619 (12.1.0.1.9) | j21551666 / k21551685 (12.1.0.1.9) | 21744907 (12.1.0.1.21)
 JUL2015 | 20831107 (12.1.0.1.8) | j20996901 / k20996911 (12.1.0.1.8) | 21076681 (12.1.0.1.20)
 APR2015 | 20299016 (12.1.0.1.7) | j20485762 / k19971331 (12.1.0.1.7) | 20558101 (12.1.0.1.18)
 JAN2015 | 19769486 (12.1.0.1.6) | j19971324 / k19971331 (12.1.0.1.6) | 20160748 (12.1.0.1.16)
 OCT2014 | 19121550 (12.1.0.1.5) | j19392372 / k19392451 (12.1.0.1.5) | 19542943 (12.1.0.1.14)
 JUL2014 | 18522516 (12.1.0.1.4) | j18705901 / k18705972 (12.1.0.1.4) | 19062327 (12.1.0.1.11)
 APR2014 | 18031528 (12.1.0.1.3) | j18139660 / k18413105 (12.1.0.1.3) | 18448604 (12.1.0.1.7)
 JAN2014 | 17552800 (12.1.0.1.2) | 17735306 (12.1.0.1.2) | 17977915 (12.1.0.1.3)
 OCT2013 | 17027533 (12.1.0.1.1) | 17272829 (12.1.0.1.1) | 17363796 / 17363795 (12.1.0.1.1)

PSU, SPU(CPU), Bundle Patches

11.2.0.4 ---- JAN2016

 Description | PSU | SPU(CPU) | GI PSU | Bundle Patch (Windows 32bit & 64bit)
 JAN2016 | 21948347 (11.2.0.4.160119) | 21972320 (11.2.0.4.160119) | 22191577 (11.2.0.4.160119) | 22310544 (11.2.0.4.160119)
 OCT2015 | 21352635 (11.2.0.4.8) | 21352646 | 21523375 (11.2.0.4.8) | 21821802 (11.2.0.4.20)
 JUL2015 | 20760982 (11.2.0.4.7) | 20803583 | 20996923 (11.2.0.4.7) | 21469106 (11.2.0.4.18)
 APR2015 | 20299013 (11.2.0.4.6) | 20299015 | 20485808 (11.2.0.4.6) | 20544696 (11.2.0.4.15)
 JAN2015 | 19769489 (11.2.0.4.5) | 19854503 | 19955028 (11.2.0.4.5) | 20127071 (11.2.0.4.12)
 OCT2014 | 19121551 (11.2.0.4.4) | 19271443 | 19380115 (11.2.0.4.4) | 19651773 (11.2.0.4.10)
 JUL2014 | 18522509 (11.2.0.4.3) | 18681862 | 18706472 (11.2.0.4.3) | 18842982 (11.2.0.4.7)
 APR2014 | 18031668 (11.2.0.4.2) | 18139690 | 18139609 (11.2.0.4.2) | 18296644 (11.2.0.4.4)
 JAN2014 | 17478514 (11.2.0.4.1) | 17551709 | N/A | 17987366 (11.2.0.4.1)

PSU, SPU(CPU), Bundle Patches

11.2.0.3 ---- JUL2015

 Description | PSU | SPU(CPU) | GI PSU | Bundle Patch (Windows 64bit) | Bundle Patch (Windows 32bit)
 JUL2015 | 20760997 (11.2.0.3.15) | 20803576 | 20996944 (11.2.0.3.15) | 21104036 | 21104035
 APR2015 | 20299017 (11.2.0.3.14) | 20299010 | 20485830 (11.2.0.3.14) | 20420395 | 20420394
 JAN2015 | 19769496 (11.2.0.3.13) | 19854461 | 19971343 (11.2.0.3.13) | 20233168 | 20233167
 OCT2014 | 19121548 (11.2.0.3.12) | 19271438 | 19440385 (11.2.0.3.12) | 19618575 | 19618574
 JUL2014 | 18522512 (11.2.0.3.11) | 18681866 | 18706488 (11.2.0.3.11) | 18940194 | 18940193
 APR2014 | 18031683 (11.2.0.3.10) | 18139695 | 18139678 (11.2.0.3.10) | 18372244 | 18372243
 JAN2014 | 17540582 (11.2.0.3.9) | 17478415 | 17735354 (11.2.0.3.9) | 18075406 | 17906981
 OCT2013 | 16902043 (11.2.0.3.8) | 17082364 | 17272731 (11.2.0.3.8) | 17363850 | 17363844
 JUL2013 | 16619892 (11.2.0.3.7) | 16742095 | 16742216 (11.2.0.3.7) | 16803775 | 16803774
 APR2013 | 16056266 (11.2.0.3.6) | 16294378 | 16083653 (11.2.0.3.6) | 16345834 | 16345833
 JAN2013 | 14727310 (11.2.0.3.5) | 14841409 | 14727347 (11.2.0.3.5) | 16042648 | 16042647
 OCT2012 | 14275605 (11.2.0.3.4) | 14390252 | 14275572 (11.2.0.3.4) | 14613223 | 14613222
 JUL2012 | 13923374 (11.2.0.3.3) | 14038787 | 13919095 (11.2.0.3.3) | 14223718 | 14223717
 APR2012 | 13696216 (11.2.0.3.2) | 13632717 | 13696251 (11.2.0.3.2) | 13885389 | 13885388
 JAN2012 | 13343438 (11.2.0.3.1) | 13466801 | 13348650 (11.2.0.3.1) | 13413168 | 13413167

PSU, SPU(CPU), Bundle Patches

11.2.0.2 ---- OCT2013

 Description | PSU | SPU(CPU) | GI PSU | Bundle Patch (Windows 64bit) | Bundle Patch (Windows 32bit)
 OCT2013 | 17082367 (11.2.0.2.12) | 17082375 | 17272753 (11.2.0.2.12) | 17363838 | 17363837
 JUL2013 | 16619893 (11.2.0.2.11) | 16742100 | 16742320 (11.2.0.2.11) | 16345852 | 16345851
 APR2013 | 16056267 (11.2.0.2.10) | 16294412 | 16166868 (11.2.0.2.10) | 16345846 | 16345845
 JAN2013 | 14727315 (11.2.0.2.9) | 14841437 | 14841385 (11.2.0.2.9) | 16100399 | 16100398
 OCT2012 | 14275621 (11.2.0.2.8) | 14390377 | 14390437 (11.2.0.2.8) | 14672268 | 14672267
 JUL2012 | 13923804 (11.2.0.2.7) | 14038791 | 14192201 (11.2.0.2.7) | 14134043 | 14134042
 APR2012 | 13696224 (11.2.0.2.6) | 13632725 | 13696242 (11.2.0.2.6) | 13697074 | 13697073
 JAN2012 | 13343424 (11.2.0.2.5) | 13343244 | 13653086 (11.2.0.2.5) | 13413155 | 13413154
 OCT2011 | 12827726 (11.2.0.2.4) | 12828071 | 12827731 (11.2.0.2.4) | 13038788 | 13038787
 JUL2011 | 12419331 (11.2.0.2.3) | 12419321 | 12419353 (11.2.0.2.3) | 12714463 | 12714462
 APR2011 | 11724916 (11.2.0.2.2) | 11724984 | 12311357 (11.2.0.2.2) | 11896292 | 11896290
 JAN2011 | 10248523 (11.2.0.2.1) | N/A | N/A | 10432053 | 10432052

PSU, SPU(CPU), Bundle Patches

11.2.0.1 ---- JUL2011

 Description | PSU | CPU | Bundle Patch (Windows 64bit) | Bundle Patch (Windows 32bit)
 JUL2011 | 12419378 (11.2.0.1.6) | 12419278 | 12429529 | 12429528
 APR2011 | 11724930 (11.2.0.1.5) | 11724991 | 11731176 | 11883240
 JAN2011 | 10248516 (11.2.0.1.4) | 10249532 | 10432045 | 10432044
 OCT2010 | 9952216 (11.2.0.1.3) | 9952260 | 10100101 | 10100100
 JUL2010 | 9654983 (11.2.0.1.2) | 9655013 | 9736865 | 9736864
 APR2010 | 9352237 (11.2.0.1.1) | 9369797 | N/A | N/A

PSU, SPU(CPU), Bundle Patches

11.1.0.7 ---- JUL2015

 Description | PSU | SPU(CPU) | Bundle Patch (Windows 64bit) | Bundle Patch (Windows 32bit)
 JUL2015 | 20761024 (11.1.0.7.24) | 20803573 | 21104030 | 21104029
 APR2015 | 20299012 (11.1.0.7.23) | 20299020 | 20420391 | 20420390
 JAN2015 | 19769499 (11.1.0.7.22) | 19854433 | 20126915 | 20126914
 OCT2014 | 19152553 (11.1.0.7.21) | 19274522 | 19609034 | 19609032
 JUL2014 | 18522513 (11.1.0.7.20) | 18681875 | 18944208 | 18944207
 APR2014 | 18031726 (11.1.0.7.19) | 18139703 | 18372258 | 18372257
 JAN2014 | 17465583 (11.1.0.7.18) | 17551415 | 17906936 | 17906935
 OCT2013 | 17082366 (11.1.0.7.17) | 17082374 | 17363760 | 17363759
 JUL2013 | 16619896 (11.1.0.7.16) | 16742110 | 16803788 | 16803787
 APR2013 | 16056268 (11.1.0.7.15) | 16308394 | 16345862 | 16345861
 JAN2013 | 14739378 (11.1.0.7.14) | 14841452 | 15848067 | 15848066
 OCT2012 | 14275623 (11.1.0.7.13) | 14390384 | 14672313 | 14672312
 JUL2012 | 13923474 (11.1.0.7.12) | 14038803 | 14109868 | 14109867
 APR2012 | 13621679 (11.1.0.7.11) | 13632731 | 13715810 | 13715809
 JAN2012 | 13343461 (11.1.0.7.10) | 13343453 | 13460956 | 13460955
 OCT2011 | 12827740 (11.1.0.7.9) | 12828097 | 12914916 | 12914915
 JUL2011 | 12419384 (11.1.0.7.8) | 12419265 | 12695278 | 12695277
 APR2011 | 11724936 (11.1.0.7.7) | 11724999 | 11741170 | 11741169
 JAN2011 | 10248531 (11.1.0.7.6) | 10249534 | 10350788 | 10350787
 OCT2010 | 9952228 (11.1.0.7.5) | 9952269 | 9773825 | 9773817
 JUL2010 | 9654987 (11.1.0.7.4) | 9655014 | 9869912 | 9869911
 APR2010 | 9352179 (11.1.0.7.3) | 9369783 | 9392335 | 9392331
 JAN2010 | 9209238 (11.1.0.7.2) | 9114072 | 9166861 | 9166858
 OCT2009 | 8833297 (11.1.0.7.1) | 8836375 | 8928977 | 8928976
 JUL2009 | N/A | 8534338 | 8553515 | 8553512
 APR2009 | N/A | 8290478 | 8343070 | 8343061

PSU, SPU(CPU), Bundle Patches

11.1.0.6 ---- JUL2009

 Description | CPU | Bundle Patch (Windows 64bit) | Bundle Patch (Windows 32bit)
 JUL2009 | 8534378 | 8563155 | 8563154
 APR2009 | 8290402 | 8333657 | 8333655
 JAN2009 | 7592335 | 7631981 | 7631980
 OCT2008 | 7375639 | 7378393 | 7378392
 JUL2008 | 7150417 | 7210197 | 7210195
 APR2008 | 6864063 | 6867180 | 6867178

PSU, SPU(CPU), Bundle Patches

10.2.0.5 ---- JUL2015

 Description | PSU | SPU(CPU) | Bundle Patch (Windows 64bit) | Bundle Patch (Windows 32bit) | Bundle Patch (Windows Itanium)
 JUL2015 | 20299014 (10.2.0.5.19) | 20299021 | 20420387 | 20420386 | N/A
 APR2015 | N/A | N/A | N/A | N/A | N/A
 JAN2015 | 19769505 (10.2.0.5.18) | 19854436 | 20126868 | 20126867 | N/A
 OCT2014 | 19274523 (10.2.0.5.17) | 19274521 | 19618565 | 19618563 | N/A
 JUL2014 | 18522511 (10.2.0.5.16) | 18681879 | 18940198 | 18940196 | N/A
 APR2014 | 18031728 (10.2.0.5.15) | 18139709 | 18372261 | 18372259 | N/A
 JAN2014 | 17465584 (10.2.0.5.14) | 17551414 | 17906974 | 17906972 | N/A
 OCT2013 | 17082365 (10.2.0.5.13) | 17082371 | N/A | 17363822 | N/A
 JUL2013 | 16619894 (10.2.0.5.12) | 16742123 | 16803782 | 16803780 | 16803781
 APR2013 | 16056270 (10.2.0.5.11) | 16270946 | 16345857 | 16345855 | 16345856
 JAN2013 | 14727319 (10.2.0.5.10) | 14841459 | 15848062 | 15848060 | 15848061
 OCT2012 | 14275629 (10.2.0.5.9) | 14390396 | 14553358 | 14553356 | 14553357
 JUL2012 | 13923855 (10.2.0.5.8) | 14038805 | 14134053 | 14134051 | 14134052
 APR2012 | 13632743 (10.2.0.5.7) | 13632738 | 13654815 | 13654814 | 13870404
 JAN2012 | 13343471 (10.2.0.5.6) | 13343467 | 13460968 | 13460967 | N/A
 OCT2011 | 12827745 (10.2.0.5.5) | 12828105 | 12914913 | 12914911 | N/A
 JUL2011 | 12419392 (10.2.0.5.4) | 12419258 | 12429524 | 12429523 | N/A
 APR2011 | 11724962 (10.2.0.5.3) | 11725006 | 12328269 | 12328268 | N/A
 JAN2011 | 10248542 (10.2.0.5.2) | 10249537 | 10352673 | 10352672 | N/A
 OCT2010 | 9952230 (10.2.0.5.1) | 9952270 | 10099855 | 10058290 | N/A

PSU, SPU(CPU), Bundle Patches
(In October 2012, the CPU was renamed to SPU; these are the quarterly patches that mainly address security vulnerabilities.)

10.2.0.4 ---- JUL2013

 Description | PSU | SPU(CPU) | Bundle Patch (Windows 32bit) | Bundle Patch (Windows 64bit) | Bundle Patch (Windows Itanium)
 JUL2013 | 16619897 (10.2.0.4.17) | 16742253 | N/A | N/A | N/A
 APR2013 | 16056269 (10.2.0.4.16) | 16270931 | N/A | N/A | N/A
 JAN2013 | 14736542 (10.2.0.4.15) | 14841471 | N/A | N/A | N/A
 OCT2012 | 14275630 (10.2.0.4.14) | 14390410 | N/A | N/A | N/A
 JUL2012 | 13923851 (10.2.0.4.13) | 14038814 | N/A | N/A | N/A
 APR2012 | 12879933 (10.2.0.4.12) | 12879926 | 13928775 | 13928776 | N/A
 JAN2012 | 12879929 (10.2.0.4.11) | 12879912 | 13654060 | N/A | N/A
 OCT2011 | 12827778 (10.2.0.4.10) | 12828112 | 12914908 | 12914910 | 12914909
 JUL2011 | 12419397 (10.2.0.4.9) | 12419249 | 12429519 | 12429521 | 12429520
 APR2011 | 11724977 (10.2.0.4.8) | 11725015 | 12328501 | 12328503 | 12328502
 JAN2011 | 10248636 (10.2.0.4.7) | 10249540 | 10349197 | 10349200 | 10349198
 OCT2010 | 9952234 (10.2.0.4.6) | 9952272 | 10084980 | 10084982 | 10084981
 JUL2010 | 9654991 (10.2.0.4.5) | 9655017 | 9777076 | 9777078 | 9777077
 APR2010 | 9352164 (10.2.0.4.4) | 9352191 | 9393548 | 9393550 | 9393549
 JAN2010 | 9119284 (10.2.0.4.3) | 9119226 | 9169457 | 9169460 | 9169458
 OCT2009 | 8833280 (10.2.0.4.2) | 8836308 | 8880857 | 8880861 | 8880858
 JUL2009 | 8576156 (10.2.0.4.1) | 8534387 | 8559466 | 8559467 | 8541782
 APR2009 | N/A | 8290506 | 8307237 | 8307238 | 8333678
 JAN2009 | N/A | 7592346 | 7584866 | 7584867 | N/A
 OCT2008 | N/A | 7375644 | 7386320 | 7386321 | N/A
 JUL2008 | N/A | 7150470 | 7218676 | 7218677 | N/A

CPU, Bundle Patches

10.2.0.3 ---- JAN2009

 Description | CPU (Unix/Linux) | Bundle Patch (Windows 32bit) | Bundle Patch (Windows Itanium) | Bundle Patch (Windows 64bit)
 JAN2009 | 7592354 | 7631956 | 7631958 | 7631957
 OCT2008 | 7369190 | 7353782 | 7353784 | 7353785
 JUL2008 | 7150622 | 7252496 | 7252497 | 7252498
 APR2008 | 6864068 | 6867054 | 6867055 | 6867056
 JAN2008 | 6646853 | 6637237 | 6637238 | 6637239
 OCT2007 | 6394981 | 6430171 | 6430173 | 6430174
 JUL2007 | 6079591 | 6116131 | 6038242 | 6116139
 APR2007 | 5901891 | 5948242 | 5916262 | 5948243
 JAN2007 | 5881721 | 5846376 | 5846377 | 5846378

CPU, Bundle Patches

10.2.0.2 ---- JAN2009

 Description | CPU (Unix/Linux) | Bundle Patch (Windows 32bit) | Bundle Patch (Windows 64bit) | Bundle Patch (Windows Itanium)
 JAN2009 | 7592355 | N/A | N/A | N/A
 OCT2008 | 7375660 | N/A | N/A | N/A
 JUL2008 | 7154083 | N/A | N/A | N/A
 APR2008 | 6864071 | N/A | N/A | N/A
 JAN2008 | 6646850 | N/A | N/A | N/A
 OCT2007 | 6394997 | 6397028 | 6397030 | 6397029
 JUL2007 | 6079588 | 6013105 | 6013121 | 6013118
 APR2007 | 5901881 | 5912173 | 5912179 | 5912176
 JAN2007 | 5689957 | 5716143 | 5699839 | 5699824
 OCT2006 | 5490848 | 5502226 | 5500921 | 5500894
 JUL2006 | 5225799 | 5251025 | 5251028 | 5251026
 APR2006 | 5079037 | 5140461 | 5140567 | 5140508

CPU, Bundle Patches

10.2.0.1 ---- APR2007

 Description | CPU (Unix/Linux) | Bundle Patch (Windows 32bit) | Bundle Patch (Windows 64bit) | Bundle Patch (Windows Itanium)
 APR2007 | 5901880 | N/A | N/A | N/A
 JAN2007 | 5689937 | 5695784 | 5695786 | 5695785
 OCT2006 | 5490846 | 5500927 | 5500954 | 5500951
 JUL2006 | 5225798 | 5239698 | 5239701 | 5239699
 APR2006 | 5049080 | 5059238 | 5059261 | 5059251
 JAN2006 | 4751931 | 4751539 | 4770480 | 4751549

CPU, Bundle Patches

10.1.0.5 ---- JAN2012

 Description | CPU (Unix/Linux) | Bundle Patch (Windows 32bit) | Bundle Patch (Windows Itanium)
 JAN2012 | 13343482 | 13413002 | 13413003
 OCT2011 | 12828135 | 12914905 | 12914906
 JUL2011 | 12419228 | 12429517 | 12429518
 APR2011 | 11725035 | 11731119 | 11731120
 JAN2011 | N/A | N/A | N/A
 OCT2010 | 9952279 | 10089559 | 10089560
 JUL2010 | 9655023 | 9683651 | 9683652
 APR2010 | 9352208 | 9390288 | 9390289
 JAN2010 | 9119261 | 9187104 | 9187105
 OCT2009 | 8836540 | 8785211 | 8785212
 JUL2009 | 8534394 | 8656224 | 8656226
 APR2009 | 8290534 | 8300356 | 8300360
 JAN2009 | 7592360 | 7486619 | 7586049
 OCT2008 | 7375686 | 7367493 | 7367494
 JUL2008 | 7154097 | 7047034 | 7047037
 APR2008 | 6864078 | 6867107 | 6867108
 JAN2008 | 6647005 | 6637274 | 6637275
 OCT2007 | 6395024 | 6408393 | 6408394
 JUL2007 | 6079585 | 6115804 | 6115818
 APR2007 | 5901877 | 5907304 | 5907305
 JAN2007 | 5689908 | 5716295 | 5634747
 OCT2006 | 5490845 | 5500883 | 5500885
 JUL2006 | 5225797 | 5251148 | 5251140
 APR2006 | 5049074 | 5057606 | 5057609
 JAN2006 | 4751932 | 4882231 | 4882236

CPU, Bundle Patches

10.1.0.4 ---- APR2007

 Description | CPU (Unix/Linux) | Bundle Patch (Windows 32bit) | Bundle Patch (Windows Itanium)
 APR2007 | 5901876 | 5909871 | 5909879
 JAN2007 | 5689894 | 5695771 | 5695772
 OCT2006 | 5490844 | 5500878 | 5500880
 JUL2006 | 5225796 | 5239736 | 5239737
 APR2006 | 5049067 | 5059200 | 5059227
 JAN2006 | 4751928 | 4751259 | 4745040
 OCT2005 | 4567866 | 4579182 | 4579188
 JUL2005 | 4392423 | 4440706 | 4404600
 APR2005 | 4210374 | 4287619 | 4287611

CPU, Bundle Patches

10.1.0.3 ---- JAN2007

 Description | CPU (Unix/Linux) | Bundle Patch (Windows 32bit) | Bundle Patch (Windows Itanium)
 JAN2007 | 5923277 | N/A | N/A
 OCT2006 | 5566825 | N/A | N/A
 JUL2006 | 5435164 | N/A | N/A
 APR2006 | 5158022 | N/A | N/A
 JAN2006 | 4751926 | 4741077 | 4741084
 OCT2005 | 4567863 | 4567518 | 4567523
 JUL2005 | 4392409 | 4389012 | 4389014
 APR2005 | 4193286 | 4269715 | 4158888
 JAN2005 | 4003062 | 4074232 | 3990812

CPU, Bundle Patches

10.1.0.2 ---- JUL2005

 Description | CPU (Unix/Linux) | Bundle Patch (Windows 32bit) | Bundle Patch (Windows Itanium)
 JUL2005 | 4400766 | 4388944 | 4388948
 APR2005 | 4193293 | 4181849 | 4213305
 JAN2005 | 4003051 | 4104364 | 4083038

CPU, Bundle Patches

9.2.0.8 ---- JUL2010

 Description | CPU (Unix/Linux) | Bundle Patch (Windows 32bit) | Bundle Patch (Windows Itanium)
 JUL2010 | 9655027 | 9683644 | 9683645
 APR2010 | 9352224 | 9390286 | N/A
 JAN2010 | 9119275 | 9187106 | N/A
 OCT2009 | 8836758 | 8785185 | 8785186
 JUL2009 | 8534403 | 8427417 | 8427418
 APR2009 | 8290549 | 8300340 | 8300346
 JAN2009 | 7592365 | 7703210 | 7703212
 OCT2008 | 7375695 | 7394394 | 7394402
 JUL2008 | 7154111 | 7047026 | 7047029
 APR2008 | 6864082 | 6867138 | 6867139
 JAN2008 | 6646842 | 6637265 | 6637266
 OCT2007 | 6395038 | 6417013 | 6417014
 JUL2007 | 6079582 | 6130293 | 6130295
 APR2007 | 5901875 | 5916268 | 5916275
 JAN2007 | N/A | N/A | N/A
 OCT2006 | 5490859 | 5652380 | 5639519

CPU, Bundle Patches

9.2.0.7 ---- JUL2007

 Description | CPU (Unix/Linux) | Bundle Patch (Windows 32bit) | Bundle Patch (Windows Itanium)
 JUL2007 | 6079579 | 6146759 | 6146748
 APR2007 | 5901872 | 5907274 | 5907275
 JAN2007 | 5689875 | 5654905 | 5654909
 OCT2006 | 5490841 | 5500873 | 5500874
 JUL2006 | 5225794 | 5250980 | 5250981
 APR2006 | 5049060 | 5064365 | 5064364
 JAN2006 | 4751923 | 4751528 | 4741074
 OCT2005 | 4567854 | 4579590 | 4579599
 JUL2005 | 4547566 | N/A | N/A

CPU, Bundle Patches

9.2.0.6 ---- OCT2006

 Description | CPU (Unix/Linux) | Bundle Patch (Windows 32bit) | Bundle Patch (Windows Itanium)
 OCT2006 | 5490840 | 5500865 | 5500871
 JUL2006 | 5225793 | 5239794 | 5239793
 APR2006 | 5049051 | 5059614 | 5059615
 JAN2006 | 4751921 | 4751261 | 4751262
 OCT2005 | 4567846 | 4579093 | 4579097
 JUL2005 | 4392392 | 4445852 | 4401917
 APR2005 | 4193295 | 4269928 | 4213298

CPU, Bundle Patches

9.2.0.5 ---- OCT2006

 Description | CPU (Unix/Linux) | Bundle Patch (Windows 32bit) | Bundle Patch (Windows Itanium)
 OCT2006 | 5689708 | N/A | N/A
 JUL2006 | 5435138 | N/A | N/A
 APR2006 | 5219762 | N/A | N/A
 OCT2005 | 4560421 | N/A | N/A
 JUL2005 | 4392256 | 4387563 | 4391819
 APR2005 | 4193299 | 4195791 | 4214192
 JAN2005 | 4003006 | 4104374 | 3990809

CPU, Bundle Patches

9.2.0.4 ---- JAN2005

 Description | CPU (Unix/Linux) | Bundle Patch (Windows 32bit) | Bundle Patch (Windows Itanium)
 JAN2005 | 4002994 | 4104369 | 4083202

CPU, Bundle Patches

8.1.7.4 ---- JAN2007

 Description | CPU (Unix/Linux) | Bundle Patch (Windows 32bit)
 JAN2007 | 5689799 | 5686514
 OCT2006 | 5490835 | 5496067
 JUL2006 | 5225788 | 5236412
 APR2006 | 5045247 | 5057601
 JAN2006 | 4751906 | 4751570
 OCT2005 | 4560405 | 4554818
 JUL2005 | 4392446 | 4437058
 APR2005 | 4193312 | 4180163
 JAN2005 | 4002909 | 3921893


Converting a Single-Instance Database to a RAC Database -- Using rconfig

Test goal: convert a single-instance database to a RAC database
Test environment: Oracle 11.2.0.3
Test method: rconfig

.
$ su - oracle
cp $ORACLE_HOME/assistants/rconfig/sampleXMLs/ConvertToRAC_AdminManaged.xml lunar.xml
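Before running rconfig, the copied lunar.xml is edited for the target environment. A minimal sketch of the fields that are typically changed, assuming the standard 11.2 sample-file layout; all values below are placeholders for this test setup, not taken from the post:

```xml
<!-- Placeholder values; adjust paths, SID, node names, and disk groups. -->
<n:Convert verify="YES">
  <n:SourceDBHome>/u01/app/oracle/product/11.2.0/db_1</n:SourceDBHome>
  <n:TargetDBHome>/u01/app/oracle/product/11.2.0/db_1</n:TargetDBHome>
  <n:SourceDBInfo SID="lunar"/>
  <n:NodeList>
    <n:Node name="rac1"/>
    <n:Node name="rac2"/>
  </n:NodeList>
  <n:InstancePrefix>lunar</n:InstancePrefix>
  <n:SharedStorage type="ASM">
    <n:TargetDatabaseArea>+ASMDATA</n:TargetDatabaseArea>
    <n:TargetFlashRecoveryArea>+ASMDATA</n:TargetFlashRecoveryArea>
  </n:SharedStorage>
</n:Convert>
```

Setting verify="ONLY" instead makes rconfig run only the prerequisite checks without converting, which is a convenient dry run.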
The conversion itself is extremely simple. If you watch the rconfig log during the operation, you will see that this method essentially does three things:
1. Automatically configures the RAC-side CRS information in the background (CRS resources, the spfile, the password file, and so on)
2. Automatically calls RMAN to back up and restore the database (a "backup as copy" style backup)
3. Automatically adds the thread 2 redo log groups for the second RAC node
.
The detailed steps are as follows:

[oracle@rac1 test]$ rconfig lunar.xml
Converting Database "lunar" to Cluster Database. Target Oracle Home: /u01/app/oracle/product/11.2.0/db_1. Database Role: PRIMARY.
Setting Data Files and Control Files
Adding Database Instances
Adding Redo Logs
Enabling threads for all Database Instances
Setting TEMP tablespace
Adding UNDO tablespaces
Adding Trace files
Setting Fast Recovery Area
Updating Oratab
Creating Password file(s)
Configuring Listeners
Configuring related CRS resources
Starting Cluster Database
<?xml version="1.0" ?>
<RConfig version="1.1" >
<ConvertToRAC>
    <Convert>
      <Response>
        <Result code="0" >
          Operation Succeeded
        </Result>
      </Response>
      <ReturnValue type="object">
<Oracle_Home>
         /u01/app/oracle/product/11.2.0/db_1
       </Oracle_Home>
       <Database type="ADMIN_MANAGED"  >
         <InstanceList>
           <Instance SID="lunar1" Node="rac1"  >
           </Instance>
           <Instance SID="lunar2" Node="rac2"  >
           </Instance>
         </InstanceList>
       </Database>     </ReturnValue>
    </Convert>
  </ConvertToRAC></RConfig>
[oracle@rac1 test]$ 

While it runs you can watch the log to see how rconfig performs the conversion; I personally find this quite interesting.
Log location: $ORACLE_BASE/cfgtoollogs/rconfig_<timestamp>.
.
Check the CRS status after the conversion:

[root@rac2 ~]# crsctl status res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMDATA.dg
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.DATA.dg
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.LISTENER_DG.lsnr
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.asm
               ONLINE  ONLINE       rac1                     Started             
               ONLINE  ONLINE       rac2                     Started             
ora.gsd
               OFFLINE OFFLINE      rac1                                         
               OFFLINE OFFLINE      rac2                                         
ora.net1.network
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.net2.network
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.ons
               ONLINE  INTERMEDIATE rac1                     CHECK TIMED OUT     
               ONLINE  INTERMEDIATE rac2                     CHECK TIMED OUT     
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac2                                         
ora.cvu
      1        ONLINE  ONLINE       rac2                                         
ora.lunar.db
      1        ONLINE  ONLINE       rac1                     Open                
      2        ONLINE  ONLINE       rac2                     Open                
ora.oc4j
      1        ONLINE  ONLINE       rac1                                         
ora.r-dg1-vip.vip
      1        ONLINE  ONLINE       rac1                                         
ora.r-dg2-vip.vip
      1        ONLINE  ONLINE       rac2                                         
ora.rac1.vip
      1        ONLINE  ONLINE       rac1                                         
ora.rac2.vip
      1        ONLINE  ONLINE       rac2                                         
ora.racdb.db
      1        OFFLINE OFFLINE                               Instance Shutdown   
      2        OFFLINE OFFLINE                               Instance Shutdown   
ora.scan1.vip
      1        ONLINE  ONLINE       rac2                                         
[root@rac2 ~]# 

Common problem 1:
Error message: The cluster is not configured or is not running on node: RAC1

[oracle@RAC1 test]$ rconfig lunar.xml
<?xml version="1.0" ?>
<RConfig version="1.1" >
<ConvertToRAC>
    <Convert>
      <Response>
        <Result code="1" >
          Operation Failed
        </Result>
       <ErrorDetails>
             The cluster is not configured or is not running on node: RAC1
       </ErrorDetails>
      </Response>
    </Convert>
  </ConvertToRAC></RConfig>
[oracle@RAC1 test]$ 

Solution: first register this single-instance database with the CRS.
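On 11.2 this registration is normally done with srvctl. A sketch, where the database name, Oracle home, and spfile path are placeholders for this test environment:

```shell
# Hypothetical example: register the single-instance database "lunar" with
# the clusterware before running rconfig, then start it under CRS control.
srvctl add database -d lunar \
    -o /u01/app/oracle/product/11.2.0/db_1 \
    -p +ASMDATA/lunar/spfilelunar.ora
srvctl start database -d lunar
```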

.
Common problem 2:
Error message: Node that has single instance database running should be specified in the NodeList. Please specify node RAC1 as part of NodeList.

[oracle@RAC1 test]$ rconfig lunar.xml
<?xml version="1.0" ?>
<RConfig version="1.1" >
<ConvertToRAC>
    <Convert>
      <Response>
        <Result code="1" >
          Got Exception
        </Result>
       <ErrorDetails>
             Node that has single instance database running should be specified in the NodeList. Please specify node RAC1 as part of NodeList.
Operation Failed. Refer logs at /u01/app/oracle/cfgtoollogs/rconfig/rconfig_06_24_13_07_46_07.log for more details.

       </ErrorDetails>
      </Response>
    </Convert>
  </ConvertToRAC></RConfig>
[oracle@RAC1 test]$ 

Solution: fix the database parameters, verify that the settings for both instances are correct, restart the database with the spfile, and then retry.
For example:

[oracle@rac1 test]$ cat spfile.lunar.bak
*.audit_file_dest='/u01/app/oracle/admin/lunar/adump'
*.audit_trail='none'
*.compatible='11.2.0.3.0'
*.control_files='+ASMDATA/lunar/control01.ctl','+DATA/lunar/control02.ctl'
*.db_block_size=8192
*.db_domain=''
*.db_name='lunar'
*.db_recovery_file_dest='+ASMDATA'
*.db_recovery_file_dest_size=10485760000
*.deferred_segment_creation=FALSE
*.diagnostic_dest='/u01/app/oracle'
*.open_cursors=300
*.pga_aggregate_target=153092096
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=629145600
lunar1.undo_tablespace='UNDOTBS1'
#lunar2.undo_tablespace='UNDOTBS2'
*.db_create_file_dest='+ASMDATA'
*.db_create_online_log_dest_1='+DATA'
lunar1.instance_number=1
lunar2.instance_number=2
*.cluster_database=true
*.cluster_database_instances=2
*.undo_management=AUTO
lunar1.instance_name=lunar1
lunar2.instance_name=lunar2
lunar1.thread=1
lunar2.thread=2
lunar1.local_listener=LISTENER_RAC1
lunar2.local_listener=LISTENER_RAC2

[oracle@rac1 test]$ 

Common problem 3:
Error message: There is no return value for this step

[oracle@rac1 test]$ rconfig lunar.xml
<?xml version="1.0" ?>
<RConfig version="1.1" >
<ConvertToRAC>
    <Convert>
      <Response>
        <Result code="0" >
          Operation Succeeded
        </Result>
      </Response>
      <ReturnValue type="object">
There is no return value for this step     </ReturnValue>
    </Convert>
  </ConvertToRAC></RConfig>
[oracle@rac1 test]$ 

Solution: change the Convert element to verify="YES", then run the single-instance-to-RAC conversion again.

The execution log of the whole process:

[main] [ 2013-06-24 08:11:32.949 CST ] [RConfig.<init>:88]  java.class.path=/u01/app/oracle/product/11.2.0/db_1/jdk/jre/lib/rt.jar:/u01/app/oracle/product/11.2.0/db_1/jdk/jre/lib/i18n.jar:/u01/app/oracle/product/11.2.0/db_1/assistants/jlib/rconfigPatch.jar:/u01/app/oracle/product/11.2.0/db_1/assistants/jlib/rconfig.jar:/u01/app/oracle/product/11.2.0/db_1/assistants/jlib/assistantsCommon.jar:/u01/app/oracle/product/11.2.0/db_1/jlib/ewt3.jar:/u01/app/oracle/product/11.2.0/db_1/jlib/ewtcompat-3_3_15.jar:/u01/app/oracle/product/11.2.0/db_1/jlib/share.jar:/u01/app/oracle/product/11.2.0/db_1/jlib/help4.jar:/u01/app/oracle/product/11.2.0/db_1/jlib/jewt4.jar:/u01/app/oracle/product/11.2.0/db_1/jlib/oracle_ice5.jar:/u01/app/oracle/product/11.2.0/db_1/jlib/kodiak.jar:/u01/app/oracle/product/11.2.0/db_1/jlib/swingall-1_1_1.jar:/u01/app/oracle/product/11.2.0/db_1/lib/xmlparserv2.jar:/u01/app/oracle/product/11.2.0/db_1/jlib/orai18n.jar:/u01/app/oracle/product/11.2.0/db_1/jlib/orai18n-mapping.jar:/u01/app/oracle/product/11.2.0/db_1/jlib/orai18n-utility.jar:/u01/app/oracle/product/11.2.0/db_1/jlib/orai18n-collation.jar::/u01/app/oracle/product/11.2.0/db_1/jlib/ldapjclnt11.jar:/u01/app/oracle/product/11.2.0/db_1/jlib/netcfg.jar:/u01/app/oracle/product/11.2.0/db_1/jlib/ojmisc.jar:/u01/app/oracle/product/11.2.0/db_1/jlib/oraclepki103.jar:/u01/app/oracle/product/11.2.0/db_1/jlib/opm.jar:/u01/app/oracle/product/11.2.0/db_1/jdbc/lib/ojdbc5.jar:/u01/app/oracle/product/11.2.0/db_1/jlib/srvm.jar:/u01/app/oracle/product/11.2.0/db_1/jlib/srvmhas.jar:/u01/app/oracle/product/11.2.0/db_1/jlib/srvmasm.jar:/u01/app/oracle/product/11.2.0/db_1/classes:/u01/app/oracle/product/11.2.0/db_1/jlib/oemlt.jar:/u01/app/oracle/product/11.2.0/db_1/jlib/emca.jar:/u01/app/oracle/product/11.2.0/db_1/sysman/jlib/emCORE.jar:/u01/app/oracle/product/11.2.0/db_1/oc4j/j2ee/home/oc4j.jar:/u01/app/oracle/product/11.2.0/db_1/oc4j/j2ee/home/db_oc4j_deploy.jar:/u01/app/oracle/product/11.2.0/db_1/jlib/emConfigInstall.jar:/u
01/app/oracle/product/11.2.0/db_1/oui/jlib/OraInstaller.jar:/u01/app/oracle/product/11.2.0/db_1/oui/jlib/OraPrereq.jar:/u01/app/oracle/product/11.2.0/db_1/inventory/prereqs/oui/OraPrereqChecks.jar:/u01/app/oracle/product/11.2.0/db_1/oui/jlib/OraPrereqChecks.jar:/u01/app/oracle/product/11.2.0/db_1/jlib/cvu.jar:/u01/app/oracle/product/11.2.0/db_1/jdk/jre/lib/ext/sunjce_provider.jar:/u01/app/oracle/product/11.2.0/db_1/assistants/asmca/jlib/asmca.jar:/u01/app/oracle/product/11.2.0/db_1/jlib/gns.jar
[main] [ 2013-06-24 08:11:32.951 CST ] [RConfig.<init>:89]  java.home=/u01/app/oracle/product/11.2.0/db_1/jdk/jre
[main] [ 2013-06-24 08:11:32.952 CST ] [RConfig.<init>:90]  rconfig.user.dir=/home/oracle/test
[main] [ 2013-06-24 08:11:33.381 CST ] [ConfigParser.createURL:182]  PATH=/home/oracle/test/oracle/sysman/assistants/rconfig/parser/rconfig.xsd
[main] [ 2013-06-24 08:11:33.382 CST ] [ConfigParser.createURL:197]  URL=file:/home/oracle/test/oracle/sysman/assistants/rconfig/parser/rconfig.xsd
[main] [ 2013-06-24 08:11:33.385 CST ] [ConfigParser.parse:104]  Found the parser at url file:/home/oracle/test/oracle/sysman/assistants/rconfig/parser/rconfig.xsd
[main] [ 2013-06-24 08:11:33.586 CST ] [ConfigParser.parse:136]  Input File is not null, parsing input file xml
[main] [ 2013-06-24 08:11:33.587 CST ] [ConfigParser.parse:143]  Input xml Path : lunar.xml
[main] [ 2013-06-24 08:11:33.588 CST ] [ConfigParser.parse:147]  validating the input against rconfig.xsd:
[main] [ 2013-06-24 08:11:33.596 CST ] [ConfigurationHandler.startElement:114]  localName=RConfig
[main] [ 2013-06-24 08:11:33.598 CST ] [ConfigurationHandler.startElement:114]  localName=ConvertToRAC
[main] [ 2013-06-24 08:11:33.611 CST ] [ConfigurationHandler.processElement:75]  RACConversion request created
[main] [ 2013-06-24 08:11:33.612 CST ] [ConfigurationHandler.startElement:114]  localName=Convert