
Oracle 10gR2 (10.2.0.5) RAC Installation on Linux


Oracle 10gR2 (10.2.0.5) RAC Installation on Linux, Part 1: Preparation

Environment: OEL 5.7 + Oracle 10.2.0.5 RAC

1. Pre-installation Preparation

  • 1.1 Install the operating system on the servers
  • 1.2 Oracle installation media
  • 1.3 Shared storage planning
  • 1.4 Network planning

2. Host Configuration

  • 2.1 Use yum to install the oracle-validated package to simplify host configuration
  • 2.2 Shared storage configuration
  • 2.3 Configure /etc/hosts
  • 2.4 Configure Oracle user equivalence
  • 2.5 Create software directories
  • 2.6 Configure user environment variables
  • 2.7 Disable the firewall and SELinux on all nodes
  • 2.8 Synchronize system time across nodes

Oracle 10gR2 RAC on Linux installation guide:
Part 1: Oracle 10gR2 (10.2.0.5) RAC Installation on Linux, Part 1: Preparation
Part 2: Oracle 10gR2 (10.2.0.5) RAC Installation on Linux, Part 2: clusterware Installation and Upgrade
Part 3: Oracle 10gR2 (10.2.0.5) RAC Installation on Linux, Part 3: db Installation and Upgrade

1. Pre-installation Preparation

1.1 Install the operating system on the servers

Prepare two identically configured servers and install the same Linux version on both. Keep the installation DVD or ISO image.
Here both run OEL 5.7 with identical filesystem layouts. The OEL 5.7 ISO image is stored on the servers for setting up a local yum repository later.

1.2 Oracle installation media

The 10.2.0.1 clusterware and database media, plus the 10.2.0.5 patchset:

 

-rwxr-xr-x 1 root root 302M Dec 24 13:07 10201_clusterware_linux_x86_64.cpio.gz
-rwxr-xr-x 1 root root 724M Dec 24 13:08 10201_database_linux_x86_64.cpio.gz
-rwxr-xr-x 1 root root 1.2G Dec 24 13:10 p8202632_10205_Linux-x86-64.zip

Download these from support.oracle.com with a MOS account; they only need to be uploaded to node 1.

1.3 Shared storage planning

From the storage, carve out shared LUNs that both hosts can see simultaneously.
My lab environment uses openfiler to simulate the shared LUNs:
5 LUNs of 100M each, for OCR and the voting disks;
3 LUNs of 10G each, for DATA;
2 LUNs of 5G each, for FRA.

For openfiler usage, see: Configuring Shared Storage for RAC with Openfiler.

1.4 Network planning

Plan a public network and a private network.
Public network: physical NIC eth0 (public IP and VIP), requiring 4 IP addresses.
Private network: physical NIC eth1 (private IP), requiring 2 internal IP addresses.

In real production environments servers usually have at least 4 NICs. The recommendation is to bond them in pairs, one bond for the public network and one for the private network.

2. Host Configuration

2.1 Use yum to install the oracle-validated package to simplify host configuration

Since the OS here is OEL 5.7, installing oracle-validated simplifies dependency package installation, kernel parameter tuning, and user/group creation. See: Simplifying host configuration with yum install oracle-validated on OEL.

2.2 Shared storage configuration

Here the openfiler host's IP address is 192.168.1.12. All 10 planned LUNs are mapped to the target iqn.2006-01.com.openfiler:rac10g.

[root@oradb28 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.1.12
192.168.1.12:3260,1 iqn.2006-01.com.openfiler:rac10g

# Manually log in to the iSCSI target
iscsiadm -m node -T iqn.2006-01.com.openfiler:rac10g -p 192.168.1.12 -l

# Configure automatic login at startup
iscsiadm -m node -T iqn.2006-01.com.openfiler:rac10g -p 192.168.1.12 --op update -n node.startup -v automatic

# Restart the iSCSI service
service iscsi stop
service iscsi start

Note: when installing 10g RAC, make sure every shared LUN is recognized under the same device name on all nodes.

[root@oradb27 ~]# ls -lh /dev/sd*
brw-r----- 1 root disk 8,   0 Jan  2 22:40 /dev/sda
brw-r----- 1 root disk 8,  16 Jan  2 22:40 /dev/sdb
brw-r----- 1 root disk 8,  32 Jan  2 22:40 /dev/sdc
brw-r----- 1 root disk 8,  48 Jan  2 22:40 /dev/sdd
brw-r----- 1 root disk 8,  64 Jan  2 22:40 /dev/sde
brw-r----- 1 root disk 8,  80 Jan  2 22:40 /dev/sdf
brw-r----- 1 root disk 8,  96 Jan  2 22:40 /dev/sdg
brw-r----- 1 root disk 8, 112 Jan  2 22:40 /dev/sdh
brw-r----- 1 root disk 8, 128 Jan  2 22:40 /dev/sdi
brw-r----- 1 root disk 8, 144 Jan  2 22:40 /dev/sdj

[root@oradb28 ~]# ls -lh /dev/sd*
brw-r----- 1 root disk 8,   0 Jan  2 22:41 /dev/sda
brw-r----- 1 root disk 8,  16 Jan  2 22:41 /dev/sdb
brw-r----- 1 root disk 8,  32 Jan  2 22:41 /dev/sdc
brw-r----- 1 root disk 8,  48 Jan  2 22:41 /dev/sdd
brw-r----- 1 root disk 8,  64 Jan  2 22:41 /dev/sde
brw-r----- 1 root disk 8,  80 Jan  2 22:41 /dev/sdf
brw-r----- 1 root disk 8,  96 Jan  2 22:41 /dev/sdg
brw-r----- 1 root disk 8, 112 Jan  2 22:41 /dev/sdh
brw-r----- 1 root disk 8, 128 Jan  2 22:41 /dev/sdi
brw-r----- 1 root disk 8, 144 Jan  2 22:41 /dev/sdj

Here sda, sdb, sdc, sdd, and sde are the 100M LUNs; create a single partition on each of the five. (In my testing, binding the unpartitioned disks directly as raw devices caused root.sh to fail after the clusterware install with "Failed to upgrade Oracle Cluster Registry configuration"; after partitioning and binding the partitions as raw devices instead, it completed normally.)
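Partitioning the five small LUNs can be scripted. The sketch below is a dry run that only prints the commands it would execute; the helper name is my own and sfdisk is just one way to do it (interactive fdisk works equally well). Dropping the leading echo makes it destructive, so double-check the device names first.

```shell
# Dry-run sketch: print (do not run) a command that creates a single
# partition spanning each of the five 100M LUNs.
gen_partition_cmds() {
    for d in sda sdb sdc sdd sde; do
        # ",,L" means: default start, use all remaining space, type Linux
        echo "echo ',,L' | sfdisk /dev/$d"
    done
}
gen_partition_cmds
```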

[root@oradb27 ~]# ls -lh /dev/sd*
brw-r----- 1 root disk 8,  0 Jan  3 09:36 /dev/sda
brw-r----- 1 root disk 8,  1 Jan  3 09:36 /dev/sda1
brw-r----- 1 root disk 8, 16 Jan  3 09:36 /dev/sdb
brw-r----- 1 root disk 8, 17 Jan  3 09:36 /dev/sdb1
brw-r----- 1 root disk 8, 32 Jan  3 09:36 /dev/sdc
brw-r----- 1 root disk 8, 33 Jan  3 09:36 /dev/sdc1
brw-r----- 1 root disk 8, 48 Jan  3 09:36 /dev/sdd
brw-r----- 1 root disk 8, 49 Jan  3 09:36 /dev/sdd1
brw-r----- 1 root disk 8, 64 Jan  3 09:36 /dev/sde
brw-r----- 1 root disk 8, 65 Jan  3 09:36 /dev/sde1

[root@oradb28 crshome_1]# ls -lh /dev/sd*
brw-r----- 1 root disk 8,  0 Jan  3 09:36 /dev/sda
brw-r----- 1 root disk 8,  1 Jan  3 09:36 /dev/sda1
brw-r----- 1 root disk 8, 16 Jan  3 09:36 /dev/sdb
brw-r----- 1 root disk 8, 17 Jan  3 09:36 /dev/sdb1
brw-r----- 1 root disk 8, 32 Jan  3 09:36 /dev/sdc
brw-r----- 1 root disk 8, 33 Jan  3 09:36 /dev/sdc1
brw-r----- 1 root disk 8, 48 Jan  3 09:36 /dev/sdd
brw-r----- 1 root disk 8, 49 Jan  3 09:36 /dev/sdd1
brw-r----- 1 root disk 8, 64 Jan  3 09:36 /dev/sde
brw-r----- 1 root disk 8, 65 Jan  3 09:36 /dev/sde1

1) Bind raw devices with udev, for the OCR and voting disks

Edit the rules file and append the following:

# vi /etc/udev/rules.d/60-raw.rules
ACTION=="add", KERNEL=="sda1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", KERNEL=="sdc1", RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add", KERNEL=="sdd1", RUN+="/bin/raw /dev/raw/raw4 %N"
ACTION=="add", KERNEL=="sde1", RUN+="/bin/raw /dev/raw/raw5 %N"
ACTION=="add", KERNEL=="raw*", OWNER=="oracle", GROUP=="oinstall", MODE=="0660"

Run start_udev:

[root@oradb27 rules.d]# start_udev
Starting udev:                                             [OK]
[root@oradb27 rules.d]# ls -l /dev/raw*
crw-rw---- 1 oracle oinstall 162, 0 Jan  2 22:37 /dev/rawctl

/dev/raw:
total 0
crw-rw---- 1 oracle oinstall 162, 1 Jan  2 23:11 raw1
crw-rw---- 1 oracle oinstall 162, 2 Jan  2 23:11 raw2
crw-rw---- 1 oracle oinstall 162, 3 Jan  2 23:11 raw3
crw-rw---- 1 oracle oinstall 162, 4 Jan  2 23:11 raw4
crw-rw---- 1 oracle oinstall 162, 5 Jan  2 23:11 raw5
[root@oradb27 rules.d]# 

Copy the 60-raw.rules file to node 2:

[root@oradb27 rules.d]# scp /etc/udev/rules.d/60-raw.rules oradb28:/etc/udev/rules.d/

Run start_udev on node 2 as well.

Note: if the raw devices held data from a previous install, you may need to zero their headers with dd:

dd if=/dev/zero of=/dev/raw/raw1 bs=1048576 count=10
dd if=/dev/zero of=/dev/raw/raw2 bs=1048576 count=10
dd if=/dev/zero of=/dev/raw/raw3 bs=1048576 count=10
dd if=/dev/zero of=/dev/raw/raw4 bs=1048576 count=10
dd if=/dev/zero of=/dev/raw/raw5 bs=1048576 count=10

2) Bind ASM devices with udev, for the DATA and FRA disk groups

for i in f g h i j;
do
echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id -g -u -s %p\", RESULT==\"`scsi_id -g -u -s /block/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"oracle\", GROUP=\"oinstall\", MODE=\"0660\""
done

Running the loop on node 1 produces:

[root@oradb27 rules.d]# for i in f g h i j;
> do
> echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id -g -u -s %p\", RESULT==\"`scsi_id -g -u -s /block/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"oracle\", GROUP=\"oinstall\", MODE=\"0660\""
> done
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="14f504e46494c455279366c36366a2d5a4243752d58394a33", NAME="asm-diskf", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="14f504e46494c45525453586652542d67786f682d594c4a66", NAME="asm-diskg", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="14f504e46494c455232586c3151572d62504e412d3343547a", NAME="asm-diskh", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="14f504e46494c45527061334151682d4666656d2d5a6a4c67", NAME="asm-diski", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="14f504e46494c4552495649757a352d675251532d47744353", NAME="asm-diskj", OWNER="oracle", GROUP="oinstall", MODE="0660"
[root@oradb27 rules.d]# 

Save the generated rules into a new rules file:

[root@oradb27 rules.d]# vi 99-oracle-asmdevices.rules

[root@oradb27 rules.d]# start_udev
Starting udev:                                             [OK]
[root@oradb27 rules.d]# ls -lh /dev/asm*
brw-rw---- 1 oracle oinstall 8,  80 Jan  2 23:18 /dev/asm-diskf
brw-rw---- 1 oracle oinstall 8,  96 Jan  2 23:18 /dev/asm-diskg
brw-rw---- 1 oracle oinstall 8, 112 Jan  2 23:18 /dev/asm-diskh
brw-rw---- 1 oracle oinstall 8, 128 Jan  2 23:18 /dev/asm-diski
brw-rw---- 1 oracle oinstall 8, 144 Jan  2 23:18 /dev/asm-diskj

# Copy 99-oracle-asmdevices.rules to node 2 and run start_udev there
[root@oradb27 rules.d]# scp 99-oracle-asmdevices.rules oradb28:/etc/udev/rules.d/99-oracle-asmdevices.rules

[root@oradb28 ~]# start_udev
Starting udev:                                             [OK]
[root@oradb28 ~]# ls -l /dev/asm*
brw-rw---- 1 oracle oinstall 8,  80 Jan  2 23:20 /dev/asm-diskf
brw-rw---- 1 oracle oinstall 8,  96 Jan  2 23:20 /dev/asm-diskg
brw-rw---- 1 oracle oinstall 8, 112 Jan  2 23:20 /dev/asm-diskh
brw-rw---- 1 oracle oinstall 8, 128 Jan  2 23:20 /dev/asm-diski
brw-rw---- 1 oracle oinstall 8, 144 Jan  2 23:20 /dev/asm-diskj

2.3 Configure /etc/hosts

Configure /etc/hosts on node 1 according to the network plan:

#public ip
192.168.1.27  oradb27
192.168.1.28  oradb28
#private ip
10.10.10.27   oradb27-priv
10.10.10.28   oradb28-priv
#virtual ip
192.168.1.57  oradb27-vip
192.168.1.58  oradb28-vip

Then scp the /etc/hosts file to node 2:

scp /etc/hosts oradb28:/etc/
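A hostname that resolves to two different addresses will confuse the installer, so a quick duplicate check on the hosts file is worthwhile. A minimal sketch, with a hypothetical helper name and sample data:

```shell
# Print any hostname (second column) that appears more than once in a
# hosts-style file. Helper name and sample path are illustrative.
check_dup_hosts() {
    awk '$1 !~ /^#/ && $2 != "" {count[$2]++} END {for (h in count) if (count[h] > 1) print h}' "$1"
}

cat > /tmp/hosts.sample <<'EOF'
192.168.1.27  oradb27
192.168.1.28  oradb28
10.10.10.27   oradb27-priv
EOF
check_dup_hosts /tmp/hosts.sample   # prints nothing: no duplicates
```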

2.4 Configure Oracle user equivalence

# Run on all nodes:
ssh-keygen -q -t rsa  -N "" -f  ~/.ssh/id_rsa

# Run on node 1:
ssh 192.168.1.27 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh 192.168.1.28 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

chmod 600 ~/.ssh/authorized_keys

scp ~/.ssh/authorized_keys  192.168.1.28:~/.ssh/

# Verify ssh equivalence on all nodes:
ssh 192.168.1.27 date;ssh 192.168.1.28 date;
ssh oradb27 date;ssh oradb28 date;
ssh oradb27-priv date;ssh oradb28-priv date;

If in doubt about the ssh mutual-trust setup, see: Notes on configuring SSH mutual trust on Linux.

2.5 Create software directories

mkdir -p /u01/app/oracle/product/10.2.0.5/dbhome_1
mkdir -p /u01/app/oracle/product/10.2.0.5/crshome_1
chown -R oracle:oinstall /u01/app

2.6 Configure user environment variables

Node 1: vi /home/oracle/.bash_profile

export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/10.2.0.5/dbhome_1
export ORA_CRS_HOME=/u01/app/oracle/product/10.2.0.5/crshome_1
export ORACLE_SID=jyrac1
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export PATH=$ORACLE_HOME/bin:$ORA_CRS_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
alias sql="sqlplus \"/as sysdba\""

Node 2: vi /home/oracle/.bash_profile

export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/10.2.0.5/dbhome_1
export ORA_CRS_HOME=/u01/app/oracle/product/10.2.0.5/crshome_1
export ORACLE_SID=jyrac2
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export PATH=$ORACLE_HOME/bin:$ORA_CRS_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
alias sql="sqlplus \"/as sysdba\""

2.7 Disable the firewall and SELinux on all nodes

On each node, check and disable the firewall and SELinux:

service iptables status
service iptables stop
chkconfig iptables off

getenforce
setenforce 0
vi /etc/selinux/config
# change SELINUX=enforcing to SELINUX=disabled

2.8 Synchronize system time across nodes

service ntpd stop
date 
# If the time is wrong, set it with the following syntax
date 072310472015  # set the date to 2015-07-23 10:47:00
hwclock -w
hwclock -r
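The `MMDDhhmmYYYY` argument that date(1) takes here is easy to mistype. With GNU date the string can be generated from a human-readable timestamp first, then passed to `date` and `hwclock -w` as root:

```shell
# Build the MMDDhhmmYYYY string date(1) expects when setting the clock
# (GNU date syntax for -d).
target="2015-07-23 10:47"
stamp=$(date -d "$target" +%m%d%H%M%Y)
echo "$stamp"    # 072310472015
# then, as root: date "$stamp" && hwclock -w
```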

At this point, the host configuration and preparation work is complete.


Oracle 10gR2 (10.2.0.5) RAC Installation on Linux, Part 2: clusterware Installation and Upgrade
Environment: OEL 5.7 + Oracle 10.2.0.5 RAC

3. Install Clusterware

  • 3.1 Extract the clusterware installation media
  • 3.2 Install clusterware
  • 3.3 Run the scripts as root when prompted
  • 3.4 Run vipca manually (may not be needed)

4. Upgrade Clusterware

  • 4.1 Extract the patchset
  • 4.2 Start the clusterware upgrade
  • 4.3 Run the scripts as root when prompted


3. Install Clusterware

3.1 Extract the clusterware installation media

Grant ownership of the directory containing the Oracle installation media to the oracle user:

[root@oradb27 media]# chown -R oracle:oinstall /u01/media/

As the oracle user, extract the media:

[oracle@oradb27 media]$ gunzip 10201_clusterware_linux_x86_64.cpio.gz 
[oracle@oradb27 media]$ cpio -idmv < 10201_clusterware_linux_x86_64.cpio 

Run the pre-installation check:

[root@oradb27 media]# /u01/media/clusterware/rootpre/rootpre.sh 
No OraCM running 

3.2 Install clusterware

Use Xmanager (or XQuartz on macOS) for the X display and start the clusterware installation:

[root@oradb27 media]# cd /u01/media/clusterware/install
[root@oradb27 install]# vi oraparam.ini 
Under [Certified Versions], change:
Linux=redhat-3,SuSE-9,redhat-4,UnitedLinux-1.0,asianux-1,asianux-2
to include redhat-5, so that it reads:
Linux=redhat-3,SuSE-9,redhat-4,UnitedLinux-1.0,asianux-1,asianux-2,redhat-5
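The same edit can be done non-interactively. The sketch below runs against a sample copy of the file; point the sed command at the real oraparam.ini when using it:

```shell
# Append redhat-5 to the certified-versions line. The sample file
# stands in for /u01/media/clusterware/install/oraparam.ini.
cat > /tmp/oraparam.sample <<'EOF'
[Certified Versions]
Linux=redhat-3,SuSE-9,redhat-4,UnitedLinux-1.0,asianux-1,asianux-2
EOF
sed -i '/^Linux=/ s/$/,redhat-5/' /tmp/oraparam.sample
grep '^Linux=' /tmp/oraparam.sample
```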

[root@oradb27 clusterware]# pwd
/u01/media/clusterware
[root@oradb27 clusterware]# ./runInstaller 

3.3 Run the scripts as root when prompted

On node 1:

# First attempt, before partitioning the five LUNs /dev/sd{a,b,c,d,e}:
[root@oradb27 rules.d]# /u01/app/oracle/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oracle/oraInventory to 770.
Changing groupname of /u01/app/oracle/oraInventory to oinstall.
The execution of the script is complete
[root@oradb27 rules.d]# /u01/app/oracle/product/10.2.0.5/crshome_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.

Setting the permissions on OCR backup directory
Setting up NS directories
Failed to upgrade Oracle Cluster Registry configuration

# After creating one partition sd{a,b,c,d,e}1 on each of the five LUNs, the script succeeds:
[root@oradb27 10.2.0.5]# /u01/app/oracle/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oracle/oraInventory to 770.
Changing groupname of /u01/app/oracle/oraInventory to oinstall.
The execution of the script is complete
[root@oradb27 10.2.0.5]# /u01/app/oracle/product/10.2.0.5/crshome_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: oradb27 oradb27-priv oradb27
node 2: oradb28 oradb28-priv oradb28
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw3
Now formatting voting device: /dev/raw/raw4
Now formatting voting device: /dev/raw/raw5
Format of 3 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        oradb27
CSS is inactive on these nodes.
        oradb28
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
[root@oradb27 10.2.0.5]# 

Oracle's official fix for this error is in the MOS note: Executing root.sh errors with "Failed To Upgrade Oracle Cluster Registry Configuration" (Doc ID 466673.1):

Before running the root.sh on the first node in the cluster do the following:

  1. Download Patch:4679769 from Metalink (contains a patched version of clsfmt.bin).
  2. Do the following steps as stated in the patch README to fix the problem:
    Note: clsfmt.bin need only be replaced on the 1st node of the cluster

On node 2:

[root@oradb28 crshome_1]# /u01/app/oracle/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oracle/oraInventory to 770.
Changing groupname of /u01/app/oracle/oraInventory to oinstall.
The execution of the script is complete

[root@oradb28 crshome_1]# /u01/app/oracle/product/10.2.0.5/crshome_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: oradb27 oradb27-priv oradb27
node 2: oradb28 oradb28-priv oradb28
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        oradb27
        oradb28
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
/u01/app/oracle/product/10.2.0.5/crshome_1/jdk/jre//bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory
[root@oradb28 crshome_1]# 

To fix this error, edit the vipca and srvctl scripts under /u01/app/oracle/product/10.2.0.5/crshome_1/bin:

[root@oradb28 bin]# ls -l vipca 
-rwxr-xr-x 1 oracle oinstall 5343 Jan  3 09:44 vipca
[root@oradb28 bin]# ls -l srvctl 
-rwxr-xr-x 1 oracle oinstall 5828 Jan  3 09:44 srvctl
Add the following line to each of them:
unset LD_ASSUME_KERNEL
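The edit can be scripted as well. The sketch below works on a sample file that mimics the relevant lines (the sample contents are illustrative; apply the sed to the real vipca and srvctl scripts under the CRS home's bin directory):

```shell
# Insert "unset LD_ASSUME_KERNEL" right after the export that sets it
# (GNU sed). /tmp/vipca.sample stands in for the real script.
cat > /tmp/vipca.sample <<'EOF'
#!/bin/sh
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
EOF
sed -i '/^export LD_ASSUME_KERNEL/ a unset LD_ASSUME_KERNEL' /tmp/vipca.sample
grep -n 'unset LD_ASSUME_KERNEL' /tmp/vipca.sample   # 4:unset LD_ASSUME_KERNEL
```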

Then rerun /u01/app/oracle/product/10.2.0.5/crshome_1/root.sh:

[root@oradb28 bin]# /u01/app/oracle/product/10.2.0.5/crshome_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Checking to see if Oracle CRS stack is already configured
Oracle CRS stack is already configured and will be running under init(1M)

No further errors, but there was also no indication that vipca ran successfully.

3.4 Run vipca manually (may not be needed)

If vipca completed successfully during step 3.3, this step is unnecessary.
If it did not, run vipca manually on the last node. Doing so here hit another error:

[root@oradb28 bin]# ./vipca 
Error 0(Native: listNetInterfaces:[3])
  [Error 0(Native: listNetInterfaces:[3])]

Check the network interface information and register it manually:

[root@oradb28 bin]# ./oifcfg getif
[root@oradb28 bin]# ./oifcfg iflist
eth0  192.168.1.0
eth1  10.10.10.0
[root@oradb28 bin]# ifconfig 
eth0      Link encap:Ethernet  HWaddr 06:CB:72:01:07:88  
          inet addr:192.168.1.28  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1018747 errors:0 dropped:0 overruns:0 frame:0
          TX packets:542075 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:2196870487 (2.0 GiB)  TX bytes:43268497 (41.2 MiB)

eth1      Link encap:Ethernet  HWaddr 22:1A:5A:DE:C1:21  
          inet addr:10.10.10.28  Bcast:10.10.10.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:5343 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3656 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1315035 (1.2 MiB)  TX bytes:1219689 (1.1 MiB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:2193 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2193 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:65167 (63.6 KiB)  TX bytes:65167 (63.6 KiB)

[root@oradb28 bin]# ./oifcfg -h
PRIF-9: incorrect usage

Name:
        oifcfg - Oracle Interface Configuration Tool.

Usage:  oifcfg iflist [-p [-n]]
        oifcfg setif {-node <nodename> | -global} {<if_name>/<subnet>:<if_type>}...
        oifcfg getif [-node <nodename> | -global] [-if <if_name>[/<subnet>] [-type <if_type>] ]
        oifcfg delif [-node <nodename> | -global] [<if_name>[/<subnet>]]
        oifcfg [-help]

        <nodename> - name of the host, as known to a communications network
        <if_name>  - name by which the interface is configured in the system
        <subnet>   - subnet address of the interface
        <if_type>  - type of the interface {cluster_interconnect | public | storage}

[root@oradb28 bin]# ./oifcfg setif -global eth0/192.168.1.0:public
[root@oradb28 bin]# ./oifcfg getif
eth0  192.168.1.0  global  public
[root@oradb28 bin]# 
[root@oradb28 bin]# 
[root@oradb28 bin]# 
[root@oradb28 bin]# ./oifcfg setif -global eth1/10.10.10.0:cluster_interconnect
[root@oradb28 bin]# ./oifcfg getif
eth0  192.168.1.0  global  public
eth1  10.10.10.0  global  cluster_interconnect
[root@oradb28 bin]# 

Once oifcfg getif returns the interface configuration, running vipca again succeeds.

Returning to the clusterware installer, the installation now also reports success.
At this point the cluster status should be normal on both nodes:

[oracle@oradb27 bin]$ crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
[oracle@oradb27 bin]$ crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host        
----------------------------------------------------------------------
ora....b27.gsd application    0/5    0/0    ONLINE    ONLINE    oradb27     
ora....b27.ons application    0/3    0/0    ONLINE    ONLINE    oradb27     
ora....b27.vip application    0/0    0/0    ONLINE    ONLINE    oradb27     
ora....b28.gsd application    0/5    0/0    ONLINE    ONLINE    oradb28     
ora....b28.ons application    0/3    0/0    ONLINE    ONLINE    oradb28     
ora....b28.vip application    0/0    0/0    ONLINE    ONLINE    oradb28     
[oracle@oradb27 bin]$ 

[oracle@oradb28 ~]$ crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
[oracle@oradb28 ~]$ crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host        
----------------------------------------------------------------------
ora....b27.gsd application    0/5    0/0    ONLINE    ONLINE    oradb27     
ora....b27.ons application    0/3    0/0    ONLINE    ONLINE    oradb27     
ora....b27.vip application    0/0    0/0    ONLINE    ONLINE    oradb27     
ora....b28.gsd application    0/5    0/0    ONLINE    ONLINE    oradb28     
ora....b28.ons application    0/3    0/0    ONLINE    ONLINE    oradb28     
ora....b28.vip application    0/0    0/0    ONLINE    ONLINE    oradb28     
[oracle@oradb28 ~]$ 

4. Upgrade Clusterware

4.1 Extract the patchset

[root@oradb27 media]$ unzip p8202632_10205_Linux-x86-64.zip
[root@oradb27 media]$ cd Disk1/
[root@oradb27 Disk1]$ pwd
/u01/media/Disk1

4.2 Start the clusterware upgrade

Start the clusterware upgrade over an X session (XQuartz here):
ssh -X oracle@192.168.1.27

[root@oradb27 Disk1]$ ./runInstaller 

During the pre-upgrade checks, one kernel parameter fails the requirement:

Checking for rmem_default=1048576; found rmem_default=262144.   Failed <<<<

Adjust it in the /etc/sysctl.conf configuration file, then run sysctl -p to apply the change.
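A minimal sketch of the adjustment, assuming the value from the precheck message (edit as root):

```
# /etc/sysctl.conf -- raise the default socket receive buffer
net.core.rmem_default = 1048576
```

After saving the file, run sysctl -p to load the new value, then rerun the check.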

4.3 Run the scripts as root when prompted

    1.  Log in as the root user.
    2.  As the root user, perform the following tasks:

        a.  Shutdown the CRS daemons by issuing the following command:
                /u01/app/oracle/product/10.2.0.5/crshome_1/bin/crsctl stop crs
        b.  Run the shell script located at:
                /u01/app/oracle/product/10.2.0.5/crshome_1/install/root102.sh
            This script will automatically start the CRS daemons on the
            patched node upon completion.

    3.  After completing this procedure, proceed to the next node and repeat.

That is, run the following on each node in turn:

/u01/app/oracle/product/10.2.0.5/crshome_1/bin/crsctl stop crs
/u01/app/oracle/product/10.2.0.5/crshome_1/install/root102.sh

On node 1:

[root@oradb27 bin]# /u01/app/oracle/product/10.2.0.5/crshome_1/bin/crsctl stop crs
Stopping resources.
Successfully stopped CRS resources 
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
[root@oradb27 bin]# /u01/app/oracle/product/10.2.0.5/crshome_1/install/root102.sh
Creating pre-patch directory for saving pre-patch clusterware files
Completed patching clusterware files to /u01/app/oracle/product/10.2.0.5/crshome_1
Relinking some shared libraries.
Relinking of patched files is complete.
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Preparing to recopy patched init and RC scripts.
Recopying init and RC scripts.
Startup will be queued to init within 30 seconds.
Starting up the CRS daemons.
Waiting for the patched CRS daemons to start.
  This may take a while on some systems.
.
10205 patch successfully applied.
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully deleted 1 values from OCR.
Successfully deleted 1 keys from OCR.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: oradb27 oradb27-priv oradb27
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
clscfg -upgrade completed successfully
Creating '/u01/app/oracle/product/10.2.0.5/crshome_1/install/paramfile.crs' with data used for CRS configuration
Setting CRS configuration values in /u01/app/oracle/product/10.2.0.5/crshome_1/install/paramfile.crs
[root@oradb27 bin]# 

On node 2:

[root@oradb28 bin]# /u01/app/oracle/product/10.2.0.5/crshome_1/bin/crsctl stop crs
Stopping resources.
Successfully stopped CRS resources 
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
[root@oradb28 bin]# /u01/app/oracle/product/10.2.0.5/crshome_1/install/root102.sh
Creating pre-patch directory for saving pre-patch clusterware files
Completed patching clusterware files to /u01/app/oracle/product/10.2.0.5/crshome_1
Relinking some shared libraries.
Relinking of patched files is complete.
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Preparing to recopy patched init and RC scripts.
Recopying init and RC scripts.
Startup will be queued to init within 30 seconds.
Starting up the CRS daemons.
Waiting for the patched CRS daemons to start.
  This may take a while on some systems.
.
10205 patch successfully applied.
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully deleted 1 values from OCR.
Successfully deleted 1 keys from OCR.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 2: oradb28 oradb28-priv oradb28
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
clscfg -upgrade completed successfully
Creating '/u01/app/oracle/product/10.2.0.5/crshome_1/install/paramfile.crs' with data used for CRS configuration
Setting CRS configuration values in /u01/app/oracle/product/10.2.0.5/crshome_1/install/paramfile.crs
[root@oradb28 bin]# 

The upgrade succeeded. Confirm that the active CRS version is 10.2.0.5 and that the cluster status is normal:

[oracle@oradb27 bin]$ crsctl query crs activeversion
CRS active version on the cluster is [10.2.0.5.0]

[oracle@oradb28 ~]$ crsctl query crs activeversion
CRS active version on the cluster is [10.2.0.5.0]

[oracle@oradb27 ~]$ crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host        
----------------------------------------------------------------------
ora....b27.gsd application    0/5    0/0    ONLINE    ONLINE    oradb27     
ora....b27.ons application    0/3    0/0    ONLINE    ONLINE    oradb27     
ora....b27.vip application    0/0    0/0    ONLINE    ONLINE    oradb27     
ora....b28.gsd application    0/5    0/0    ONLINE    ONLINE    oradb28     
ora....b28.ons application    0/3    0/0    ONLINE    ONLINE    oradb28     
ora....b28.vip application    0/0    0/0    ONLINE    ONLINE    oradb28     
[oracle@oradb27 ~]$ 

At this point, the Oracle clusterware installation (10.2.0.1) and upgrade (10.2.0.5) are complete.

Linux 平台 Oracle 10gR2(10.2.0.5)RAC 安装 Part1:准备工作

环境:OEL 5.7 + Oracle 10.2.0.5 RAC

1. 实施前准备工作

  • 1.1 服务器安装操作系统
  • 1.2 Oracle 安装介质
  • 1.3 共享存储规划
  • 1.4 网络规划分配

2. 主机配置

  • 2.1 使用 yum 安装 oracle-validated 包来简化主机配置的部分工作
  • 2.2 共享存储配置
  • 2.3 配置 /etc/hosts
  • 2.4 配置 Oracle 用户等价性
  • 2.5 创建软件目录
  • 2.6 配置用户环境变量
  • 2.7 关闭各节点主机防火墙和 SELinux
  • 2.8 各节点系统时间校对

Linux 平台 Oracle 10gR2 RAC 安装指导:
Part1:Linux 平台 Oracle 10gR2(10.2.0.5)RAC 安装 Part1:准备工作
Part2:Linux 平台 Oracle 10gR2(10.2.0.5)RAC 安装 Part2:clusterware 安装和升级
Part3:Linux 平台 Oracle 10gR2(10.2.0.5)RAC 安装 Part3:db 安装和升级

1. 实施前准备工作

1.1 服务器安装操作系统

配置完全相同的两台服务器,安装相同版本的 Linux 操作系统。留存系统光盘或者镜像文件。
我这里是 OEL5.7,系统目录大小均一致。对应 OEL5.7 的系统镜像文件放在服务器上,供后面配置本地 yum 使用。

1.2 Oracle 安装介质

Oracle 10.2.0.1 版本的 clusterware 和 db,以及 10.2.0.5 的升级包。

 

-rwxr-xr-x 1 root root 302M 12 月 24 13:07 10201_clusterware_linux_x86_64.cpio.gz
-rwxr-xr-x 1 root root 724M 12 月 24 13:08 10201_database_linux_x86_64.cpio.gz
-rwxr-xr-x 1 root root 1.2G 12 月 24 13:10 p8202632_10205_Linux-x86-64.zip

这个用 MOS 账号自己去 support.oracle.com 下载,然后只需要上传到节点 1 即可。

1.3 共享存储规划

从存储中划分出两台主机可以同时看到的共享 LUN。
我这里自己的实验环境是使用 openfiler 模拟出共享 LUN:
5 个 100M 大小 LUN;用于 OCR,votedisk;
3 个 10G 大小 LUN;用于 DATA;
2 个 5G 大小 LUN;用于 FRA。

openfiler 使用可参考:Openfiler 配置 RAC 共享存储

1.4 网络规划分配

公有网络 以及 私有网络。
公有网络:物理网卡 eth0(public IP,VIP),需要 4 个 IP 地址。
私有网络:物理网卡 eth1(private IP),需要 2 个内部 IP 地址。

实际生产环境一般服务器都至少有 4 块网卡。建议是两两 bonding 后分别作为公有网络和私有网络。

2. 主机配置

2.1 使用 yum 安装 oracle-validated 包来简化主机配置的部分工作

由于系统环境是 OEL5.7,可以简化依赖包安装、内核参数调整,用户和组创建等工作,可参考:OEL 上使用 yum install oracle-validated 简化主机配置工作

2.2 共享存储配置:

我这里 openfiler 所在主机的 IP 地址为 192.168.1.12。归划的 10 块 LUN 全部映射到 iqn.2006-01.com.openfiler:rac10g 上。

[root@oradb28 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.1.12
192.168.1.12:3260,1 iqn.2006-01.com.openfiler:rac10g

#手工登录 iscsi 目标
iscsiadm -m node -T iqn.2006-01.com.openfiler:rac10g -p 192.168.1.12 -l

#配置自动登录
iscsiadm -m node -T iqn.2006-01.com.openfiler:rac10g -p 192.168.1.12 --op update -n node.startup -v automatic

#重启 iscsi 服务
service iscsi stop
service iscsi start

注意:安装 10g RAC,要确保共享设备上划分的 LUN 要在所有节点上被识别为相同设备名称。

[root@oradb27 ~]# ls -lh /dev/sd*
brw-r----- 1 root disk 8,   0 Jan  2 22:40 /dev/sda
brw-r----- 1 root disk 8,  16 Jan  2 22:40 /dev/sdb
brw-r----- 1 root disk 8,  32 Jan  2 22:40 /dev/sdc
brw-r----- 1 root disk 8,  48 Jan  2 22:40 /dev/sdd
brw-r----- 1 root disk 8,  64 Jan  2 22:40 /dev/sde
brw-r----- 1 root disk 8,  80 Jan  2 22:40 /dev/sdf
brw-r----- 1 root disk 8,  96 Jan  2 22:40 /dev/sdg
brw-r----- 1 root disk 8, 112 Jan  2 22:40 /dev/sdh
brw-r----- 1 root disk 8, 128 Jan  2 22:40 /dev/sdi
brw-r----- 1 root disk 8, 144 Jan  2 22:40 /dev/sdj

[root@oradb28 ~]# ls -lh /dev/sd*
brw-r----- 1 root disk 8,   0 Jan  2 22:41 /dev/sda
brw-r----- 1 root disk 8,  16 Jan  2 22:41 /dev/sdb
brw-r----- 1 root disk 8,  32 Jan  2 22:41 /dev/sdc
brw-r----- 1 root disk 8,  48 Jan  2 22:41 /dev/sdd
brw-r----- 1 root disk 8,  64 Jan  2 22:41 /dev/sde
brw-r----- 1 root disk 8,  80 Jan  2 22:41 /dev/sdf
brw-r----- 1 root disk 8,  96 Jan  2 22:41 /dev/sdg
brw-r----- 1 root disk 8, 112 Jan  2 22:41 /dev/sdh
brw-r----- 1 root disk 8, 128 Jan  2 22:41 /dev/sdi
brw-r----- 1 root disk 8, 144 Jan  2 22:41 /dev/sdj

Here sda, sdb, sdc, sdd, and sde are the 100M LUNs; create a single partition on each of these 5 LUNs. (In my testing, if the disks are bound as raw devices without partitioning, running root.sh after the clusterware installation fails with "Failed to upgrade Oracle Cluster Registry configuration"; after partitioning and binding the partitions as raw devices instead, root.sh runs through normally.)
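That partitioning step can be scripted; a minimal sketch (device names assumed to match the listing above; the fdisk line is commented out so the loop is safe to dry-run):

```shell
# Create one primary partition spanning each 100M LUN. fdisk reads its
# commands from stdin: n(ew partition), p(rimary), partition 1, default
# first/last cylinder, w(rite). Run on node 1 only, then partprobe on node 2.
for dev in sda sdb sdc sdd sde; do
  echo "partitioning /dev/$dev"
  # printf 'n\np\n1\n\n\nw\n' | fdisk "/dev/$dev"   # uncomment on the real host
done
```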

[root@oradb27 ~]# ls -lh /dev/sd*
brw-r----- 1 root disk 8,  0 Jan  3 09:36 /dev/sda
brw-r----- 1 root disk 8,  1 Jan  3 09:36 /dev/sda1
brw-r----- 1 root disk 8, 16 Jan  3 09:36 /dev/sdb
brw-r----- 1 root disk 8, 17 Jan  3 09:36 /dev/sdb1
brw-r----- 1 root disk 8, 32 Jan  3 09:36 /dev/sdc
brw-r----- 1 root disk 8, 33 Jan  3 09:36 /dev/sdc1
brw-r----- 1 root disk 8, 48 Jan  3 09:36 /dev/sdd
brw-r----- 1 root disk 8, 49 Jan  3 09:36 /dev/sdd1
brw-r----- 1 root disk 8, 64 Jan  3 09:36 /dev/sde
brw-r----- 1 root disk 8, 65 Jan  3 09:36 /dev/sde1

[root@oradb28 crshome_1]# ls -lh /dev/sd*
brw-r----- 1 root disk 8,  0 Jan  3 09:36 /dev/sda
brw-r----- 1 root disk 8,  1 Jan  3 09:36 /dev/sda1
brw-r----- 1 root disk 8, 16 Jan  3 09:36 /dev/sdb
brw-r----- 1 root disk 8, 17 Jan  3 09:36 /dev/sdb1
brw-r----- 1 root disk 8, 32 Jan  3 09:36 /dev/sdc
brw-r----- 1 root disk 8, 33 Jan  3 09:36 /dev/sdc1
brw-r----- 1 root disk 8, 48 Jan  3 09:36 /dev/sdd
brw-r----- 1 root disk 8, 49 Jan  3 09:36 /dev/sdd1
brw-r----- 1 root disk 8, 64 Jan  3 09:36 /dev/sde
brw-r----- 1 root disk 8, 65 Jan  3 09:36 /dev/sde1

1) Bind raw devices with udev, for use by the OCR and voting disks

Edit the configuration file and append the following:

# vi /etc/udev/rules.d/60-raw.rules
ACTION=="add", KERNEL=="sda1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", KERNEL=="sdc1", RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add", KERNEL=="sdd1", RUN+="/bin/raw /dev/raw/raw4 %N"
ACTION=="add", KERNEL=="sde1", RUN+="/bin/raw /dev/raw/raw5 %N"
ACTION=="add", KERNEL=="raw*", OWNER="oracle", GROUP="oinstall", MODE="0660"

Run start_udev:

[root@oradb27 rules.d]# start_udev
Starting udev:                                             [OK]
[root@oradb27 rules.d]# ls -l /dev/raw*
crw-rw---- 1 oracle oinstall 162, 0 Jan  2 22:37 /dev/rawctl

/dev/raw:
total 0
crw-rw---- 1 oracle oinstall 162, 1 Jan  2 23:11 raw1
crw-rw---- 1 oracle oinstall 162, 2 Jan  2 23:11 raw2
crw-rw---- 1 oracle oinstall 162, 3 Jan  2 23:11 raw3
crw-rw---- 1 oracle oinstall 162, 4 Jan  2 23:11 raw4
crw-rw---- 1 oracle oinstall 162, 5 Jan  2 23:11 raw5
[root@oradb27 rules.d]# 

Copy the 60-raw.rules configuration file to node 2:

[root@oradb27 rules.d]# scp /etc/udev/rules.d/60-raw.rules oradb28:/etc/udev/rules.d/

Then run start_udev on node 2.

Note: if the raw devices were used by a previous installation, you may need to zero out their headers with dd:

dd if=/dev/zero of=/dev/raw/raw1 bs=1048576 count=10
dd if=/dev/zero of=/dev/raw/raw2 bs=1048576 count=10
dd if=/dev/zero of=/dev/raw/raw3 bs=1048576 count=10
dd if=/dev/zero of=/dev/raw/raw4 bs=1048576 count=10
dd if=/dev/zero of=/dev/raw/raw5 bs=1048576 count=10

2) Bind ASM devices with udev, for use by the DATA and FRA disk groups

for i in f g h i j;
do
echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id -g -u -s %p\", RESULT==\"`scsi_id -g -u -s /block/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"oracle\", GROUP=\"oinstall\", MODE=\"0660\""
done

The session looks like this:

[root@oradb27 rules.d]# for i in f g h i j;
> do
> echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id -g -u -s %p\", RESULT==\"`scsi_id -g -u -s /block/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"oracle\", GROUP=\"oinstall\", MODE=\"0660\""
> done
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="14f504e46494c455279366c36366a2d5a4243752d58394a33", NAME="asm-diskf", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="14f504e46494c45525453586652542d67786f682d594c4a66", NAME="asm-diskg", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="14f504e46494c455232586c3151572d62504e412d3343547a", NAME="asm-diskh", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="14f504e46494c45527061334151682d4666656d2d5a6a4c67", NAME="asm-diski", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="14f504e46494c4552495649757a352d675251532d47744353", NAME="asm-diskj", OWNER="oracle", GROUP="oinstall", MODE="0660"
[root@oradb27 rules.d]# 

Paste the generated lines into a new rules file:

[root@oradb27 rules.d]# vi 99-oracle-asmdevices.rules

[root@oradb27 rules.d]# start_udev
Starting udev:                                             [OK]
[root@oradb27 rules.d]# ls -lh /dev/asm*
brw-rw---- 1 oracle oinstall 8,  80 Jan  2 23:18 /dev/asm-diskf
brw-rw---- 1 oracle oinstall 8,  96 Jan  2 23:18 /dev/asm-diskg
brw-rw---- 1 oracle oinstall 8, 112 Jan  2 23:18 /dev/asm-diskh
brw-rw---- 1 oracle oinstall 8, 128 Jan  2 23:18 /dev/asm-diski
brw-rw---- 1 oracle oinstall 8, 144 Jan  2 23:18 /dev/asm-diskj

# Copy 99-oracle-asmdevices.rules to node 2 and run start_udev there
[root@oradb27 rules.d]# scp 99-oracle-asmdevices.rules oradb28:/etc/udev/rules.d/99-oracle-asmdevices.rules

[root@oradb28 ~]# start_udev
Starting udev:                                             [OK]
[root@oradb28 ~]# ls -l /dev/asm*
brw-rw---- 1 oracle oinstall 8,  80 Jan  2 23:20 /dev/asm-diskf
brw-rw---- 1 oracle oinstall 8,  96 Jan  2 23:20 /dev/asm-diskg
brw-rw---- 1 oracle oinstall 8, 112 Jan  2 23:20 /dev/asm-diskh
brw-rw---- 1 oracle oinstall 8, 128 Jan  2 23:20 /dev/asm-diski
brw-rw---- 1 oracle oinstall 8, 144 Jan  2 23:20 /dev/asm-diskj

2.3 Configure /etc/hosts

Populate /etc/hosts on node 1 according to the network plan:

#public ip
192.168.1.27  oradb27
192.168.1.28  oradb28
#private ip
10.10.10.27   oradb27-priv
10.10.10.28   oradb28-priv
#virtual ip
192.168.1.57  oradb27-vip
192.168.1.58  oradb28-vip

Then copy the /etc/hosts configuration file to node 2 with scp:

scp /etc/hosts oradb28:/etc/
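After copying, it is worth confirming that all six planned names are present on both nodes. A minimal sketch (the HOSTS_FILE variable is an assumption added here so the check can be dry-run against a sample file; on the real nodes it defaults to /etc/hosts):

```shell
# Check that every planned hostname appears in the hosts file.
HOSTS_FILE=${HOSTS_FILE:-/etc/hosts}
for h in oradb27 oradb28 oradb27-priv oradb28-priv oradb27-vip oradb28-vip; do
  if grep -Eq "[[:space:]]$h([[:space:]]|\$)" "$HOSTS_FILE"; then
    echo "ok: $h"
  else
    echo "MISSING: $h"
  fi
done
```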

2.4 Configure Oracle user equivalence

# Run as the oracle user on all nodes:
ssh-keygen -q -t rsa  -N "" -f  ~/.ssh/id_rsa

# Run on node 1:
ssh 192.168.1.27 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh 192.168.1.28 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

chmod 600 ~/.ssh/authorized_keys

scp ~/.ssh/authorized_keys  192.168.1.28:~/.ssh/

# Verify SSH equivalence on all nodes:
ssh 192.168.1.27 date;ssh 192.168.1.28 date;
ssh oradb27 date;ssh oradb28 date;
ssh oradb27-priv date;ssh oradb28-priv date;

If you have questions about the SSH mutual-trust setup, see: A Walkthrough of Configuring Linux SSH Mutual Trust.

2.5 Create the software directories

# Run as root on all nodes:
mkdir -p /u01/app/oracle/product/10.2.0.5/dbhome_1
mkdir -p /u01/app/oracle/product/10.2.0.5/crshome_1
chown -R oracle:oinstall /u01/app

2.6 Configure the user environment variables

Node 1: vi /home/oracle/.bash_profile

export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/10.2.0.5/dbhome_1
export ORA_CRS_HOME=/u01/app/oracle/product/10.2.0.5/crshome_1
export ORACLE_SID=jyrac1
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export PATH=$ORACLE_HOME/bin:$ORA_CRS_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
alias sql="sqlplus \"/as sysdba\""

Node 2: vi /home/oracle/.bash_profile

export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/10.2.0.5/dbhome_1
export ORA_CRS_HOME=/u01/app/oracle/product/10.2.0.5/crshome_1
export ORACLE_SID=jyrac2
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export PATH=$ORACLE_HOME/bin:$ORA_CRS_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
alias sql="sqlplus \"/as sysdba\""

2.7 Disable the firewall and SELinux on all nodes

Check and disable iptables and SELinux on each node:

service iptables status
service iptables stop
chkconfig iptables off

getenforce
setenforce 0
vi /etc/selinux/config
# change SELINUX=enforcing to SELINUX=disabled

2.8 Synchronize the system time across nodes

service ntpd stop
date 
# If the time is wrong, set it with the following syntax (format MMDDhhmmYYYY)
date 072310472015  # sets the date to 2015-07-23 10:47:00
hwclock -w
hwclock -r
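The argument to date above packs the fields as MMDDhhmmYYYY. A quick sanity check of a stamp before touching the system clock (pure string slicing with bash substring expansion, safe to run anywhere):

```shell
# Decompose a date(1) stamp laid out as MMDDhhmmYYYY.
stamp=072310472015
printf 'month=%s day=%s hour=%s minute=%s year=%s\n' \
  "${stamp:0:2}" "${stamp:2:2}" "${stamp:4:2}" "${stamp:6:2}" "${stamp:8:4}"
# prints: month=07 day=23 hour=10 minute=47 year=2015
```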

At this point, the host configuration preparation work is complete.


Linux Platform Oracle 10gR2 (10.2.0.5) RAC Installation Part 3: DB Installation and Upgrade
Environment: OEL 5.7 + Oracle 10.2.0.5 RAC

5. Install the Database software

  • 5.1 Unpack the installation media
  • 5.2 Install the db software
  • 5.3 Run the script as root

6. Upgrade the Database software

  • 6.1 Upgrade the db software
  • 6.2 Run the script as root

7. Create the database

  • 7.1 Create the listeners
  • 7.2 Create ASM
  • 7.3 Create the database

Linux Platform Oracle 10gR2 RAC installation guide:
Part 1: Linux Platform Oracle 10gR2 (10.2.0.5) RAC Installation Part 1: Preparation
Part 2: Linux Platform Oracle 10gR2 (10.2.0.5) RAC Installation Part 2: Clusterware Installation and Upgrade
Part 3: Linux Platform Oracle 10gR2 (10.2.0.5) RAC Installation Part 3: DB Installation and Upgrade

5. Install the Database software

5.1 Unpack the installation media

[oracle@oradb27 ~]$ cd /u01/media/
[oracle@oradb27 media]$ gunzip 10201_database_linux_x86_64.cpio.gz 
[oracle@oradb27 media]$ cpio -idmv < 10201_database_linux_x86_64.cpio 
[oracle@oradb27 media]$ cd database/
[oracle@oradb27 database]$ ls
doc  install  response  runInstaller  stage  welcome.html
[oracle@oradb27 database]$ cd install/
[oracle@oradb27 install]$ ls
addLangs.sh  addNode.sh  images  lsnodes  oneclick.properties  oraparam.ini  oraparamsilent.ini  resource  response  unzip
[oracle@oradb27 install]$ vi oraparam.ini 
# Find the [Certified Versions] section, append ,redhat-5 to the end of the Linux= line, then save and exit
[Certified Versions]
Linux=redhat-3,SUSE-9,redhat-4,UnitedLinux-1.0,asianux-1,asianux-2,redhat-5
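The same edit can be scripted instead of done in vi; a minimal sketch (assumes a single Linux= line as shown above; GNU sed keeps a .bak backup):

```shell
# Append ,redhat-5 to the end of the Linux= line in oraparam.ini.
sed -i.bak '/^Linux=/ s/$/,redhat-5/' oraparam.ini
grep '^Linux=' oraparam.ini
```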

5.2 Install the db software

Connect to node 1 with X forwarding via XQuartz and launch the graphical db installer.

[oracle@oradb27 database]$ pwd
/u01/media/database
[oracle@oradb27 database]$ ls
doc  install  response  runInstaller  stage  welcome.html
[oracle@oradb27 database]$ ./runInstaller 

5.3 Run the script as root

As root, run the following script on all nodes, as prompted by the installer:

/u01/app/oracle/product/10.2.0.5/dbhome_1/root.sh

Then continue to the next step; the installer reports that the installation is complete:

The following J2EE Applications have been deployed and are accessible at the URLs listed below.

iSQL*Plus URL:
http://oradb27:5561/isqlplus

iSQL*Plus DBA URL:
http://oradb27:5561/isqlplus/dba

At this point the db 10.2.0.1 software installation is complete.

6. Upgrade the Database software

6.1 Upgrade the db software

Unzip the 10.2.0.5 patchset p8202632_10205_Linux-x86-64.zip (it extracts into a Disk1 directory) and launch the installer:

[oracle@oradb27 Disk1]$ pwd
/u01/media/Disk1
[oracle@oradb27 Disk1]$ ls
install  patch_note.htm  response  runInstaller  stage
[oracle@oradb27 Disk1]$ ./runInstaller 

6.2 Run the script as root

As root, run the following script on all nodes:

/u01/app/oracle/product/10.2.0.5/dbhome_1/root.sh

After the script finishes, return to the GUI to complete the installation:

The iSQL*Plus URL is:
http://oradb27:5560/isqlplus

The iSQL*Plus DBA URL is:
http://oradb27:5560/isqlplus/dba

At this point the db software has been successfully upgraded to version 10.2.0.5.

7. Create the database

7.1 Create the listeners

Create the listeners with netca.

Once it succeeds, checking the cluster resources shows the new listener resources:

[oracle@oradb27 Disk1]$ crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host        
----------------------------------------------------------------------
ora....27.lsnr application    0/5    0/0    ONLINE    ONLINE    oradb27     
ora....b27.gsd application    0/5    0/0    ONLINE    ONLINE    oradb27     
ora....b27.ons application    0/3    0/0    ONLINE    ONLINE    oradb27     
ora....b27.vip application    0/0    0/0    ONLINE    ONLINE    oradb27     
ora....28.lsnr application    0/5    0/0    ONLINE    ONLINE    oradb28     
ora....b28.gsd application    0/5    0/0    ONLINE    ONLINE    oradb28     
ora....b28.ons application    0/3    0/0    ONLINE    ONLINE    oradb28     
ora....b28.vip application    0/0    0/0    ONLINE    ONLINE    oradb28     
[oracle@oradb27 Disk1]$ 

7.2 Create ASM

Configure the ASM disk groups with dbca, adding the DATA and FRA disk groups.

The ASM instances are created as part of configuring the disk groups; once that succeeds, the cluster resources include the ASM instance resources:

[oracle@oradb27 Disk1]$ crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host        
----------------------------------------------------------------------
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    oradb27     
ora....27.lsnr application    0/5    0/0    ONLINE    ONLINE    oradb27     
ora....b27.gsd application    0/5    0/0    ONLINE    ONLINE    oradb27     
ora....b27.ons application    0/3    0/0    ONLINE    ONLINE    oradb27     
ora....b27.vip application    0/0    0/0    ONLINE    ONLINE    oradb27     
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    oradb28     
ora....28.lsnr application    0/5    0/0    ONLINE    ONLINE    oradb28     
ora....b28.gsd application    0/5    0/0    ONLINE    ONLINE    oradb28     
ora....b28.ons application    0/3    0/0    ONLINE    ONLINE    oradb28     
ora....b28.vip application    0/0    0/0    ONLINE    ONLINE    oradb28  

7.3 Create the database

Create the database with dbca.

After it completes, the cluster resources additionally show the database instance resources and the database resource:

[oracle@oradb27 Disk1]$ crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host        
----------------------------------------------------------------------
ora.jyrac.db   application    0/0    0/1    ONLINE    ONLINE    oradb28     
ora....c1.inst application    0/5    0/0    ONLINE    ONLINE    oradb27     
ora....c2.inst application    0/5    0/0    ONLINE    ONLINE    oradb28     
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    oradb27     
ora....27.lsnr application    0/5    0/0    ONLINE    ONLINE    oradb27     
ora....b27.gsd application    0/5    0/0    ONLINE    ONLINE    oradb27     
ora....b27.ons application    0/3    0/0    ONLINE    ONLINE    oradb27     
ora....b27.vip application    0/0    0/0    ONLINE    ONLINE    oradb27     
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    oradb28     
ora....28.lsnr application    0/5    0/0    ONLINE    ONLINE    oradb28     
ora....b28.gsd application    0/5    0/0    ONLINE    ONLINE    oradb28     
ora....b28.ons application    0/3    0/0    ONLINE    ONLINE    oradb28     
ora....b28.vip application    0/0    0/0    ONLINE    ONLINE    oradb28     
[oracle@oradb27 Disk1]$ 

Verify the current database version and availability:

[oracle@oradb27 Disk1]$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.5.0 - Production on Tue Jan 3 20:59:21 2017

Copyright (c) 1982, 2010, Oracle.  All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options

SQL> select open_mode from v$database;

OPEN_MODE
----------
READ WRITE

SQL> 
SQL> col comp_name for a45
SQL> set linesize 120
SQL> select comp_name, status, version from dba_registry;

COMP_NAME                                     STATUS                 VERSION
--------------------------------------------- ---------------------- ------------------------------
Spatial                                       VALID                  10.2.0.5.0
Oracle interMedia                             VALID                  10.2.0.5.0
OLAP Catalog                                  VALID                  10.2.0.5.0
Oracle Enterprise Manager                     VALID                  10.2.0.5.0
Oracle XML Database                           VALID                  10.2.0.5.0
Oracle Text                                   VALID                  10.2.0.5.0
Oracle Expression Filter                      VALID                  10.2.0.5.0
Oracle Rules Manager                          VALID                  10.2.0.5.0
Oracle Workspace Manager                      VALID                  10.2.0.5.0
Oracle Data Mining                            VALID                  10.2.0.5.0
Oracle Database Catalog Views                 VALID                  10.2.0.5.0

COMP_NAME                                     STATUS                 VERSION
--------------------------------------------- ---------------------- ------------------------------
Oracle Database Packages and Types            VALID                  10.2.0.5.0
JServer JAVA Virtual Machine                  VALID                  10.2.0.5.0
Oracle XDK                                    VALID                  10.2.0.5.0
Oracle Database Java Packages                 VALID                  10.2.0.5.0
OLAP Analytic Workspace                       VALID                  10.2.0.5.0
Oracle OLAP API                               VALID                  10.2.0.5.0
Oracle Real Application Clusters              VALID                  10.2.0.5.0

18 rows selected.

This completes the Linux platform Oracle 10gR2 (10.2.0.5) RAC installation series.


Permanent link to this article: http://www.linuxidc.com/Linux/2017-01/139158.htm

Copyright notice: original article by 星锅, published on 2022-01-22, 38584 characters in total.
Reuse: unless otherwise stated, articles on this site are published under the CC 4.0 license; please credit the source when republishing.