Hey all!
In this post we will show what to do when one of the Voting Disks is lost. You can recreate DiskGroup GRID with your cluster online and with no downtime, and we will walk through how to do that.
When GI was installed, we confirmed that we had 3 Voting Disks. As the grid user:
[grid@dbnode01 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   669c64a818e34ff5bf0841ba735ebd35 (ORCL:OBQ_GRID1_003) [OBQ_GRID1]
 2. ONLINE   9a686c8abbc34fbfbfd2072de584aedb (ORCL:OBQ_GRID1_002) [OBQ_GRID1]
 3. ONLINE   ca3be597470a4fc9bf22dfbfe116130a (ORCL:OBQ_GRID1_001) [OBQ_GRID1]
Located 3 voting disk(s).
A few months later, the customer told us that they had only 2 Voting Disks:
[grid@dbnode01 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   669c64a818e34ff5bf0841ba735ebd35 (ORCL:OBQ_GRID1_003) [OBQ_GRID1]
 2. ONLINE   ca3be597470a4fc9bf22dfbfe116130a (ORCL:OBQ_GRID1_001) [OBQ_GRID1]
Located 2 voting disk(s).
This probably happened because of a failure (or some corruption) in the LUN where that Voting Disk resides. When this happens, the affected Voting Disk changes its status to OFFLINE. If the cluster is restarted at some point (even during a maintenance window), the OFFLINE Voting Disk is automatically removed.
Our goal is to recreate DiskGroup GRID so the cluster keeps running with 3 Voting Disks.
First we verify the cluster status and make sure the cluster stack is online:
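If you want to confirm which disk dropped out of the DiskGroup before going any further, the ASM disk views usually show it. Here is a minimal sketch (the ORCL:OBQ_GRID1_* paths are the ones from this environment; adjust them for yours), run as the grid user with sqlplus / as sysasm:

-- Minimal sketch: list the GRID disks and their status.
-- A healthy member shows HEADER_STATUS=MEMBER and MOUNT_STATUS=CACHED;
-- the lost disk typically shows something else (or does not show up at all
-- if the LUN is gone).
select d.path, d.header_status, d.mount_status, d.mode_status, g.name diskgroup
  from v$asm_disk d
  left join v$asm_diskgroup g on g.group_number = d.group_number
 where d.path like 'ORCL:OBQ_GRID1%';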
[grid@dbnode01 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
Then we can temporarily move the Voting Disks to DiskGroup DATA:
[grid@dbnode01 ~]$ crsctl replace votedisk +OBQ_DATA
Successful addition of voting disk dde951bdb42b4fa6bf744a3d78e7c962.
Successful deletion of voting disk 669c64a818e34ff5bf0841ba735ebd35.
Successful deletion of voting disk ca3be597470a4fc9bf22dfbfe116130a.
Successfully replaced voting disk group with +OBQ_DATA.
CRS-4266: Voting file(s) successfully replaced
We need to check if the Voting Disk has really moved:
[grid@dbnode01 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   dde951bdb42b4fa6bf744a3d78e7c962 (ORCL:OBQ_DATA_001) [OBQ_DATA]
Located 1 voting disk(s).
Now, as the root user, we need to add the OCR to DiskGroup DATA and then remove it from DiskGroup GRID:
[root@dbnode01 ~]# ocrconfig -add +OBQ_DATA
[root@dbnode01 ~]# ocrconfig -delete +OBQ_GRID1
Then we can verify that the OCR is on DiskGroup DATA:
[root@dbnode01 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       2272
         Available space (kbytes) :     407296
         ID                       : 2139179401
         Device/File Name         :  +OBQ_DATA
                                    Device/File integrity check succeeded
         Device/File not configured
         Device/File not configured
         Device/File not configured
         Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check succeeded
Back as the grid user, we create the spfile in DiskGroup DATA:
[grid@dbnode01 ~]$ sqlplus / as sysasm

SQL*Plus: Release 12.2.0.1.0 Production on Thu Jun 24 19:08:01 2019

Copyright (c) 1982, 2016, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> create spfile='+OBQ_DATA' from memory;

File created.
We need to check that ASM was adjusted to use the new spfile:
[grid@dbnode01 ~]$ asmcmd spget
+OBQ_DATA/crsdrlq2db/ASMPARAMETERFILE/registry.253.1010862493
Now we need to check the ASM configuration, and we can see that the password file is still on DiskGroup GRID:
[grid@dbnode01 ~]$ srvctl config asm
ASM home: <CRS home>
Password file: +OBQ_GRID1/orapwASM
Backup of Password file:
ASM listener: LISTENER,LISTENER_OBQ
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM,ASMNET2LSNR_ASM
Then we copy the password file to DiskGroup DATA:
[grid@dbnode01 ~]$ asmcmd
ASMCMD> pwcopy --asm +OBQ_GRID1/orapwASM +OBQ_DATA/orapwASM -f
copying +OBQ_GRID1/orapwASM -> +OBQ_DATA/orapwASM
ASMCMD>
We check the ASM configuration again, and now the password file is on DiskGroup DATA:
[grid@dbnode01 ~]$ srvctl config asm
ASM home: <CRS home>
Password file: +OBQ_DATA/orapwASM
Backup of Password file:
ASM listener: LISTENER,LISTENER_OBQ
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM,ASMNET2LSNR_ASM
We need to check that the DiskGroup is mounted on all cluster nodes:
[grid@dbnode01 ~]$ srvctl status diskgroup -diskgroup OBQ_GRID1
Disk Group OBQ_GRID1 is running on dbnode01,dbnode02
We need to check whether any file is open in DiskGroup GRID. We cannot have any open file in DiskGroup GRID if we want to drop it. It is also very important to make sure that we have moved all files to a different DiskGroup:
[grid@dbnode01 ~]$ asmcmd
ASMCMD> lsof
DB_Name  Instance_Name  Path
+ASM     +ASM1          +OBQ_DATA.255.1010862395
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/accman.286.1010760859
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/accman_idx.287.1010760861
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/archive.288.1010760861
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/archive_idx.289.1010760861
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/bsms.290.1010760861
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/bsms_idx.291.1010760861
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/configuration.292.1010760861
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/configuration_idx.293.1010760861
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/flow_qual_data_01.294.1010760861
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/flow_qual_glob_ttoll_id_2015.295.1010760861
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/flow_qual_glob_ttoll_id_2016.296.1010760861
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/flow_qual_index_01.297.1010760863
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/flow_qual_ttoll_trans_2015_12.298.1010760863
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/flow_qual_ttoll_trans_2016_01.299.1010760863
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/flow_qual_ttoll_trans_2016_02.300.1010760863
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/flow_qual_ttoll_trans_2016_03.301.1010760863
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/flow_qual_ttoll_trans_2016_04.302.1010760863
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/flow_qual_ttoll_trans_2016_05.303.1010760863
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/flow_qual_ttoll_trans_2016_06.304.1010760863
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/flow_qual_ttoll_trans_2016_07.305.1010760863
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/flow_qual_ttoll_trans_2016_08.306.1010760863
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/flow_qual_ttoll_trans_2016_09.307.1010760863
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/flow_qual_ttoll_trans_2016_10.308.1010760863
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/flow_qual_ttoll_trans_2016_11.309.1010760863
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/flow_qual_ttoll_trans_2016_12.310.1010760863
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/fuse_sw.311.1010760863
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/oboqueues.312.1010760863
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/operation.313.1010760863
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/operation.322.1010766479
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/operation_idx.314.1010760865
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/records.315.1010760865
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/report.316.1010760865
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/report_idx.317.1010760865
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/sysaux.283.1010760845
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/system.281.1010760843
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/system.321.1010766449
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/system.324.1010766651
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/system.325.1010766721
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/system.326.1010768715
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/undotbs1.282.1010760845
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/undotbs2.318.1010760865
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/users.284.1010760851
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/usertbs.285.1010760853
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/TEMPFILE/tempts1.323.1010766517
OBQ_RAC  OBQ1           +OBQ_REDO1/OBQ_RAC/CONTROLFILE/current.256.1010760835
OBQ_RAC  OBQ1           +OBQ_REDO1/OBQ_RAC/ONLINELOG/group_1.258.1010760883
OBQ_RAC  OBQ1           +OBQ_REDO1/OBQ_RAC/ONLINELOG/group_10.267.1010761007
OBQ_RAC  OBQ1           +OBQ_REDO1/OBQ_RAC/ONLINELOG/group_11.268.1010761021
OBQ_RAC  OBQ1           +OBQ_REDO1/OBQ_RAC/ONLINELOG/group_12.269.1010761037
OBQ_RAC  OBQ1           +OBQ_REDO1/OBQ_RAC/ONLINELOG/group_13.270.1010761053
OBQ_RAC  OBQ1           +OBQ_REDO1/OBQ_RAC/ONLINELOG/group_14.271.1010761065
OBQ_RAC  OBQ1           +OBQ_REDO1/OBQ_RAC/ONLINELOG/group_2.259.1010760897
OBQ_RAC  OBQ1           +OBQ_REDO1/OBQ_RAC/ONLINELOG/group_3.260.1010760909
OBQ_RAC  OBQ1           +OBQ_REDO1/OBQ_RAC/ONLINELOG/group_4.261.1010760921
OBQ_RAC  OBQ1           +OBQ_REDO1/OBQ_RAC/ONLINELOG/group_5.262.1010760935
OBQ_RAC  OBQ1           +OBQ_REDO1/OBQ_RAC/ONLINELOG/group_6.263.1010760953
OBQ_RAC  OBQ1           +OBQ_REDO1/OBQ_RAC/ONLINELOG/group_7.264.1010760971
OBQ_RAC  OBQ1           +OBQ_REDO1/OBQ_RAC/ONLINELOG/group_8.265.1010760983
OBQ_RAC  OBQ1           +OBQ_REDO1/OBQ_RAC/ONLINELOG/group_9.266.1010760995
OBQ_RAC  OBQ1           +OBQ_REDO2/OBQ_RAC/CONTROLFILE/current.256.1010760835
OBQ_RAC  OBQ1           +OBQ_REDO2/OBQ_RAC/ONLINELOG/group_1.258.1010760889
OBQ_RAC  OBQ1           +OBQ_REDO2/OBQ_RAC/ONLINELOG/group_10.267.1010761015
OBQ_RAC  OBQ1           +OBQ_REDO2/OBQ_RAC/ONLINELOG/group_11.268.1010761029
OBQ_RAC  OBQ1           +OBQ_REDO2/OBQ_RAC/ONLINELOG/group_12.269.1010761045
OBQ_RAC  OBQ1           +OBQ_REDO2/OBQ_RAC/ONLINELOG/group_13.270.1010761059
OBQ_RAC  OBQ1           +OBQ_REDO2/OBQ_RAC/ONLINELOG/group_14.271.1010761073
OBQ_RAC  OBQ1           +OBQ_REDO2/OBQ_RAC/ONLINELOG/group_2.259.1010760903
OBQ_RAC  OBQ1           +OBQ_REDO2/OBQ_RAC/ONLINELOG/group_3.260.1010760915
OBQ_RAC  OBQ1           +OBQ_REDO2/OBQ_RAC/ONLINELOG/group_4.261.1010760927
OBQ_RAC  OBQ1           +OBQ_REDO2/OBQ_RAC/ONLINELOG/group_5.262.1010760941
OBQ_RAC  OBQ1           +OBQ_REDO2/OBQ_RAC/ONLINELOG/group_6.263.1010760963
OBQ_RAC  OBQ1           +OBQ_REDO2/OBQ_RAC/ONLINELOG/group_7.264.1010760977
OBQ_RAC  OBQ1           +OBQ_REDO2/OBQ_RAC/ONLINELOG/group_8.265.1010760989
OBQ_RAC  OBQ1           +OBQ_REDO2/OBQ_RAC/ONLINELOG/group_9.266.1010761001
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/8B23E8D5953F1741E053A138A8C05D72/TEMPFILE/temp.327.1010857475
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/8B24053869A92CB2E053A138A8C08983/TEMPFILE/temp.328.1010857477
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/CONTROLFILE/ctrl.20190613173716
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/DATAFILE/PDB$SEED/sysaux.20190613173716.dbf
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/DATAFILE/PDB$SEED/system.20190613173716.dbf
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/DATAFILE/PDB$SEED/undotbs1.20190613173716.dbf
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/DATAFILE/crsdrlq2db/sysaux.20190613173716.dbf
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/DATAFILE/crsdrlq2db/syscalogdata.20190613173716.dbf
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/DATAFILE/crsdrlq2db/sysgridhomedata.20190613173716.dbf
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/DATAFILE/crsdrlq2db/sysmgmtdata.20190613173716.dbf
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/DATAFILE/crsdrlq2db/sysmgmtdatachafix.20190613173716.dbf
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/DATAFILE/crsdrlq2db/sysmgmtdatadb.20190613173716.dbf
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/DATAFILE/crsdrlq2db/sysmgmtdataq.20190613173716.dbf
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/DATAFILE/crsdrlq2db/system.20190613173716.dbf
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/DATAFILE/crsdrlq2db/undotbs1.20190613173716.dbf
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/DATAFILE/crsdrlq2db/users.20190613173716.dbf
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/DATAFILE/sysaux.20190613173716.dbf
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/DATAFILE/system.20190613173716.dbf
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/DATAFILE/undotbs1.20190613173716.dbf
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/ONLINELOG/group_6.279.1010857441
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/ONLINELOG/redo1.20190613173716.log
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/ONLINELOG/redo2.20190613173716.log
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/ONLINELOG/redo3.20190613173716.log
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/ONLINELOG/redo4.20190613173716.log
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/ONLINELOG/redo5.20190613173716.log
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/TEMPFILE/temp.280.1010857473
From the list above we can conclude that no file is open in DiskGroup GRID. We can also see that a database (OBQ_RAC) is running, which is good, because it lets us confirm that our procedure will not affect system availability.
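Besides lsof, which only lists open files, we can also take a look at what still physically lives in DiskGroup GRID before recreating it. This is just a sketch using the standard ASM views; leftover copies of the old spfile and password file are expected here and will be discarded together with the DiskGroup:

-- Sketch only: list every file that still physically resides in OBQ_GRID1.
-- Leftover copies of the old spfile/password file are expected; they are no
-- longer referenced and will disappear when the DiskGroup is recreated.
select a.name file_alias, f.type, f.bytes
  from v$asm_diskgroup g
  join v$asm_alias a on a.group_number = g.group_number
  join v$asm_file  f on f.group_number = a.group_number
                    and f.file_number  = a.file_number
 where g.name = 'OBQ_GRID1'
 order by a.name;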
We need to dismount the DiskGroup, but the ASM instance still “sees” the old spfile (in DiskGroup GRID). That is fine, but it means we have to use the FORCE keyword to dismount the DiskGroup:
SQL> alter diskgroup obq_grid1 dismount force;

Diskgroup altered.
The DiskGroup is dismounted on node 1, but it is still mounted on node 2, as we can see:
SQL> select inst_id,name,state from gv$asm_diskgroup where name='OBQ_GRID1';

   INST_ID NAME                           STATE
---------- ------------------------------ -----------
         1 OBQ_GRID1                      DISMOUNTED
         2 OBQ_GRID1                      MOUNTED
So we need to execute the same dismount command on node 2 (connected as the grid user with the SYSASM role: sqlplus / as sysasm):
SQL> alter diskgroup obq_grid1 dismount force;

Diskgroup altered.
SQL> select inst_id,name,state from gv$asm_diskgroup where name='OBQ_GRID1';

   INST_ID NAME                           STATE
---------- ------------------------------ -----------
         1 OBQ_GRID1                      DISMOUNTED
         2 OBQ_GRID1                      DISMOUNTED
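Before recreating the DiskGroup, we can optionally double-check which disks still carry an ASM member header (those are the ones that need the FORCE keyword) and which one was dropped when it failed. Again, just a sketch assuming the same ORCL:OBQ_GRID1_* disk paths:

-- Sketch only: with OBQ_GRID1 dismounted on every node, disks that still
-- belong to the old (dismounted) group show HEADER_STATUS=MEMBER and need
-- FORCE in the CREATE DISKGROUP; the failed/replaced disk usually shows
-- CANDIDATE, FORMER or PROVISIONED and does not need it.
select path, header_status, mount_status, mode_status
  from v$asm_disk
 where path like 'ORCL:OBQ_GRID1%'
 order by path;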
With the DiskGroup dismounted in all instances, we will recreate it. We do not need to use FORCE on disk 2, since that disk was removed from the DiskGroup when it failed:
SQL> create diskgroup OBQ_GRID1 normal redundancy
  2  disk
  3  'ORCL:OBQ_GRID1_001' force,
  4  'ORCL:OBQ_GRID1_002',
  5  'ORCL:OBQ_GRID1_003' force
  6  attribute
  7  'compatible.asm' = '12.2.0.1.0',
  8  'compatible.advm' = '12.2.0.1.0',
  9  'compatible.rdbms' = '12.1.0.0.0';

Diskgroup created.
Now, with the DiskGroup created, we need to mount it on all nodes:
[grid@dbnode01 ~]$ srvctl start diskgroup -diskgroup OBQ_GRID1
We need to check that the DiskGroup is mounted on all nodes:
[grid@dbnode01 ~]$ srvctl status diskgroup -diskgroup OBQ_GRID1
Disk Group OBQ_GRID1 is running on dbnode01,dbnode02
We can check if we have any file inside DiskGroup GRID:
[grid@dbnode01 ~]$ asmcmd
ASMCMD> find +obq_grid1 *
ASMCMD>
Well, we are ready to move the files back to DiskGroup GRID!
We start by moving the Voting Disks back to DiskGroup GRID:
[grid@dbnode01 ~]$ crsctl replace votedisk +OBQ_GRID1
Successful addition of voting disk 8d7ca7a541e94f4cbf952f84af6c7d3e.
Successful addition of voting disk 7a45df69e6694fb0bf6737ae965b14cf.
Successful addition of voting disk 982b0a301f044f28bfbfffae3b8aa40e.
Successful deletion of voting disk dde951bdb42b4fa6bf744a3d78e7c962.
Successfully replaced voting disk group with +OBQ_GRID1.
CRS-4266: Voting file(s) successfully replaced
Then we can check that the Voting Disks were moved to the new DiskGroup:
[grid@dbnode01 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   8d7ca7a541e94f4cbf952f84af6c7d3e (ORCL:OBQ_GRID1_001) [OBQ_GRID1]
 2. ONLINE   7a45df69e6694fb0bf6737ae965b14cf (ORCL:OBQ_GRID1_002) [OBQ_GRID1]
 3. ONLINE   982b0a301f044f28bfbfffae3b8aa40e (ORCL:OBQ_GRID1_003) [OBQ_GRID1]
Located 3 voting disk(s).
As the root user, we need to add the OCR to DiskGroup GRID and then remove it from DiskGroup DATA:
[root@dbnode01 ~]# ocrconfig -add +OBQ_GRID1
[root@dbnode01 ~]# ocrconfig -delete +OBQ_DATA
Then we can check that the OCR is on DiskGroup GRID:
[root@dbnode01 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       2268
         Available space (kbytes) :     407300
         ID                       : 2139179401
         Device/File Name         : +OBQ_GRID1
                                    Device/File integrity check succeeded
         Device/File not configured
         Device/File not configured
         Device/File not configured
         Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check succeeded
We can create the spfile in DiskGroup GRID:
[grid@dbnode01 ~]$ sqlplus / as sysasm

SQL*Plus: Release 12.2.0.1.0 Production on Thu Jun 24 19:30:03 2019

Copyright (c) 1982, 2016, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> create spfile='+OBQ_GRID1' from memory;

File created.
And then check if ASM will consider the new spfile:
[grid@dbnode01 ~]$ asmcmd spget
+OBQ_GRID1/crsdrlq2db/ASMPARAMETERFILE/registry.253.1010863395
We can copy the password file from DiskGroup DATA to DiskGroup GRID:
[grid@dbnode01 ~]$ asmcmd
ASMCMD> pwcopy --asm +OBQ_DATA/orapwASM +OBQ_GRID1/orapwASM -f
copying +OBQ_DATA/orapwASM -> +OBQ_GRID1/orapwASM
Then we can check if ASM will use the new password file:
[grid@dbnode01 ~]$ srvctl config asm
ASM home: <CRS home>
Password file: +OBQ_GRID1/orapwASM
Backup of Password file:
ASM listener: LISTENER,LISTENER_OBQ
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM,ASMNET2LSNR_ASM
Now we can verify which files are inside DiskGroup GRID:
[grid@dbnode01 ~]$ asmcmd
ASMCMD> find +obq_grid1 *
+obq_grid1/ASM/
+obq_grid1/ASM/PASSWORD/
+obq_grid1/ASM/PASSWORD/pwdasm.256.1010863567
+obq_grid1/crsdrlq2db/
+obq_grid1/crsdrlq2db/ASMPARAMETERFILE/
+obq_grid1/crsdrlq2db/ASMPARAMETERFILE/REGISTRY.253.1010863395
+obq_grid1/crsdrlq2db/OCRFILE/
+obq_grid1/crsdrlq2db/OCRFILE/REGISTRY.255.1010863507
+obq_grid1/orapwasm
We can also check which files are open in DiskGroup GRID:
[grid@dbnode01 ~]$ asmcmd
ASMCMD> lsof -G OBQ_GRID1
DB_Name  Instance_Name  Path
+ASM     +ASM1          +OBQ_GRID1.255.1010863507
All the steps listed above were executed with the cluster and database online on all nodes, with no downtime. Let’s check the cluster stack status:
[grid@dbnode01 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
Well, everything looks good!
Peace!
Vinicius
Disclaimer
My postings reflect my own views and do not necessarily represent the views of my employer, Accenture.