Hi everyone!
Today's post shows what to do if you lose one of your Voting Disks. You can recreate the GRID DiskGroup with all nodes online and without any downtime. This post walks through the necessary steps.
When the GI installation was performed, we could see that we had 3 Voting Disks:

[grid@dbnode01 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name            Disk group
--  -----    -----------------                ---------            ---------
 1. ONLINE   669c64a818e34ff5bf0841ba735ebd35 (ORCL:OBQ_GRID1_003) [OBQ_GRID1]
 2. ONLINE   9a686c8abbc34fbfbfd2072de584aedb (ORCL:OBQ_GRID1_002) [OBQ_GRID1]
 3. ONLINE   ca3be597470a4fc9bf22dfbfe116130a (ORCL:OBQ_GRID1_001) [OBQ_GRID1]
Located 3 voting disk(s).
A few months later, the customer mentioned that he could only see 2 Voting Disks:

[grid@dbnode01 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name            Disk group
--  -----    -----------------                ---------            ---------
 1. ONLINE   669c64a818e34ff5bf0841ba735ebd35 (ORCL:OBQ_GRID1_003) [OBQ_GRID1]
 2. ONLINE   ca3be597470a4fc9bf22dfbfe116130a (ORCL:OBQ_GRID1_001) [OBQ_GRID1]
Located 2 voting disk(s).
This probably happened because there was a failure in the LUN holding that Voting Disk (or possibly some corruption). When this happens, the Voting Disk goes OFFLINE. If the cluster is then restarted for any reason, even in a planned way, the OFFLINE Voting Disk is automatically removed.
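Before anything else, it can help to confirm how ASM sees the disks. This is a generic sketch (the PATH filter is illustrative for this environment); a failed or corrupted disk typically shows an unhealthy MOUNT_STATUS/HEADER_STATUS or disappears from the view entirely:

```sql
-- Sketch: list the GRID disks as seen by ASM.
-- A healthy member shows MOUNT_STATUS = CACHED and HEADER_STATUS = MEMBER.
select name, path, mount_status, header_status, mode_status
from v$asm_disk
where path like 'ORCL:OBQ_GRID1%';
```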
The goal is to recreate the GRID DiskGroup so that we end up with 3 Voting Disks again.
First we check the cluster status and confirm that the cluster is online:

[grid@dbnode01 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
We temporarily move the Voting Disks to the DATA DiskGroup:

[grid@dbnode01 ~]$ crsctl replace votedisk +OBQ_DATA
Successful addition of voting disk dde951bdb42b4fa6bf744a3d78e7c962.
Successful deletion of voting disk 669c64a818e34ff5bf0841ba735ebd35.
Successful deletion of voting disk ca3be597470a4fc9bf22dfbfe116130a.
Successfully replaced voting disk group with +OBQ_DATA.
CRS-4266: Voting file(s) successfully replaced
We verify that they were actually moved:

[grid@dbnode01 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name          Disk group
--  -----    -----------------                ---------          ---------
 1. ONLINE   dde951bdb42b4fa6bf744a3d78e7c962 (ORCL:OBQ_DATA_001) [OBQ_DATA]
Located 1 voting disk(s).
As root, we add the OCR to the DATA DiskGroup and remove it from the GRID DiskGroup:

[root@dbnode01 ~]# ocrconfig -add +OBQ_DATA
[root@dbnode01 ~]# ocrconfig -delete +OBQ_GRID1
We confirm that the OCR is now stored only in DATA:

[root@dbnode01 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       2272
         Available space (kbytes) :     407296
         ID                       : 2139179401
         Device/File Name         :  +OBQ_DATA
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check succeeded
We create the spfile in the DATA DiskGroup:

[grid@dbnode01 ~]$ sqlplus / as sysasm

SQL*Plus: Release 12.2.0.1.0 Production on Thu Jun 24 19:08:01 2019

Copyright (c) 1982, 2016, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> create spfile='+OBQ_DATA' from memory;

File created.
We check that ASM will use the new spfile at the next restart:

[grid@dbnode01 ~]$ asmcmd spget
+OBQ_DATA/crsdrlq2db/ASMPARAMETERFILE/registry.253.1010862493
We check the ASM configuration and can see that the password file is in the GRID DiskGroup:

[grid@dbnode01 ~]$ srvctl config asm
ASM home: <CRS home>
Password file: +OBQ_GRID1/orapwASM
Backup of Password file:
ASM listener: LISTENER,LISTENER_OBQ
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM,ASMNET2LSNR_ASM
We copy the password file to the DATA DiskGroup:

[grid@dbnode01 ~]$ asmcmd
ASMCMD> pwcopy --asm +OBQ_GRID1/orapwASM +OBQ_DATA/orapwASM -f
copying +OBQ_GRID1/orapwASM -> +OBQ_DATA/orapwASM
ASMCMD>
We check again that ASM now points to the new password file:

[grid@dbnode01 ~]$ srvctl config asm
ASM home: <CRS home>
Password file: +OBQ_DATA/orapwASM
Backup of Password file:
ASM listener: LISTENER,LISTENER_OBQ
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM,ASMNET2LSNR_ASM
We confirm that the DiskGroup is mounted on all cluster nodes:

[grid@dbnode01 ~]$ srvctl status diskgroup -diskgroup OBQ_GRID1
Disk Group OBQ_GRID1 is running on dbnode01,dbnode02
We check whether there are still any open files in the GRID DiskGroup:

[grid@dbnode01 ~]$ asmcmd
ASMCMD> lsof
DB_Name  Instance_Name  Path
+ASM     +ASM1          +OBQ_DATA.255.1010862395
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/accman.286.1010760859
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/accman_idx.287.1010760861
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/archive.288.1010760861
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/archive_idx.289.1010760861
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/bsms.290.1010760861
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/bsms_idx.291.1010760861
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/configuration.292.1010760861
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/configuration_idx.293.1010760861
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/flow_qual_data_01.294.1010760861
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/flow_qual_glob_ttoll_id_2015.295.1010760861
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/flow_qual_glob_ttoll_id_2016.296.1010760861
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/flow_qual_index_01.297.1010760863
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/flow_qual_ttoll_trans_2015_12.298.1010760863
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/flow_qual_ttoll_trans_2016_01.299.1010760863
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/flow_qual_ttoll_trans_2016_02.300.1010760863
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/flow_qual_ttoll_trans_2016_03.301.1010760863
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/flow_qual_ttoll_trans_2016_04.302.1010760863
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/flow_qual_ttoll_trans_2016_05.303.1010760863
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/flow_qual_ttoll_trans_2016_06.304.1010760863
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/flow_qual_ttoll_trans_2016_07.305.1010760863
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/flow_qual_ttoll_trans_2016_08.306.1010760863
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/flow_qual_ttoll_trans_2016_09.307.1010760863
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/flow_qual_ttoll_trans_2016_10.308.1010760863
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/flow_qual_ttoll_trans_2016_11.309.1010760863
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/flow_qual_ttoll_trans_2016_12.310.1010760863
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/fuse_sw.311.1010760863
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/oboqueues.312.1010760863
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/operation.313.1010760863
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/operation.322.1010766479
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/operation_idx.314.1010760865
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/records.315.1010760865
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/report.316.1010760865
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/report_idx.317.1010760865
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/sysaux.283.1010760845
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/system.281.1010760843
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/system.321.1010766449
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/system.324.1010766651
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/system.325.1010766721
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/system.326.1010768715
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/undotbs1.282.1010760845
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/undotbs2.318.1010760865
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/users.284.1010760851
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/DATAFILE/usertbs.285.1010760853
OBQ_RAC  OBQ1           +OBQ_DATA/OBQ_RAC/TEMPFILE/tempts1.323.1010766517
OBQ_RAC  OBQ1           +OBQ_REDO1/OBQ_RAC/CONTROLFILE/current.256.1010760835
OBQ_RAC  OBQ1           +OBQ_REDO1/OBQ_RAC/ONLINELOG/group_1.258.1010760883
OBQ_RAC  OBQ1           +OBQ_REDO1/OBQ_RAC/ONLINELOG/group_10.267.1010761007
OBQ_RAC  OBQ1           +OBQ_REDO1/OBQ_RAC/ONLINELOG/group_11.268.1010761021
OBQ_RAC  OBQ1           +OBQ_REDO1/OBQ_RAC/ONLINELOG/group_12.269.1010761037
OBQ_RAC  OBQ1           +OBQ_REDO1/OBQ_RAC/ONLINELOG/group_13.270.1010761053
OBQ_RAC  OBQ1           +OBQ_REDO1/OBQ_RAC/ONLINELOG/group_14.271.1010761065
OBQ_RAC  OBQ1           +OBQ_REDO1/OBQ_RAC/ONLINELOG/group_2.259.1010760897
OBQ_RAC  OBQ1           +OBQ_REDO1/OBQ_RAC/ONLINELOG/group_3.260.1010760909
OBQ_RAC  OBQ1           +OBQ_REDO1/OBQ_RAC/ONLINELOG/group_4.261.1010760921
OBQ_RAC  OBQ1           +OBQ_REDO1/OBQ_RAC/ONLINELOG/group_5.262.1010760935
OBQ_RAC  OBQ1           +OBQ_REDO1/OBQ_RAC/ONLINELOG/group_6.263.1010760953
OBQ_RAC  OBQ1           +OBQ_REDO1/OBQ_RAC/ONLINELOG/group_7.264.1010760971
OBQ_RAC  OBQ1           +OBQ_REDO1/OBQ_RAC/ONLINELOG/group_8.265.1010760983
OBQ_RAC  OBQ1           +OBQ_REDO1/OBQ_RAC/ONLINELOG/group_9.266.1010760995
OBQ_RAC  OBQ1           +OBQ_REDO2/OBQ_RAC/CONTROLFILE/current.256.1010760835
OBQ_RAC  OBQ1           +OBQ_REDO2/OBQ_RAC/ONLINELOG/group_1.258.1010760889
OBQ_RAC  OBQ1           +OBQ_REDO2/OBQ_RAC/ONLINELOG/group_10.267.1010761015
OBQ_RAC  OBQ1           +OBQ_REDO2/OBQ_RAC/ONLINELOG/group_11.268.1010761029
OBQ_RAC  OBQ1           +OBQ_REDO2/OBQ_RAC/ONLINELOG/group_12.269.1010761045
OBQ_RAC  OBQ1           +OBQ_REDO2/OBQ_RAC/ONLINELOG/group_13.270.1010761059
OBQ_RAC  OBQ1           +OBQ_REDO2/OBQ_RAC/ONLINELOG/group_14.271.1010761073
OBQ_RAC  OBQ1           +OBQ_REDO2/OBQ_RAC/ONLINELOG/group_2.259.1010760903
OBQ_RAC  OBQ1           +OBQ_REDO2/OBQ_RAC/ONLINELOG/group_3.260.1010760915
OBQ_RAC  OBQ1           +OBQ_REDO2/OBQ_RAC/ONLINELOG/group_4.261.1010760927
OBQ_RAC  OBQ1           +OBQ_REDO2/OBQ_RAC/ONLINELOG/group_5.262.1010760941
OBQ_RAC  OBQ1           +OBQ_REDO2/OBQ_RAC/ONLINELOG/group_6.263.1010760963
OBQ_RAC  OBQ1           +OBQ_REDO2/OBQ_RAC/ONLINELOG/group_7.264.1010760977
OBQ_RAC  OBQ1           +OBQ_REDO2/OBQ_RAC/ONLINELOG/group_8.265.1010760989
OBQ_RAC  OBQ1           +OBQ_REDO2/OBQ_RAC/ONLINELOG/group_9.266.1010761001
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/8B23E8D5953F1741E053A138A8C05D72/TEMPFILE/temp.327.1010857475
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/8B24053869A92CB2E053A138A8C08983/TEMPFILE/temp.328.1010857477
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/CONTROLFILE/ctrl.20190613173716
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/DATAFILE/PDB$SEED/sysaux.20190613173716.dbf
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/DATAFILE/PDB$SEED/system.20190613173716.dbf
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/DATAFILE/PDB$SEED/undotbs1.20190613173716.dbf
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/DATAFILE/crsdrlq2db/sysaux.20190613173716.dbf
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/DATAFILE/crsdrlq2db/syscalogdata.20190613173716.dbf
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/DATAFILE/crsdrlq2db/sysgridhomedata.20190613173716.dbf
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/DATAFILE/crsdrlq2db/sysmgmtdata.20190613173716.dbf
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/DATAFILE/crsdrlq2db/sysmgmtdatachafix.20190613173716.dbf
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/DATAFILE/crsdrlq2db/sysmgmtdatadb.20190613173716.dbf
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/DATAFILE/crsdrlq2db/sysmgmtdataq.20190613173716.dbf
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/DATAFILE/crsdrlq2db/system.20190613173716.dbf
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/DATAFILE/crsdrlq2db/undotbs1.20190613173716.dbf
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/DATAFILE/crsdrlq2db/users.20190613173716.dbf
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/DATAFILE/sysaux.20190613173716.dbf
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/DATAFILE/system.20190613173716.dbf
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/DATAFILE/undotbs1.20190613173716.dbf
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/ONLINELOG/group_6.279.1010857441
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/ONLINELOG/redo1.20190613173716.log
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/ONLINELOG/redo2.20190613173716.log
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/ONLINELOG/redo3.20190613173716.log
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/ONLINELOG/redo4.20190613173716.log
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/ONLINELOG/redo5.20190613173716.log
_mgmtdb  -MGMTDB        +OBQ_DATA/_MGMTDB/TEMPFILE/temp.280.1010857473
We can see that there are no open files in the GRID DiskGroup.
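As a side note, asmcmd's lsof accepts a -G flag to filter by DiskGroup, which saves scanning the whole listing. An empty result confirms that nothing is open in the GRID DiskGroup:

```
ASMCMD> lsof -G OBQ_GRID1
```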
Now we need to dismount the GRID DiskGroup. Since the ASM instance still "sees" the old spfile (in the GRID DiskGroup), we need to use the force option:
SQL> alter diskgroup obq_grid1 dismount force;

Diskgroup altered.
We check that the DiskGroup was dismounted, and we can see that it is still mounted on instance 2:

SQL> select inst_id,name,state from gv$asm_diskgroup where name='OBQ_GRID1';

   INST_ID NAME                           STATE
---------- ------------------------------ -----------
         1 OBQ_GRID1                      DISMOUNTED
         2 OBQ_GRID1                      MOUNTED
You must repeat the dismount command on instance 2 as well:

SQL> alter diskgroup obq_grid1 dismount force;

Diskgroup altered.
SQL> select inst_id,name,state from gv$asm_diskgroup where name='OBQ_GRID1';

   INST_ID NAME                           STATE
---------- ------------------------------ -----------
         1 OBQ_GRID1                      DISMOUNTED
         2 OBQ_GRID1                      DISMOUNTED
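Before recreating the DiskGroup, it can be reassuring to confirm how ASM sees the disk headers. This is a generic sketch (the PATH filter is illustrative): disks 001 and 003 should still show HEADER_STATUS = MEMBER, which is why they need the force clause in the create statement, while disk 002 should show FORMER or CANDIDATE since it was already dropped from the group:

```sql
-- Sketch: inspect disk headers before recreating the DiskGroup.
-- Disks still marked MEMBER require the FORCE clause in CREATE DISKGROUP.
select path, header_status, mount_status
from v$asm_disk
where path like 'ORCL:OBQ_GRID1%';
```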
With the DiskGroup dismounted on both instances, let's recreate it. Note that the force option is not used for disk 002, since it no longer belonged to the DiskGroup (the other two disks still carry MEMBER headers, so they require force):

SQL> create diskgroup OBQ_GRID1 normal redundancy
  2  disk
  3  'ORCL:OBQ_GRID1_001' force,
  4  'ORCL:OBQ_GRID1_002',
  5  'ORCL:OBQ_GRID1_003' force
  6  attribute
  7  'compatible.asm' = '12.2.0.1.0',
  8  'compatible.advm' = '12.2.0.1.0',
  9  'compatible.rdbms' = '12.1.0.0.0';

Diskgroup created.
We mount the DiskGroup on all nodes:
[grid@dbnode01 ~]$ srvctl start diskgroup -diskgroup OBQ_GRID1
We verify that the DiskGroup is mounted on all nodes:

[grid@dbnode01 ~]$ srvctl status diskgroup -diskgroup OBQ_GRID1
Disk Group OBQ_GRID1 is running on dbnode01,dbnode02
We check whether there are any files stored in the DiskGroup:

[grid@dbnode01 ~]$ asmcmd
ASMCMD> find +obq_grid1 *
ASMCMD>
We are now ready to move the files back to the GRID DiskGroup!
We replace the Voting Disks back into the GRID DiskGroup:

[grid@dbnode01 ~]$ crsctl replace votedisk +OBQ_GRID1
Successful addition of voting disk 8d7ca7a541e94f4cbf952f84af6c7d3e.
Successful addition of voting disk 7a45df69e6694fb0bf6737ae965b14cf.
Successful addition of voting disk 982b0a301f044f28bfbfffae3b8aa40e.
Successful deletion of voting disk dde951bdb42b4fa6bf744a3d78e7c962.
Successfully replaced voting disk group with +OBQ_GRID1.
CRS-4266: Voting file(s) successfully replaced
We verify that the Voting Disks are in the GRID DiskGroup:

[grid@dbnode01 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name            Disk group
--  -----    -----------------                ---------            ---------
 1. ONLINE   8d7ca7a541e94f4cbf952f84af6c7d3e (ORCL:OBQ_GRID1_001) [OBQ_GRID1]
 2. ONLINE   7a45df69e6694fb0bf6737ae965b14cf (ORCL:OBQ_GRID1_002) [OBQ_GRID1]
 3. ONLINE   982b0a301f044f28bfbfffae3b8aa40e (ORCL:OBQ_GRID1_003) [OBQ_GRID1]
Located 3 voting disk(s).
As root, we add the OCR to the GRID DiskGroup and remove it from the DATA DiskGroup:

[root@dbnode01 ~]# ocrconfig -add +OBQ_GRID1
[root@dbnode01 ~]# ocrconfig -delete +OBQ_DATA
We verify that the OCR is in the GRID DiskGroup:

[root@dbnode01 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       2268
         Available space (kbytes) :     407300
         ID                       : 2139179401
         Device/File Name         : +OBQ_GRID1
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check succeeded
We create the spfile in the GRID DiskGroup again:

[grid@dbnode01 ~]$ sqlplus / as sysasm

SQL*Plus: Release 12.2.0.1.0 Production on Thu Jun 24 19:30:03 2019

Copyright (c) 1982, 2016, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> create spfile='+OBQ_GRID1' from memory;

File created.
We check that ASM will use the new spfile:

[grid@dbnode01 ~]$ asmcmd spget
+OBQ_GRID1/crsdrlq2db/ASMPARAMETERFILE/registry.253.1010863395
We copy the password file back to the GRID DiskGroup:

[grid@dbnode01 ~]$ asmcmd
ASMCMD> pwcopy --asm +OBQ_DATA/orapwASM +OBQ_GRID1/orapwASM -f
copying +OBQ_DATA/orapwASM -> +OBQ_GRID1/orapwASM
We check that ASM now points to the new password file:

[grid@dbnode01 ~]$ srvctl config asm
ASM home: <CRS home>
Password file: +OBQ_GRID1/orapwASM
Backup of Password file:
ASM listener: LISTENER,LISTENER_OBQ
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM,ASMNET2LSNR_ASM
We list the files now stored in the GRID DiskGroup:

[grid@dbnode01 ~]$ asmcmd
ASMCMD> find +obq_grid1 *
+obq_grid1/ASM/
+obq_grid1/ASM/PASSWORD/
+obq_grid1/ASM/PASSWORD/pwdasm.256.1010863567
+obq_grid1/crsdrlq2db/
+obq_grid1/crsdrlq2db/ASMPARAMETERFILE/
+obq_grid1/crsdrlq2db/ASMPARAMETERFILE/REGISTRY.253.1010863395
+obq_grid1/crsdrlq2db/OCRFILE/
+obq_grid1/crsdrlq2db/OCRFILE/REGISTRY.255.1010863507
+obq_grid1/orapwasm
We check which files are open in the GRID DiskGroup:

[grid@dbnode01 ~]$ asmcmd
ASMCMD> lsof -G OBQ_GRID1
DB_Name  Instance_Name  Path
+ASM     +ASM1          +OBQ_GRID1.255.1010863507
All operations were performed online, without any cluster downtime. Finally, we check the cluster status:

[grid@dbnode01 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
Well, everything is resolved!

Cheers!
Vinicius
Disclaimer

My posts reflect my own opinions and do not necessarily represent the opinions of my employer, Accenture.