Monday, September 23, 2019

Step by step Grid Home Upgrade (12.1.0.2 to 12.2.0.1) on Exadata

This note is a step-by-step walkthrough of upgrading a Grid Home to 12.2.0.1 on Exadata.

In this scenario:

  • a new (patched) 12.2.0.1 GI Home had already been prepared on another server (see the sketch below);
  • the cluster is a two-node RAC on an Exadata X3-2 Eighth Rack HC 3TB.
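
For reference, a minimal sketch of how such a tarball could be prepared on the source server (the host name is hypothetical; the paths match the copy/untar step below):

>>> grid@source-host (hypothetical)

cd /u01/app
# pack the already patched 12.2.0.1 home into the archive used below
tar -zcvf /media/gridHome12.2.0.1.tar.gz 12.2.0.1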

==============================================
Copy/untar new GI Home:
==============================================

>>> grid@db01

echo $ORACLE_HOME
+++++
/u01/app/12.1.0.2/grid
+++++

{
cd /u01/app/
tar -zxvf /media/gridHome12.2.0.1.tar.gz --directory /u01/app/
cd /u01/app/12.2.0.1
rm -rf diag checkpoints crsdata srv-db cfgtoollogs admin
}

==============================================
Move old config files:
==============================================

>>> grid@db01

{
cd /u01/app/12.2.0.1/grid/dbs
mkdir old
mv *.* *+* old/
cd /u01/app/12.2.0.1/grid/network/admin
mkdir old
mv *.* old/
cd ~
}

==============================================
Run clone.pl (it will relink new home):
==============================================

>>> grid@db01

cat ASM1_12201.env
+++++
export ORACLE_HOME=/u01/app/12.2.0.1/grid
export ORACLE_SID=+ASM1
export PATH=$ORACLE_HOME/bin:$PATH
+++++

cd $ORACLE_HOME/clone/bin
$ORACLE_HOME/perl/bin/perl clone.pl ORACLE_BASE=/u01/app/grid2 ORACLE_HOME=/u01/app/12.2.0.1/grid OSDBA_GROUP=asmdba OSOPER_GROUP=asmadmin  ORACLE_HOME_NAME=OraGI12Home2 CRS=TRUE
+++++
Setup Oracle Base successful.
..................................................   95% Done.

As a root user, execute the following script(s):
        1. /u01/app/12.2.0.1/grid/root.sh
+++++

>>> root@ db01 / db02

/u01/app/12.2.0.1/grid/root.sh
+++++
Check /u01/app/12.2.0.1/grid/install/root_db01.bankspb.ru_2019-09-03_16-29-29-333843575.log for the output of root script
+++++
Check /u01/app/12.2.0.1/grid/install/root_db02.bankspb.ru_2019-09-03_16-34-16-718771060.log for the output of root script
+++++

==============================================
Check ASM spfile/orapw files:
==============================================

>>> grid@db01

source ASM1.env
asmcmd spget
+++++
+DBFS_DG/info-cluster/asmparameterfile/registry.253.811255231
+++++

asmcmd pwget --asm
+++++
Password file location has not been set for ASM instance
+++++

//
// Move the ASM password file into ASM
//

asmcmd pwcopy /u01/app/12.1.0.2/grid/dbs/orapw+ASM +DBFS_DG/info-cluster/orapwASM
+++++
copying /u01/app/12.1.0.2/grid/dbs/orapw+ASM -> +DBFS_DG/info-cluster/orapwASM
+++++

asmcmd pwset --asm +DBFS_DG/info-cluster/orapwASM
asmcmd pwget --asm
+++++
+DBFS_DG/info-cluster/orapwASM
+++++

==============================================
Check /etc/oratab
==============================================

Ref. to https://unknowndba.blogspot.com/2019/01/lost-entries-in-oratab-after-gi-122.html

>>> grid@ db01 / db02

vi /etc/oratab

(!!!) Remove the comments of the form "# line added by Agent", so that nothing gets wiped from /etc/oratab after the upgrade (!!!)
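
A minimal sketch for stripping those markers on both nodes (the in-place sed edit is an assumption; keep the backup and review the file afterwards):

cp /etc/oratab /etc/oratab.bak
sed -i 's/ *# line added by Agent//' /etc/oratab
grep -c "line added by Agent" /etc/oratab    # expect 0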

==============================================
Prepare "gridsetup.rsp"
==============================================

>>> grid@db01

source ASM*_12201.env
cd $ORACLE_HOME/install/response
egrep -v "^#|^$" gridsetup.rsp | head -10
+++++
oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v12.2.0
INVENTORY_LOCATION=
oracle.install.option=UPGRADE
ORACLE_BASE=/u01/app/grid2
oracle.install.asm.OSDBA=asmdba
oracle.install.asm.OSOPER=asmadmin
oracle.install.asm.OSASM=asmadmin
oracle.install.crs.config.gpnp.scanName=
oracle.install.crs.config.gpnp.scanPort=
oracle.install.crs.config.ClusterConfiguration=
+++++

==============================================
Check system pre-requisites:
==============================================

################
limits.conf 
################

>>> root@ db01 / db02

grep stack /etc/security/limits.conf | grep soft
+++++
* soft stack 10240 <======= Added manually at the end of the file; it was not set there before
+++++
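
If the line is missing (as it was here), a minimal way to add it (assuming no conflicting stack limit is already defined):

>>> root@ db01 / db02

echo "* soft stack 10240" >> /etc/security/limits.conf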

>>> Re-login as "oracle" / "grid"

ulimit -Ss
+++++
10240
+++++

################################
At least 1500 huge pages free
################################

>>> root@ db01 / db02

grep -i huge /proc/meminfo
+++++
HugePages_Total:   94213
HugePages_Free:    30729
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
+++++
HugePages_Total:   94213
HugePages_Free:    30729
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
+++++
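
A quick sanity check that at least 1500 huge pages are free (simple awk one-liner against /proc/meminfo):

awk '/HugePages_Free/ { if ($2 >= 1500) print "OK, free pages: " $2; else print "NOT ENOUGH, free pages: " $2 }' /proc/meminfo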

==============================================
Run the prerequisite checks:
==============================================

>>> grid@db01

source ASM*_12201.env
cd $ORACLE_HOME
./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /u01/app/12.1.0.2/grid -dest_crshome /u01/app/12.2.0.1/grid -dest_version 12.2.0.1 -fixup -verbose

##############
ISSUE:
##############

Pre-check for cluster services setup was unsuccessful.
Checks did not pass for the following nodes:
        db02,db01

Failures were encountered during execution of CVU verification request "stage -pre crsinst".

Verifying Group Membership: asmoper ...FAILED
db01: PRVG-10460 : User "grid" does not belong to group "asmoper" selected for privileges "OSOPER" on node "db01".

Verifying Soft Limit: maximum user processes ...FAILED
db02: PRVG-0447 : Proper soft limit for maximum user processes was not found on node "db02" [Expected >= "2047" ; Found = "1024"].
db01: PRVG-0447 : Proper soft limit for maximum user processes was not found on node "db01" [Expected >= "2047" ; Found = "1024"].

Verifying resolv.conf Integrity ...FAILED
PRVG-12861 : 'options timeout' entry does not exist in resolver configuration file "/etc/resolv.conf" on nodes "db01"

Verifying OLR Integrity ...FAILED
db02: PRVG-2033 : Permissions of file "/u01/app/12.1.0.2/grid/cdata/db02.olr" did not match the expected octal value on node "db02". [Expected = "0600" ; Found = "0775"]
db01: PRVG-2033 : Permissions of file "/u01/app/12.1.0.2/grid/cdata/db01.olr" did not match the expected octal value on node "db01". [Expected = "0600" ; Found = "0775"]

CVU operation performed:      stage -pre crsinst
Date:                         Sep 5, 2019 12:01:43 PM
CVU home:                     /u01/app/12.2.0.1/grid/
User:                         grid
******************************************************************************************
Following is the list of fixable prerequisites selected to fix in this session
******************************************************************************************
--------------                ---------------     ----------------
Check failed.                 Failed on nodes     Reboot required?
--------------                ---------------     ----------------
Group Membership: asmoper     db01                no
Soft Limit: maximum user      db02,db01           no
processes


Execute "/tmp/CVU_12.2.0.1.0_grid/runfixup.sh" as root user on nodes "db01,db02" to perform the fix up operations manually

Press ENTER key to continue after execution of "/tmp/CVU_12.2.0.1.0_grid/runfixup.sh" has completed on nodes "db01,db02"
Fix: Group Membership: asmoper

  Node Name                             Status
  ------------------------------------  ------------------------
  db01                                  failed

ERROR:
db01: PRVG-9023 : Manual fix up command "/tmp/CVU_12.2.0.1.0_grid/runfixup.sh" was not issued by root user on node "db01"

Result:
"Group Membership: asmoper" could not be fixed on nodes "db01"

Fix: Soft Limit: maximum user processes

  Node Name                             Status
  ------------------------------------  ------------------------
  db02                                  failed
  db01                                  failed

ERROR:
db02: PRVG-9023 : Manual fix up command "/tmp/CVU_12.2.0.1.0_grid/runfixup.sh" was not issued by root user on node "db02"
db01: PRVG-9023 : Manual fix up command "/tmp/CVU_12.2.0.1.0_grid/runfixup.sh" was not issued by root user on node "db01"

Result:
"Soft Limit: maximum user processes" could not be fixed on nodes "db02,db01"

##############
FIX:
##############

>>> root@ db01 / db02

/tmp/CVU_12.2.0.1.0_grid/runfixup.sh

>>> grid@db01

source ASM*_12201.env
cd $ORACLE_HOME
./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /u01/app/12.1.0.2/grid -dest_crshome /u01/app/12.2.0.1/grid -dest_version 12.2.0.1 -fixup -verbose

##############
ISSUE:
##############

Pre-check for cluster services setup was unsuccessful.
Checks did not pass for the following nodes:
        db02,db01

Failures were encountered during execution of CVU verification request "stage -pre crsinst".

Verifying resolv.conf Integrity ...FAILED
PRVG-12861 : 'options timeout' entry does not exist in resolver configuration file "/etc/resolv.conf" on nodes "db01"

Verifying OLR Integrity ...FAILED
db02: PRVG-2033 : Permissions of file "/u01/app/12.1.0.2/grid/cdata/db02.olr" did not match the expected octal value on node "db02". [Expected = "0600" ; Found = "0775"]
db01: PRVG-2033 : Permissions of file "/u01/app/12.1.0.2/grid/cdata/db01.olr" did not match the expected octal value on node "db01". [Expected = "0600" ; Found = "0775"]


CVU operation performed:      stage -pre crsinst
Date:                         Sep 5, 2019 12:10:26 PM
CVU home:                     /u01/app/12.2.0.1/grid/
User:                         grid

//
// Manual fixes:
//

#############################################
PRVG-12861 : 'options timeout' entry does not exist in resolver configuration file "/etc/resolv.conf" on nodes "db01"
#############################################

>>> root@db01

vi /etc/resolv.conf

There was apparently a typo in the file:

Before: option  timeout:4
After:  options timeout:4

Before: option  attempts:2
After:  options attempts:2

The file now matches the one on db02.
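
For reference, the relevant options lines in /etc/resolv.conf after the fix (nameserver/search entries are environment-specific and omitted here):

options timeout:4
options attempts:2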

#############################################
PRVG-2033 : Permissions of files "/u01/app/12.1.0.2/grid/cdata/db0*.olr"
#############################################

>>> root@db01

ls -ltr /u01/app/12.1.0.2/grid/cdata/db01.olr
+++++
-rwxrwxr-x 1 root oinstall 503484416 Sep  5 08:50 /u01/app/12.1.0.2/grid/cdata/db01.olr
+++++

chmod 0600 /u01/app/12.1.0.2/grid/cdata/db01.olr

ls -ltr /u01/app/12.1.0.2/grid/cdata/db01.olr
+++++
-rw------- 1 root oinstall 503484416 Sep  5 08:50 /u01/app/12.1.0.2/grid/cdata/db01.olr
+++++

>>> root@db02

ls -ltr /u01/app/12.1.0.2/grid/cdata/db02.olr
+++++
-rwxrwxr-x 1 root oinstall 503484416 Sep  5 11:16 /u01/app/12.1.0.2/grid/cdata/db02.olr
+++++

chmod 0600 /u01/app/12.1.0.2/grid/cdata/db02.olr

ls -ltr /u01/app/12.1.0.2/grid/cdata/db02.olr
+++++
-rw------- 1 root oinstall 503484416 Sep  5 11:16 /u01/app/12.1.0.2/grid/cdata/db02.olr
+++++

//
// Re-run cluvfy once again
//

>>> grid@db01

source ASM*_12201.env
cd $ORACLE_HOME
./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /u01/app/12.1.0.2/grid -dest_crshome /u01/app/12.2.0.1/grid -dest_version 12.2.0.1 -fixup -verbose
+++++++++++++++++++++++++++++++++++++++++++++
Pre-check for cluster services setup was successful.

CVU operation performed:      stage -pre crsinst
Date:                         Sep 5, 2019 12:24:52 PM
CVU home:                     /u01/app/12.2.0.1/grid/
User:                         grid
+++++++++++++++++++++++++++++++++++++++++++++

==============================================
Upgrade to GI 12.2:
==============================================

#############################
ASM memory setting
#############################

>>> grid@db01

sqlplus / as sysasm

SQL> set lines 222
set pages 999
col name for a25
col value for a45
select NAME, VALUE from v$parameter where name in ('sga_max_size','sga_target','memory_target','memory_max_target','use_large_pages');
++++++++++
NAME                      VALUE
------------------------- ---------------------------------------------
sga_max_size              2147483648
use_large_pages           TRUE
sga_target                2147483648
memory_target             0
memory_max_target         0
++++++++++

SQL> alter system set sga_max_size = 3G scope=spfile sid='*';
SQL> alter system set sga_target = 3G scope=spfile sid='*';
SQL> alter system set memory_target=0 sid='*' scope=spfile;
SQL> alter system set memory_max_target=0 sid='*' scope=spfile /* required workaround */;
SQL> alter system reset memory_max_target sid='*' scope=spfile;
SQL> alter system set use_large_pages=true sid='*' scope=spfile /* 11.2.0.2 and later(Linux only) */;
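
A minimal check that the new values actually landed in the spfile (v$spparameter reflects spfile contents; sizes are reported in bytes):

SQL> col name for a25
col value for a45
select sid, name, value from v$spparameter
where name in ('sga_max_size','sga_target','memory_target','memory_max_target','use_large_pages');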

#############################
Reset misscount to default
#############################

//
// The misscount parameter is the maximum time, in seconds, that a network heartbeat can be missed before a node eviction occurs.
// It must be reset to its default value before upgrading, and this has to be done as the GI owner.
//

>>> grid@db01

source ASM1.env
crsctl unset css misscount
+++++
CRS-4647: Configuration parameter misscount is reset to default operation value.
+++++

#######################################
Verify no active rebalance is running
#######################################

>>> grid@db01

sqlplus / as sysasm

SQL> select count(*) from gv$asm_operation;

  COUNT(*)
----------
         0

#######################################
Run "gridSetup.sh"
#######################################

>>> grid@db01

source ~/ASM1_12201.env
cd $ORACLE_HOME
./gridSetup.sh -silent -responseFile /u01/app/12.2.0.1/grid/install/response/gridsetup.rsp -J-Doracle.install.mgmtDB=false -J-Doracle.install.crs.enableRemoteGIMR=false

+++++++++++++++++++++++++++++++++++++++++++++++++++++++
[WARNING] [INS-13014] Target environment does not meet some optional requirements.
   CAUSE: Some of the optional prerequisites are not met. See logs for details. /u01/app/oraInventory/logs/GridSetupActions2019-09-06_03-51-39PM/gridSetupActions2019-09-06_03-51-39PM.log
   ACTION: Identify the list of failed prerequisite checks from the log: /u01/app/oraInventory/logs/GridSetupActions2019-09-06_03-51-39PM/gridSetupActions2019-09-06_03-51-39PM.log. 
Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.

As a root user, execute the following script(s):
        1. /u01/app/12.2.0.1/grid/rootupgrade.sh

Execute /u01/app/12.2.0.1/grid/rootupgrade.sh on the following nodes:
[db01, db02]

Run the script on the local node first. After successful completion, you can start the script in parallel on all other nodes, except a node you designate as the last node. 
When all the nodes except the last node are done successfully, run the script on the last node.

Successfully Setup Software.
As install user, execute the following command to complete the configuration.
        /u01/app/12.2.0.1/grid/gridSetup.sh -executeConfigTools -responseFile /u01/app/12.2.0.1/grid/install/response/gridsetup.rsp [-silent]
+++++++++++++++++++++++++++++++++++++++++++++++++++++++

#######################################
Run "rootupgrade.sh"
#######################################

(!!!) This is a 2-node RAC, so rootupgrade.sh must NOT be run on both nodes in parallel (!!!)
(!!!) With a 4-node RAC, for example, nodes 2 and 3 could be run in parallel, and the last (4th) node would then be run on its own (!!!)

//
// Note that CRS is stopped on the node where rootupgrade.sh is run, so the instances on that node will take an outage during this operation.
// Consider relocating your services accordingly to avoid application downtime, as in the sketch below.
//
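
A minimal sketch of relocating a service away from the node about to be upgraded (database, service and instance names are hypothetical):

>>> oracle@db01

srvctl relocate service -db MYDB -service MYSVC -oldinst MYDB1 -newinst MYDB2
srvctl status service -db MYDB -service MYSVC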

>>> root@db01

cd /tmp
/u01/app/12.2.0.1/grid/rootupgrade.sh

~ 15 min

!!!
!!! An interesting thing to note: once a node has been upgraded, its softwareversion already shows the target release (12.2), while the activeversion is still the old one (12.1).
!!! The activeversion only changes to 12.2 after rootupgrade.sh has completed on the last node.
!!!

crsctl query crs softwareversion <===== Should be 12.2.0.1
+++++
Oracle Clusterware version on node [db01] is [12.2.0.1.0]
+++++

crsctl query crs activeversion <===== Should be 12.1.0.2
+++++
Oracle Clusterware active version on the cluster is [12.1.0.2.0]
+++++

>>> root@db02

cd /tmp
/u01/app/12.2.0.1/grid/rootupgrade.sh

~ 15 min

crsctl query crs softwareversion <===== Should be 12.2.0.1
+++++
Oracle Clusterware version on node [db02] is [12.2.0.1.0]
+++++

crsctl query crs activeversion <===== Should be 12.2.0.1 now too
+++++
Oracle Clusterware active version on the cluster is [12.2.0.1.0]
+++++

>>> grid@db01

crsctl query crs softwareversion <===== Should be 12.2.0.1
+++++
Oracle Clusterware version on node [db01] is [12.2.0.1.0]
+++++

crsctl query crs activeversion <===== Should be 12.2.0.1 now too
+++++
Oracle Clusterware active version on the cluster is [12.2.0.1.0]
+++++

#######################################
Run "gridSetup.sh -executeConfigTools"
#######################################

>>> grid@db01

/u01/app/12.2.0.1/grid/gridSetup.sh -executeConfigTools -responseFile /u01/app/12.2.0.1/grid/install/response/gridsetup.rsp -silent

>>> grid@db02

/u01/app/12.2.0.1/grid/gridSetup.sh -executeConfigTools -responseFile /u01/app/12.2.0.1/grid/install/response/gridsetup.rsp -silent

#######################################
Check that GI is relinked with RDS
#######################################

//
// It is worth double checking that the new GI Home is properly relinked with RDS to avoid future performance issues. 
//

>>> grid@db01

/u01/app/12.2.0.1/grid/bin/skgxpinfo
+++++
udp
+++++

>>> grid@db02

/u01/app/12.2.0.1/grid/bin/skgxpinfo
+++++
udp
+++++

!!!
!!! The output here is udp, so the GI Home has to be relinked with RDS:
!!!

Ref. to http://oracledbalogs.blogspot.com/2016/01/relink-oracle-binary-to-use-rds.html

(!!!) Before relinking, it is better to stop all services on the node and start them again once the relink is done (!!!)

>>> root@db01

md5sum $ORACLE_HOME/lib/libskgxp12.so
+++++
f00c3883914ca2fe13613638c02806b1  /u01/app/12.2.0.1/grid/lib/libskgxp12.so
+++++

md5sum $ORACLE_HOME/lib/libskgxpr.so
+++++
a0f960f8b729c19396699b68d9786c88  /u01/app/12.2.0.1/grid/lib/libskgxpr.so
+++++

(!!!) The two checksums should match if the home is already linked to use RDS; if they differ, a relink is required. (!!!)

make -C /u01/app/12.2.0.1/grid/rdbms/lib -f ins_rdbms.mk ipc_rds ioracle
md5sum $ORACLE_HOME/lib/libskgxp12.so
+++++
a0f960f8b729c19396699b68d9786c88  /u01/app/12.2.0.1/grid/lib/libskgxp12.so
+++++

md5sum $ORACLE_HOME/lib/libskgxpr.so
+++++
a0f960f8b729c19396699b68d9786c88  /u01/app/12.2.0.1/grid/lib/libskgxpr.so
+++++

(!!!) The checksums should match now (!!!)

/u01/app/12.2.0.1/grid/bin/skgxpinfo
+++++
rds
+++++

>>> root@db02

md5sum $ORACLE_HOME/lib/libskgxp12.so
+++++
f00c3883914ca2fe13613638c02806b1  /u01/app/12.2.0.1/grid/lib/libskgxp12.so
+++++

md5sum $ORACLE_HOME/lib/libskgxpr.so
+++++
a0f960f8b729c19396699b68d9786c88  /u01/app/12.2.0.1/grid/lib/libskgxpr.so
+++++

make -C /u01/app/12.2.0.1/grid/rdbms/lib -f ins_rdbms.mk ipc_rds ioracle
md5sum $ORACLE_HOME/lib/libskgxp12.so
+++++
a0f960f8b729c19396699b68d9786c88  /u01/app/12.2.0.1/grid/lib/libskgxp12.so
+++++

md5sum $ORACLE_HOME/lib/libskgxpr.so
+++++
a0f960f8b729c19396699b68d9786c88  /u01/app/12.2.0.1/grid/lib/libskgxpr.so
+++++

/u01/app/12.2.0.1/grid/bin/skgxpinfo
+++++
rds
+++++

==============================================
Check the status of the cluster:
==============================================

>>> grid@ db01 / db02

crsctl check cluster -all
+++++
**************************************************************
db01:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
db02:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
+++++

==============================================
Set Flex ASM Cardinality to "ALL":
==============================================

//
// Starting with release 12.2, ASM is configured as "Flex ASM".
// By default, the Flex ASM cardinality is set to 3.
// This means that in clusters with four or more database nodes, ASM instances may run on only three of them.
// Nodes without a local ASM instance use an ASM instance on a remote node within the cluster.
// Only when the cardinality is set to "ALL" does ASM bring up the additional instances needed to satisfy the setting.
//

>>> grid@ db01 / db02

srvctl config asm
+++++
ASM home: <CRS home>
Password file: +DBFS_DG/info-cluster/orapwASM
Backup of Password file: +DBFS_DG/orapwASM_backup
ASM listener: LISTENER
ASM instance count: ALL
Cluster ASM listener: ASMNET1LSNR_ASM
+++++

#srvctl modify asm -count ALL    =====> already set to ALL here, so this was not needed

==============================================
Update compatible.asm to 12.2:
==============================================

//
// Now that ASM 12.2 is running, it is recommended to raise compatible.asm to 12.2 in order to take advantage of the 12.2 new features.
//

>>> grid@db01

SQL> set lines 222
col COMPATIBILITY for a35
select name, COMPATIBILITY from v$asm_diskgroup;
+++++
NAME                           COMPATIBILITY
------------------------------ -----------------------------------
DATA_INFO                      11.2.0.3.0
DBFS_DG                        12.1.0.2.0
RECO_INFO                      11.2.0.3.0
SPARSE                         12.1.0.2.0
+++++

SQL> ALTER DISKGROUP DATA_INFO SET ATTRIBUTE 'compatible.asm' = '12.2.0.1.0';
SQL> ALTER DISKGROUP DBFS_DG SET ATTRIBUTE 'compatible.asm' = '12.2.0.1.0';
SQL> ALTER DISKGROUP RECO_INFO SET ATTRIBUTE 'compatible.asm' = '12.2.0.1.0';
SQL> ALTER DISKGROUP SPARSE SET ATTRIBUTE 'compatible.asm' = '12.2.0.1.0';

SQL> select name, COMPATIBILITY from v$asm_diskgroup ;
+++++
NAME                           COMPATIBILITY
------------------------------ -----------------------------------
DATA_INFO                      12.2.0.1.0
DBFS_DG                        12.2.0.1.0
RECO_INFO                      12.2.0.1.0
SPARSE                         12.2.0.1.0
+++++

==============================================
Update the Inventory:
==============================================

>>> grid@db01

$ORACLE_HOME/oui/bin/runInstaller -ignoreSysPrereqs -updateNodeList ORACLE_HOME=/u01/app/12.2.0.1/grid "CLUSTER_NODES={db01,db02}" CRS=true LOCAL_NODE=db01

==============================================
Edit "/etc/oratab" to new 12.2 Home:
==============================================

>>> grid@ db01 / db02

vi /etc/oratab
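
Alternatively, a minimal sketch that rewrites the old home path in /etc/oratab (in-place sed is an assumption; keep the backup and verify the result):

cp /etc/oratab /etc/oratab.pre122
sed -i 's|/u01/app/12.1.0.2/grid|/u01/app/12.2.0.1/grid|g' /etc/oratab
grep ASM /etc/oratab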

==============================================
Modify env files:
==============================================

>>> grid@ db01 / db02

mv ASM1.env 12102_ASM1.env
mv ASM1_12201.env ASM1.env

mv ASM2.env 12102_ASM2.env
mv ASM2_12201.env ASM2.env

>>> root@ db01 / db02

vi .profile_crs
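
For reference, the kind of change expected in root's .profile_crs (the exact variable names are assumptions; adapt them to what the file actually exports):

# before
export ORACLE_HOME=/u01/app/12.1.0.2/grid
# after
export ORACLE_HOME=/u01/app/12.2.0.1/grid
export PATH=$ORACLE_HOME/bin:$PATH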

==============================================
Disable Diagsnap for Exadata:
==============================================

>>> grid@ db01 / db02

//
// Due to unpublished bugs 24900613 25785073 and 25810099, Diagsnap should be disabled for Exadata.
//

cd $ORACLE_HOME/bin
./oclumon manage -disable diagsnap
+++++
Diagsnap option is successfully Disabled on db01
Diagsnap option is successfully Disabled on db02
Successfully Disabled diagsnap
+++++

==============================================
Modify the dbfs_mount cluster resource:
==============================================

//
// Update the mount-dbfs.sh script and the ACTION_SCRIPT attribute of the dbfs_mount cluster resource so that they point to the new location of mount-dbfs.sh.
//

>>> root@ db01 / db02

(!!!) The "GRID_HOME" value needs to be updated (!!!)

vi /etc/oracle/mount-dbfs.conf
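
A minimal sketch of the expected change in /etc/oracle/mount-dbfs.conf (only the GRID_HOME line; everything else in the file stays as-is):

# before
GRID_HOME=/u01/app/12.1.0.2/grid
# after
GRID_HOME=/u01/app/12.2.0.1/grid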

References:


12.2 Grid Infrastructure and Database Upgrade steps for Exadata Database Machine running 11.2.0.3 and later on Oracle Linux (Doc ID 2111010.1)
https://unknowndba.blogspot.com/2018/11/upgrade-grid-infrastructure-to-122.html