Saturday, January 11, 2020

OCFS2 (Oracle Cluster File System) #3: Resize volume (offline / NO data loss)

This note describes the process of extending an OCFS2 volume.



===================================================
0. Check before:
===================================================

>>> root@srv-ocfs2-node1 / srv-ocfs2-node2

$ /etc/init.d/o2cb status
++++++++++
Driver for "configfs": Loaded
Filesystem "configfs": Mounted
Stack glue driver: Loaded
Stack plugin "o2cb": Loaded
Driver for "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster "testCluster": Online
  Heartbeat dead threshold: 31
  Network idle timeout: 30000
  Network keepalive delay: 2000
  Network reconnect delay: 2000
  Heartbeat mode: Local
Checking O2CB heartbeat: Active
Debug file system at /sys/kernel/debug: mounted
++++++++++

$ df -h /u01/
++++++++++
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1        10G  283M  9.8G   3% /u01
++++++++++

$ ls -ltr /u01/*
++++++++++
-rw-r--r--. 1 root root    0 Jan 11 23:40 /u01/123
-rw-r--r--. 1 root root    0 Jan 11 23:41 /u01/789
-rw-r--r--. 1 root root    0 Jan 11 23:41 /u01/456
-rw-r--r--. 1 root root    0 Jan 11 23:56 /u01/123n2
-rw-r--r--. 1 root root    0 Jan 11 23:56 /u01/456n2
-rw-r--r--. 1 root root    0 Jan 11 23:56 /u01/789n2
++++++++++

===================================================
1. Backup all data on the filesystem:
===================================================

Repartitioning a disk/device is a destructive process that may result in complete loss of volume data.
Back up the contents of the volume to be resized before proceeding.
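A minimal backup sketch, assuming the volume is still mounted at /u01 and there is room for the archive elsewhere (the `backup_volume` helper name and paths are mine, not part of any OCFS2 tooling):

```shell
# Sketch: back up a mounted volume to a compressed tar archive and
# verify the archive is readable before any repartitioning.
backup_volume() {
    src=$1; dest=$2
    tar -czpf "$dest" -C "$src" . || return 1
    # A backup you cannot read back is no backup: list it to verify.
    tar -tzf "$dest" > /dev/null && echo "backup verified: $dest"
}

# Usage (on one cluster node, while /u01 is still mounted):
# backup_volume /u01 /root/u01-backup-$(date +%Y%m%d).tar.gz
```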

===================================================
2. Globally unmount the filesystem to be resized:
===================================================

Note: An OCFS2 filesystem resize can only be performed with the filesystem unmounted from all cluster nodes.
Unmount the filesystem to be resized on all cluster nodes, then use the mounted.ocfs2(8) command to verify whether any node still has the volume mounted, e.g.:

>>> root@srv-ocfs2-node1

$ mounted.ocfs2 -f
++++++++++
Device     Stack  Cluster  F  Nodes
/dev/sdb1  o2cb               srv-ocfs2-node1.oracle.com, srv-ocfs2-node2.oracle.com
++++++++++

>>> root@srv-ocfs2-node1 / srv-ocfs2-node2

$ umount /u01

>>> root@srv-ocfs2-node1

$ mounted.ocfs2 -f
++++++++++
Device     Stack  Cluster  F  Nodes
/dev/sdb1  o2cb               Not mounted
++++++++++

===================================================
3. Perform a filesystem check:
===================================================

Note: Before resizing the device or filesystem, perform a filesystem check. The following performs a forced check without repair (-f forces the check even if the filesystem appears clean; -n answers "no" to all repair prompts).

>>> root@srv-ocfs2-node1

$ fsck.ocfs2 -fn /dev/sdb1
++++++++++
fsck.ocfs2 1.8.6
Checking OCFS2 filesystem in /dev/sdb1:
  Label:              <NONE>
  UUID:               55CC71D5B1E946339E80F38CA46BB2B1
  Number of blocks:   2620928
  Block size:         4096
  Number of clusters: 2620928
  Cluster size:       4096
  Number of slots:    4

** Skipping slot recovery because -n was given. **
/dev/sdb1 was run with -f, check forced.
Pass 0a: Checking cluster allocation chains
Pass 0b: Checking inode allocation chains
Pass 0c: Checking extent block allocation chains
Pass 1: Checking inodes and blocks
Pass 2: Checking directory entries
Pass 3: Checking directory connectivity
Pass 4a: Checking for orphaned inodes
Pass 4b: Checking inodes link counts
All passes succeeded.
++++++++++

Correct/repair any issues before proceeding, e.g.:

$ fsck.ocfs2 -fy /dev/sdb1
++++++++++
fsck.ocfs2 1.8.6
Checking OCFS2 filesystem in /dev/sdb1:
  Label:              <NONE>
  UUID:               55CC71D5B1E946339E80F38CA46BB2B1
  Number of blocks:   2620928
  Block size:         4096
  Number of clusters: 2620928
  Cluster size:       4096
  Number of slots:    4

/dev/sdb1 was run with -f, check forced.
Pass 0a: Checking cluster allocation chains
Pass 0b: Checking inode allocation chains
Pass 0c: Checking extent block allocation chains
Pass 1: Checking inodes and blocks
Pass 2: Checking directory entries
Pass 3: Checking directory connectivity
Pass 4a: Checking for orphaned inodes
Pass 4b: Checking inodes link counts
All passes succeeded.
++++++++++

===================================================
4. Resize the underlying device/partition:
===================================================

Note: The underlying LUN/device/partition on which the filesystem resides must be resized before the filesystem itself can be. The method, and therefore the commands required to resize the device, will differ depending on the storage solution used. Note that as of OCFS2 version 1.2.6-1 (the latest available at the time of writing), OCFS2 volumes may be grown, but not shrunk.

>>> root@srv-ocfs2-node1

-> Check before resize

$ lsblk | grep -i sdb
++++++++++
sdb                                   8:16   0   10G  0 disk
  sdb1                                8:17   0   10G  0 part
++++++++++

-> After LUN resize

$ {
echo "- - -" > /sys/class/scsi_host/host0/scan
echo "- - -" > /sys/class/scsi_host/host1/scan
echo "- - -" > /sys/class/scsi_host/host2/scan
}

$ lsblk | grep -i sdb
+++++
sdb                                   8:16   0   15G  0 disk
  sdb1                                8:17   0   10G  0 part
+++++

Note: When using OCFS2 filesystems on partitioned devices, always recreate the partition using the original start cylinder and a larger end cylinder.

Note: In this case I will use the "parted" utility, because nowadays we usually work with LUN sizes in the terabytes. Unfortunately, "fdisk" cannot handle such large sizes (bigger than 2 TB).

$ parted /dev/sdb
+++++
GNU Parted 2.1
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Model: VMware, VMware Virtual S (scsi)
Disk /dev/sdb: 16.1GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  10.7GB  10.7GB               primary

(parted) rm 1
(parted) mkpart primary 1049kB 15GB
(parted) print
Model: VMware, VMware Virtual S (scsi)
Disk /dev/sdb: 16.1GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  15.0GB  15.0GB               primary

(parted) quit
Information: You may need to update /etc/fstab.
+++++

$ lsblk | grep -i sdb
+++++
sdb                                   8:16   0   15G  0 disk
  sdb1                                8:17   0   14G  0 part
+++++
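The interactive parted session above can also be scripted non-interactively, which is handy when repeating the procedure on many volumes. A hedged sketch (the `repartition` helper name is mine; the device, start, and end values are from this example, and the start offset must match the original partition exactly for the data to survive):

```shell
# Sketch: delete partition 1 and recreate it at the same start offset
# with a larger end, without interactive prompts.
repartition() {
    dev=$1; start=$2; end=$3
    parted --script "$dev" rm 1
    parted --script "$dev" mkpart primary "$start" "$end"
    parted --script "$dev" print
}

# Usage (values from this walkthrough):
# repartition /dev/sdb 1049kB 15GB
```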

===================================================
5. Re-scan iSCSI on other cluster nodes:
===================================================

>>> root@srv-ocfs2-node2

$ {
echo "- - -" > /sys/class/scsi_host/host0/scan
echo "- - -" > /sys/class/scsi_host/host1/scan
echo "- - -" > /sys/class/scsi_host/host2/scan
}

$ lsblk | grep -i sdb
+++++
sdb                                   8:16   0   15G  0 disk
  sdb1                                8:17   0   14G  0 part
+++++
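The rescan loops above hard-code host0..host2, which happens to match this example's machines; SCSI host numbering varies between systems. A sketch that rescans whatever hosts exist (the `rescan_scsi_hosts` helper name and its optional base-path parameter are mine):

```shell
# Sketch: trigger a rescan on every SCSI host present, instead of
# hard-coding host0..host2. Takes an optional sysfs base path so the
# loop can be exercised outside /sys; defaults to the real location.
rescan_scsi_hosts() {
    base=${1:-/sys/class/scsi_host}
    for scan in "$base"/host*/scan; do
        # Skip the literal glob pattern when no hosts match.
        [ -e "$scan" ] && echo "- - -" > "$scan"
    done
}

# Usage (as root, on each cluster node):
# rescan_scsi_hosts
```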

===================================================
6. Resize the OCFS2 filesystem:
===================================================

Note: Once the device/partition has been resized and its new size is visible to all cluster nodes, resize the OCFS2 filesystem using the "tunefs.ocfs2" command. With no size argument, -S grows the volume to fill the device/partition.

>>> root@srv-ocfs2-node1

$ tunefs.ocfs2 -S /dev/sdb1

> No output was produced in my case

===================================================
7. Perform filesystem check:
===================================================

Note: Having resized the OCFS2 partition, perform another filesystem check before attempting to mount the resized volume.

>>> root@srv-ocfs2-node1

$ fsck.ocfs2 -fn /dev/sdb1
++++++++++
fsck.ocfs2 1.8.6
Checking OCFS2 filesystem in /dev/sdb1:
  Label:              <NONE>
  UUID:               55CC71D5B1E946339E80F38CA46BB2B1
  Number of blocks:   3661824
  Block size:         4096
  Number of clusters: 3661824
  Cluster size:       4096
  Number of slots:    4

** Skipping slot recovery because -n was given. **
/dev/sdb1 was run with -f, check forced.
Pass 0a: Checking cluster allocation chains
Pass 0b: Checking inode allocation chains
Pass 0c: Checking extent block allocation chains
Pass 1: Checking inodes and blocks
Pass 2: Checking directory entries
Pass 3: Checking directory connectivity
Pass 4a: Checking for orphaned inodes
Pass 4b: Checking inodes link counts
All passes succeeded.
++++++++++

===================================================
8. Mount the resized OCFS2 volume:
===================================================

Remount the resized OCFS2 filesystem on all cluster nodes.

>>> root@srv-ocfs2-node1 / srv-ocfs2-node2

$ mounted.ocfs2 -f
+++++
Device     Stack  Cluster  F  Nodes
/dev/sdb1  o2cb               Not mounted
+++++

$ mount /u01
$ df -h /u01
+++++
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1        14G  283M   14G   2% /u01
+++++

$ mounted.ocfs2 -f
+++++
Device     Stack  Cluster  F  Nodes
/dev/sdb1  o2cb               srv-ocfs2-node1.oracle.com, srv-ocfs2-node2.oracle.com
+++++

$ ls -ltr /u01/*
+++++
-rw-r--r--. 1 root root    0 Jan 11 23:40 /u01/123
-rw-r--r--. 1 root root    0 Jan 11 23:41 /u01/789
-rw-r--r--. 1 root root    0 Jan 11 23:41 /u01/456
-rw-r--r--. 1 root root    0 Jan 11 23:56 /u01/123n2
-rw-r--r--. 1 root root    0 Jan 11 23:56 /u01/456n2
-rw-r--r--. 1 root root    0 Jan 11 23:56 /u01/789n2
+++++

===================================================
* Sources:
===================================================

1352663.1 - How to dynamically resize a SAN disk and OCFS2 volume
445082.1  - How to resize an OCFS2 filesystem