Introduction
By default, Oracle ACS configures the root file system with 30GB of space on Exadata compute nodes X2 and above. In most cases this space is sufficient to store the operating system, Exadata software, log files and diagnostic files. Over time, however, if you keep patches and software on the root file system, or if log files are not purged, this space fills up quickly. Exadata X2 and above uses an LVM volume group, so it is easy to extend the logical volume on which the root file system is mounted.
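Before extending, it can help to see what is actually consuming the root file system. A minimal check (the command below is only a sketch; adjust the depth and paths for your environment):
[root@exa01db01 ~]# du -xh --max-depth=1 / 2>/dev/null | sort -h | tail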
The root file system is created on two system partitions, LVDbSys1 and LVDbSys2, and both system partitions must be kept at the same size. Only one system partition is active at any time; the other is inactive.
In this article, I will demonstrate how you can extend root file system size on Exadata Compute nodes online without any downtime.
Environment
Exadata X5-2 Half Rack
Exadata storage software version 12.1.2.3.4
Current Root File System Allocation
[root@exa01db01 ~]# df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VGExaDb-LVDbSys1
30G 25G 3.5G 88% /
List Logical Volumes and Their Details
lvm> lvs
lvm> lvs -o lv_name,lv_path,vg_name,lv_size
LV         Path                     VG       LSize
LVDbOra1   /dev/VGExaDb/LVDbOra1    VGExaDb  200.00g
LVDbSwap1  /dev/VGExaDb/LVDbSwap1   VGExaDb   24.00g
LVDbSys1   /dev/VGExaDb/LVDbSys1    VGExaDb   30.00g
LVDbSys2   /dev/VGExaDb/LVDbSys2    VGExaDb   30.00g
perflv     /dev/VGExaDb/perflv      VGExaDb    5.00g
Get the Current Active System Partition
[root@exa01db01 ~]# imageinfo
Kernel version: 2.6.39-400.294.1.el6uek.x86_64 #1 SMP Wed Jan 11 08:46:38 PST 2017 x86_64
Image kernel version: 2.6.39-400.294.1.el6uek
Image version: 12.1.2.3.4.170111
Image activated: 2017-04-08 12:14:23 -0500
Image status: success
System partition on device: /dev/mapper/VGExaDb-LVDbSys1
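The active system partition can differ between compute nodes, so it is worth checking all nodes at once. One possible check using dcli (assuming the dbs_group file used later in this article lists all compute nodes):
[root@exa01db01 ~]# dcli -g dbs_group -l root 'imageinfo | grep "System partition"'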
Steps to Increase Root File System on Compute Nodes:
- Get the Current Root File System Utilization
[root@exa01db01 ~]# df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VGExaDb-LVDbSys1
30G 25G 3.5G 88% /
- Get Current Logical Volume Configuration
lvm> lvs -o lv_name,lv_path,vg_name,lv_size
LV         Path                     VG       LSize
LVDbOra1   /dev/VGExaDb/LVDbOra1    VGExaDb  200.00g
LVDbSwap1  /dev/VGExaDb/LVDbSwap1   VGExaDb   24.00g
LVDbSys1   /dev/VGExaDb/LVDbSys1    VGExaDb   30.00g
LVDbSys2   /dev/VGExaDb/LVDbSys2    VGExaDb   30.00g
perflv     /dev/VGExaDb/perflv      VGExaDb    5.00g
- Ensure Root File System Can be Resized Online
The file system can be grown online only if the resize_inode feature appears in the file system features list:
[root@exa01db01 ~]# tune2fs -l /dev/mapper/VGExaDb-LVDbSys1 | grep resize_inode
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
All Nodes:
[root@exa01db01 ~]# dcli -g dbs_group -l root 'tune2fs -l /dev/mapper/VGExaDb-LVDbSys1 | grep resize_inode'
exa01db01: Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
exa01db02: Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
exa01db03: Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
exa01db04: Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
- Get the free space available in the Volume Group
lvm> vgdisplay -s
"VGExaDb" 1.63 TiB [295.00 GiB used / 1.34 TiB free]
- Extend both logical volumes using the lvextend command. Here we are extending the root file system by 50GB, so the file system becomes 80GB in total.
[root@exa01db01 ~]# lvextend -L +50G /dev/VGExaDb/LVDbSys1
Size of logical volume VGExaDb/LVDbSys1 changed from 30.00 GiB (7680 extents) to 80.00 GiB (20480 extents).
Logical volume LVDbSys1 successfully resized
[root@exa01db01 ~]# lvextend -L +50G /dev/VGExaDb/LVDbSys2
Size of logical volume VGExaDb/LVDbSys2 changed from 30.00 GiB (7680 extents) to 80.00 GiB (20480 extents).
Logical volume LVDbSys2 successfully resized
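Before resizing the file systems, you can confirm that both logical volumes now show the new size, for example:
[root@exa01db01 ~]# lvs -o lv_name,lv_size VGExaDb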
- Now resize the file system using the resize2fs command.
[root@exa01db01 ~]# resize2fs /dev/VGExaDb/LVDbSys1
resize2fs 1.43-WIP (20-Jun-2013)
Filesystem at /dev/VGExaDb/LVDbSys1 is mounted on /; on-line resizing required
old_desc_blocks = 2, new_desc_blocks = 4
Performing an on-line resize of /dev/VGExaDb/LVDbSys1 to 15728640 (4k) blocks.
The filesystem on /dev/VGExaDb/LVDbSys1 is now 15728640 blocks long.
Note that e2fsck cannot be run against LVDbSys1 because it is the active system partition and is currently mounted:
[root@exa01db02 ~]# e2fsck -f /dev/VGExaDb/LVDbSys1
e2fsck 1.43-WIP (20-Jun-2013)
/dev/VGExaDb/LVDbSys1 is mounted.
e2fsck: Cannot continue, aborting.
The resize2fs command for LVDbSys2 fails because this partition is inactive and not mounted, so resize2fs performs an offline resize and requires a clean file system check first. We must run e2fsck before resizing.
[root@exa01db01 ~]# resize2fs /dev/VGExaDb/LVDbSys2
resize2fs 1.43-WIP (20-Jun-2013)
Please run 'e2fsck -f /dev/VGExaDb/LVDbSys2' first.
[root@exa01db01 ~]# e2fsck -f /dev/VGExaDb/LVDbSys2
e2fsck 1.43-WIP (20-Jun-2013)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/VGExaDb/LVDbSys2: 122199/3932160 files (0.3% non-contiguous), 5496667/7864320 blocks
Now run the resize2fs command again:
[root@exa01db01 ~]# resize2fs /dev/VGExaDb/LVDbSys2
resize2fs 1.43-WIP (20-Jun-2013)
Resizing the filesystem on /dev/VGExaDb/LVDbSys2 to 15728640 (4k) blocks.
The filesystem on /dev/VGExaDb/LVDbSys2 is now 15728640 blocks long.
- Validate the root file system
[root@exa01db01 ~]# df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VGExaDb-LVDbSys1
80G 25G 55G 31% /
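The same steps must be repeated on the other compute nodes. Afterwards, a quick check across all nodes (again assuming dbs_group contains all compute node names) confirms the new size everywhere:
[root@exa01db01 ~]# dcli -g dbs_group -l root 'df -h /'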
Conclusion
In this article we demonstrated how to resize the root file system on an Exadata compute node online, without any outage. It is important to note that the root file system is created on two system partitions for high availability, which is why both LVDbSys1 and LVDbSys2 must be extended to the same size.