My root volume was running low on disk space, showing 100% used, so I increased it from 40GB to 50GB in the AWS Console.

To extend the partition, I ran the commands given in the [instructions page of AWS][1], but got a "no space left" error while extending the partition. I then found another guide to resolve the issue, but I may have chosen the wrong partition number and made things worse.

Now I don't know what to do next to fix this. The volume appears as 50GB in the AWS Console, but on the server the filesystem still shows roughly 40GB.
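For context, the AWS guide's procedure for a single-partition NVMe root volume boils down to two steps: grow partition 1 on the *disk* device, then resize the ext4 filesystem on the *partition* device. On Nitro/NVMe instances the disk is `/dev/nvme0n1` while the partition is `/dev/nvme0n1p1` (note the `p1` suffix), and the two are easy to mix up. A minimal sketch of what I understand those steps to be (device names assumed from my `lsblk` output below):

```shell
# Disk vs. partition naming on NVMe instances:
DISK=/dev/nvme0n1    # whole disk: growpart takes this plus a partition number
PART="${DISK}p1"     # partition holding the filesystem: resize2fs takes this

# 1. Grow partition 1 to fill the enlarged disk.
sudo growpart "$DISK" 1

# 2. Grow the ext4 filesystem to fill the enlarged partition.
sudo resize2fs "$PART"
```

(On older Xen-based instances the devices would be `/dev/xvda` and `/dev/xvda1` instead.)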
History of commands I ran (kept verbatim, typos included):

```
233  df -hT
234  lsblk
235  sudo growpart /dev/nvme0n1p1 1
236  df -h
237  lsblk
238  lsblk -f
239  sudo mount -o size=10M,rw,nodev,nosuid -t tmpfs tmpfs /tmp
240  sudo growpart /dev/nvme0n1p1 1
241  sudo growpart /dev/nvme0n1 1
242  df -h
243  lsblk
244  sudo resize2fs /dev/nvme0n1 1
245  sudo resize2fs /dev/nvme0n1p1 1
246  df -h
247  lsblk
248  sudo growpart /dev/nvme0n1 1
249  sudo resize2fs /dev/nvme0n1 1
250  df -h
251  lsblk
252  df -hT
253  lsblk
254  sudo growpart nvme0n1 1
255  lsblk
256  lslbk
257  lsblk
258  df -hT
259  lsblk
260  lsblk -f
261  sudo growpart /dev/nvme0n1 1
262  sudo resize2fs /dev/nvme0n1
263  lsblk
264  df -hT
265  sudo umount /tmp
266  df -hT
267  sudo growpart /dev/nvme01 1
268  sudo growpart /dev/nvme0n1 1
269  history
270  df -hT
271  lsblk
272  sudo growpart /dev/nvme0n1 1
273  sudo resize2fs /dev/nvme0n1
274  history
```
Output of `df -hT`:

```
ubuntu@ip-192-168-00-00:~$ df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/root      ext4       39G   36G  2.9G  93% /
devtmpfs       devtmpfs  7.8G     0  7.8G   0% /dev
tmpfs          tmpfs     7.8G     0  7.8G   0% /dev/shm
tmpfs          tmpfs     1.6G  1.1M  1.6G   1% /run
tmpfs          tmpfs     5.0M     0  5.0M   0% /run/lock
tmpfs          tmpfs     7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/loop2     squashfs   27M   27M     0 100% /snap/amazon-ssm-agent/5163
/dev/loop10    squashfs   68M   68M     0 100% /snap/lxd/22526
/dev/loop9     squashfs   56M   56M     0 100% /snap/core18/2344
/dev/loop3     squashfs   68M   68M     0 100% /snap/lxd/22753
/dev/loop11    squashfs   56M   56M     0 100% /snap/core18/2409
/dev/loop8     squashfs   26M   26M     0 100% /snap/amazon-ssm-agent/5656
/dev/loop12    squashfs   62M   62M     0 100% /snap/core20/1494
/dev/loop5     squashfs   62M   62M     0 100% /snap/core20/1518
/dev/loop0     squashfs   47M   47M     0 100% /snap/snapd/16010
/dev/loop6     squashfs   47M   47M     0 100% /snap/snapd/16292
tmpfs          tmpfs     1.6G     0  1.6G   0% /run/user/1000
```
Output of `lsblk`:

```
ubuntu@ip-192-168-192-236:~$ lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0         7:0    0    47M  1 loop /snap/snapd/16010
loop2         7:2    0  26.7M  1 loop /snap/amazon-ssm-agent/5163
loop3         7:3    0  67.8M  1 loop /snap/lxd/22753
loop4         7:4    0  61.9M  1 loop
loop5         7:5    0  61.9M  1 loop /snap/core20/1518
loop6         7:6    0    47M  1 loop /snap/snapd/16292
loop7         7:7    0  43.6M  1 loop
loop8         7:8    0  25.1M  1 loop /snap/amazon-ssm-agent/5656
loop9         7:9    0  55.5M  1 loop /snap/core18/2344
loop10        7:10   0  67.9M  1 loop /snap/lxd/22526
loop11        7:11   0  55.5M  1 loop /snap/core18/2409
loop12        7:12   0  61.9M  1 loop /snap/core20/1494
nvme0n1     259:0    0    50G  0 disk
└─nvme0n1p1 259:1    0    50G  0 part /
```
Output of `growpart`:

```
ubuntu@ip-192-168-192-236:~$ sudo growpart /dev/nvme0n1 1
NOCHANGE: partition 1 is size 104855519. it cannot be grown
```
Output of `resize2fs`:

```
ubuntu@ip-192-168-192-236:~$ sudo resize2fs /dev/nvme0n1
resize2fs 1.45.5 (07-Jan-2020)
resize2fs: Device or resource busy while trying to open /dev/nvme0n1
Couldn't find valid filesystem superblock.
```