
I'm running an AWS EC2 Ubuntu instance with an EBS root volume that was initially 8GB.

This is now 99.8% full, so I've followed AWS documentation instructions to increase the EBS volume to 16GB. I now need to extend my partition /dev/xvda1 to 16GB, but when I run the command

$ growpart /dev/xvda 1

I get the error

mkdir: cannot create directory ‘/tmp/growpart.2626’: No space left on device

I have tried

  1. rebooting the instance
  2. stopping the instance, and mounting a newly created EBS volume of size 16GB based on a snapshot of the old 8GB volume
  3. running docker system prune -a (this fails with "Cannot connect to the Docker daemon at unix:/var/run/docker.sock. Is the docker daemon running?"; starting the daemon with sudo dockerd fails with a "no space left on device" error as well)
  4. running resize2fs /dev/xvda1

all to no avail.

Running lsblk returns

NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0     7:0    0   89M  1 loop /snap/core/7713
loop1     7:1    0   18M  1 loop /snap/amazon-ssm-agent/1480
loop2     7:2    0 89.1M  1 loop /snap/core/7917
loop3     7:3    0   18M  1 loop /snap/amazon-ssm-agent/1455
xvda    202:0    0   16G  0 disk
└─xvda1 202:1    0    8G  0 part /

df -h returns

Filesystem      Size  Used Avail Use% Mounted on
udev            2.0G     0  2.0G   0% /dev
tmpfs           395M   16M  379M   4% /run
/dev/xvda1      7.7G  7.7G     0 100% /
tmpfs           2.0G     0  2.0G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/loop0       90M   90M     0 100% /snap/core/7713
/dev/loop1       18M   18M     0 100% /snap/amazon-ssm-agent/1480
/dev/loop2       90M   90M     0 100% /snap/core/7917
/dev/loop3       18M   18M     0 100% /snap/amazon-ssm-agent/1455
tmpfs           395M     0  395M   0% /run/user/1000

and df -i returns

Filesystem      Inodes  IUsed  IFree IUse% Mounted on
udev            501743    296 501447    1% /dev
tmpfs           504775    457 504318    1% /run
/dev/xvda1     1024000 421259 602741   42% /
tmpfs           504775      1 504774    1% /dev/shm
tmpfs           504775      3 504772    1% /run/lock
tmpfs           504775     18 504757    1% /sys/fs/cgroup
/dev/loop0       12827  12827      0  100% /snap/core/7713
/dev/loop1          15     15      0  100% /snap/amazon-ssm-agent/1480
/dev/loop2       12829  12829      0  100% /snap/core/7917
/dev/loop3          15     15      0  100% /snap/amazon-ssm-agent/1455
tmpfs           504775     10 504765    1% /run/user/1000
llamarama
  • Did you try it as a root user? After getting this error I used `sudo su` and followed the process at https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html – Emran Feb 16 '21 at 09:18

5 Answers


For anyone that has this problem, here's a link to the answer: https://aws.amazon.com/premiumsupport/knowledge-center/ebs-volume-size-increase/

Summary

  1. Run df -h to verify your root partition is full (100%)
  2. Run lsblk and then lsblk -f to get block device details
  3. sudo mount -o size=10M,rw,nodev,nosuid -t tmpfs tmpfs /tmp (mounts a small tmpfs over /tmp so growpart has writable scratch space)
  4. sudo growpart /dev/DEVICE_ID PARTITION_NUMBER
  5. lsblk to verify partition has expanded
  6. sudo resize2fs /dev/DEVICE_IDPARTITION_NUMBER (device ID and partition number concatenated, e.g. /dev/xvda1)
  7. Run df -h to verify your resized disk
  8. sudo umount /tmp
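Under the asker's layout (/dev/xvda with root partition 1 and an ext4 filesystem, as shown in the question's lsblk output), steps 3-8 above can be sketched as a single shell sequence; adjust the device names for your instance:

```shell
# Sketch of steps 3-8, assuming /dev/xvda, partition 1, and an ext4 root
# (device names taken from the question; substitute your own).
grow_root() {
    # Overlay a tiny tmpfs on /tmp so growpart has writable scratch space
    sudo mount -o size=10M,rw,nodev,nosuid -t tmpfs tmpfs /tmp &&
    sudo growpart /dev/xvda 1 &&   # grow the partition table entry
    sudo resize2fs /dev/xvda1 &&   # grow the ext4 filesystem into the new space
    df -h / &&                     # verify the extra space is visible
    sudo umount /tmp               # remove the temporary tmpfs overlay
}
```

Each step is chained with `&&` so a failure stops the sequence instead of running later steps against a half-finished resize.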
DeliciousElephant8
    In my case step #3 was the key: `sudo mount -o size=10M,rw,nodev,nosuid -t tmpfs tmpfs /tmp`. Step #6 didn't work at all; I had to reboot the instance itself to make it work. – d1jhoni1b Mar 26 '21 at 17:58
    How do you find out DEVICE_ID and PARTITION_NUMBER? My device is `nvme0n1p2` so I just guessed it to be `nvme0n1` and `2` respectively, and it seemed to work. But now I can't get the `resize2fs` command to work with those values. – David Callanan Sep 16 '21 at 09:49
  • Edit: the reason I can't use `resize2fs` is because it isn't installed, and I don't have storage space to install it. – David Callanan Sep 16 '21 at 09:56
    Thanks, this worked! I had to run `sudo apt-get autoclean` and `sudo apt-get autoremove` to get a little space and then I could run step 3. After step 8, I had to reboot `sudo reboot`. – Eli Holmes Dec 03 '21 at 01:10
  • Getting Temporary failure in name resolution at #3 – Datadimension May 15 '22 at 23:11
  • I wish I could save this comment in my profile somehow. – Marco da Fonseca Jun 03 '22 at 05:11
  • This worked for me! But I had to run `sudo xfs_growfs -d /` in step 6 since my file system is XFS-type – Jorge Ribeiro May 01 '23 at 04:49
  • What does `3` do ? Isn't `/tmp` already mounted under `tmpfs` ? – Hritik May 29 '23 at 14:59
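As the comments above note, the resize tool in step 6 depends on the root filesystem type (resize2fs for the ext family, xfs_growfs for XFS). A hedged sketch, using the asker's /dev/xvda1 as the root partition; substitute your own:

```shell
# Pick the resize tool based on the root filesystem type (ext4 vs XFS).
# /dev/xvda1 is the asker's root partition; substitute your own device.
resize_root_fs() {
    fstype=$(lsblk -no FSTYPE /dev/xvda1)
    case "$fstype" in
        ext2|ext3|ext4) sudo resize2fs /dev/xvda1 ;;  # ext family resizes by device
        xfs)            sudo xfs_growfs -d / ;;       # XFS resizes via the mount point
        *)              echo "unhandled filesystem: $fstype" >&2; return 1 ;;
    esac
}
```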

Just make sure to clear /tmp before running growpart /dev/xvda 1, by mounting a small tmpfs over it: sudo mount -o size=10M,rw,nodev,nosuid -t tmpfs tmpfs /tmp. That should do the trick.

Here is the full recap on resizing EBS volume:

Run df -h to verify your disk is full (100%)

/dev/xvda1 8.0G 8.0G 20K 100% /

Run lsblk

NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  20G  0 disk
`-xvda1 202:1    0   8G  0 part /

Mount a small tmpfs over /tmp so growpart has scratch space

sudo mount -o size=10M,rw,nodev,nosuid -t tmpfs tmpfs /tmp

Then grow the partition

sudo growpart /dev/xvda 1

CHANGED: partition=1 start=4096 old: size=16773087 end=16777183 new: size=41938911 end=41943007

Finally, run sudo reboot, wait for the instance to fully reload, SSH back into the instance, and run df -h; it should show the new space added:

/dev/xvda1       20G  8.1G   12G  41% /

Notice the new available space: the partition is no longer full (not at 100%; now at 41%).

d1jhoni1b

I came across this article http://www.daniloaz.com/en/partitioning-and-resizing-the-ebs-root-volume-of-an-aws-ec2-instance/ and solved it with ideas from there.

Steps taken:

  1. Note down root device (e.g. /dev/sda1)
  2. Stop instance
  3. Detach root EBS volume and then modify volume size if you haven't already
  4. Create an auxiliary instance (e.g. a t2.micro instance, or use an existing one if you wish)
  5. Attach the volume from step 3 to the auxiliary instance (it doesn't matter which device)
  6. In the auxiliary instance, run lsblk to ensure the volume has been attached correctly
  7. sudo growpart /dev/xvdf 1 (or similar, to expand the partition)
  8. lsblk to check that the partition has grown
  9. Detach the volume
  10. Attach the volume to your original instance, with device set to the one you noted down in Step 1
  11. Start the instance and then SSH into it
  12. If you still get the message "Usage of /: 99.8% of X.XX GB", run df -h to check the size of your root volume partition (e.g. /dev/xvda1)
  13. Run sudo resize2fs /dev/xvda1 (or similar) to resize your partition
  14. Run df -h to check that your Use% of /dev/xvda1 is no longer ~100%
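The detach/attach dance in steps 2-11 can also be scripted with the AWS CLI. A sketch with hypothetical placeholders: the volume and instance IDs below are made up, /dev/sdf is an assumed attachment device for the auxiliary instance, and /dev/sda1 stands in for the root device noted in step 1:

```shell
# Move the root volume to an auxiliary instance, grow it there, and move it back.
# All IDs below are hypothetical placeholders; substitute your own.
move_and_grow() {
    VOL=vol-0123456789abcdef0   # root EBS volume (placeholder)
    ORIG=i-0aaaaaaaaaaaaaaaa    # original instance (placeholder)
    AUX=i-0bbbbbbbbbbbbbbbb    # auxiliary instance (placeholder)

    aws ec2 stop-instances --instance-ids "$ORIG"
    aws ec2 wait instance-stopped --instance-ids "$ORIG"
    aws ec2 detach-volume --volume-id "$VOL"
    aws ec2 wait volume-available --volume-ids "$VOL"
    aws ec2 attach-volume --volume-id "$VOL" --instance-id "$AUX" --device /dev/sdf
    # ...SSH into the auxiliary instance and run: sudo growpart /dev/xvdf 1
    aws ec2 detach-volume --volume-id "$VOL"
    aws ec2 wait volume-available --volume-ids "$VOL"
    aws ec2 attach-volume --volume-id "$VOL" --instance-id "$ORIG" --device /dev/sda1
    aws ec2 start-instances --instance-ids "$ORIG"
}
```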
llamarama

First, I deleted the cache and unnecessary files:

sudo apt-get autoclean
sudo apt-get autoremove

After that, I followed this blog:

https://devopsmyway.com/how-to-extend-aws-ebs-volume-with-zero-downtime/
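Before deleting anything, it can help to see what is actually eating the space; a read-only sketch:

```shell
# Report the largest directories on the root filesystem (read-only, safe to run).
disk_usage_report() {
    df -h /   # overall usage of the root filesystem
    # -x stays on one filesystem; list the 15 largest directories
    sudo du -xh / 2>/dev/null | sort -rh | head -15
}
```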

Azametzin
    ❯ apt autoremove Reading package lists... Error! E: Write error - write (28: No space left on device) E: IO Error saving source cache E: The package lists or status file could not be parsed or opened. – doplumi Nov 10 '20 at 12:08
  • You can't `sudo apt-get autoclean` because no space left. :) – Vladimir Obrizan Apr 22 '22 at 20:02

I had the same issue; however, my MongoDB was on an AWS EC2 instance running CentOS 7, so I had to take a few different steps. For those like me reading this, try the following:

  1. Increase the EBS Volume using the AWS Panel;
  2. Extend the file system of NVMe EBS volume using this tutorial from AWS: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html;
  3. Since your disk hit 100%, it may be necessary to first mount a temporary tmpfs for scratch space: sudo mount -o size=10M,rw,nodev,nosuid -t tmpfs tmpfs /tmp;
  4. Remove mongod.lock. To find this file, look for the dbPath inside /etc/mongod.conf;
  5. Repair the database: sudo mongod --dbpath /dbPath/from/mongod.conf --repair. Depending on the database size, this repair might take some hours to complete. Mine took 3 hours.
  6. sudo chown -R mongod:mongod /dbPath/from/mongod.conf to make mongod own the folder again;
  7. Restart mongo: sudo service mongod restart;
  8. After that, your database should be reachable again.

Hope it helps.
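Assuming the dbPath read from /etc/mongod.conf is /var/lib/mongo (a placeholder; check your own config before running anything), the steps above look roughly like:

```shell
# Sketch of the MongoDB recovery above; /var/lib/mongo is an assumed dbPath,
# read the real one from /etc/mongod.conf first.
repair_mongo() {
    DBPATH=/var/lib/mongo                                 # placeholder dbPath
    # Give the tooling some scratch space on a full disk
    sudo mount -o size=10M,rw,nodev,nosuid -t tmpfs tmpfs /tmp
    sudo rm -f "$DBPATH/mongod.lock"                      # remove the stale lock file
    sudo mongod --dbpath "$DBPATH" --repair               # can take hours on large DBs
    sudo chown -R mongod:mongod "$DBPATH"                 # give ownership back to mongod
    sudo service mongod restart
}
```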

Huander Tironi