118

I have followed the steps for resizing an EC2 volume:

  1. Stopped the instance
  2. Took a snapshot of the current volume
  3. Created a new volume out of the previous snapshot with a bigger size in the same region
  4. Detached the old volume from the instance
  5. Attached the new volume to the instance at the same mount point

The old volume was 5GB and the one I created is 100GB. Now, when I restart the instance and run df -h, I still see this:

Filesystem            Size  Used Avail Use% Mounted on
/dev/xvde1            4.7G  3.5G 1021M  78% /
tmpfs                 296M     0  296M   0% /dev/shm

This is what I get when running

sudo resize2fs /dev/xvde1

The filesystem is already 1247037 blocks long.  Nothing to do!

If I run cat /proc/partitions I see

 202       64  104857600 xvde
 202       65    4988151 xvde1
 202       66     249007 xvde2

From what I understand, if I have followed the right steps, xvde should have the same data as xvde1, but I don't know how to use it.

How can I use the new volume, or umount xvde1 and mount xvde instead?

I cannot understand what I am doing wrong.

I also tried sudo xfs_growfs /dev/xvde1

xfs_growfs: /dev/xvde1 is not a mounted XFS filesystem

By the way, this is a Linux box with CentOS 6.2 x86_64.

starball
Wilman Arambillete

20 Answers

380

There's no need to stop the instance and detach the EBS volume to resize it anymore!

On 13-Feb-2017 Amazon announced: "Amazon EBS Update – New Elastic Volumes Change Everything"

The process works even if the volume to extend is the root volume of a running instance!


Say we want to increase the boot drive of an Ubuntu instance from 8G up to 16G "on the fly".

step-1) Log in to the AWS web console -> EBS -> right-click the volume you wish to resize -> "Modify Volume" -> change the "Size" field and click the [Modify] button



step-2) ssh into the instance and resize the partition:

Let's list the block devices attached to our box:
lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  16G  0 disk
└─xvda1 202:1    0   8G  0 part /

As you can see, /dev/xvda1 is still an 8 GiB partition on a 16 GiB device, and there are no other partitions on the volume. Let's use "growpart" to resize the 8G partition up to 16G:

# install "cloud-guest-utils" if it is not installed already
apt install cloud-guest-utils

# resize partition
growpart /dev/xvda 1

Let's check the result (you can see /dev/xvda1 is now 16G):

lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  16G  0 disk
└─xvda1 202:1    0  16G  0 part /

Lots of SO answers suggest using fdisk to delete and recreate the partitions, which is a nasty, risky, error-prone process, especially when we change the boot drive.
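growpart is also easy to guard: before touching anything you can confirm the partition is actually smaller than the disk. A minimal sketch under the assumption that the device names match this answer (needs_grow is a hypothetical helper; `lsblk -bno SIZE` prints sizes in bytes):

```shell
#!/bin/sh
# Sanity check before growpart: is there actually unallocated space
# between the end of the partition and the end of the disk?
# needs_grow takes two sizes in bytes (whole disk, partition) and
# succeeds only when the partition is smaller than the disk.
needs_grow() {
    disk_bytes=$1
    part_bytes=$2
    [ "$part_bytes" -lt "$disk_bytes" ]
}

# On a live box (device names as in this answer):
#   disk=$(lsblk -bno SIZE /dev/xvda  | head -n1)
#   part=$(lsblk -bno SIZE /dev/xvda1 | head -n1)
#   needs_grow "$disk" "$part" && sudo growpart /dev/xvda 1
```

If the partition already fills the disk, growpart is skipped and nothing is modified.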


step-3) resize the file system to grow all the way to fully use the new partition space
# Check before resizing ("Avail" shows 1.1G):
df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      7.8G  6.3G  1.1G  86% /

# resize filesystem
resize2fs /dev/xvda1

# Check after resizing ("Avail" now shows 8.7G!-):
df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1       16G  6.3G  8.7G  42% /

So we have zero downtime and lots of new space to use.
Enjoy!

Update: use sudo xfs_growfs /dev/xvda1 instead of resize2fs when the filesystem is XFS.
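Since the final step depends on the filesystem type, one defensive pattern is to dispatch on what lsblk reports rather than remembering which tool applies. A sketch, not a definitive script (choose_grow_cmd is a hypothetical helper name):

```shell
#!/bin/sh
# Map a filesystem type (as printed by `lsblk -no FSTYPE <partition>`)
# to the matching grow tool.
choose_grow_cmd() {
    case "$1" in
        ext2|ext3|ext4) echo "resize2fs" ;;
        xfs)            echo "xfs_growfs" ;;
        *)              echo "unknown" ;;
    esac
}

# Usage on a live box (note the argument difference: resize2fs takes
# the partition, while xfs_growfs takes the mount point, e.g. `/`):
#   fstype=$(lsblk -no FSTYPE /dev/xvda1)
#   choose_grow_cmd "$fstype"
```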

Dmitry Shevkoplyas
  • resizing partition was big help....!! The most wonderful thing was it worked even for root volume. – piyushmandovra Mar 21 '17 at 17:19
  • 4
    Will someone please accept this as the correct answer? Just because... it is. – eduardohl Apr 05 '17 at 06:17
  • Solid, thanks a ton... I was so sceptical in performing the steps mentioned in above answers but this was so damn cool ... – AAgg Apr 16 '17 at 06:01
  • 4
    Huh, the official docs don't mention growpart, which is why I couldn't get this to work before. Thanks! – Ibrahim May 19 '17 at 18:31
  • Thnak you so much....in the aws tutorial there is no mention to this command "growpart /dev/xvda 1" :(...you are a savior sir! – Kim Aragon Escobar May 30 '17 at 18:39
  • I had to reboot the instance after the `growpart` and before the `resize2fs` commands, otherwise this worked awesomely. Thanks. – Mike Purcell Jun 23 '17 at 20:00
  • Depending on your system, you might need `sudo` in front of `growpart` and `resize2fs` commands. By the way this should be the accepted answer! – WoLfPwNeR Jul 06 '17 at 21:01
  • Amazing answer, the I followed the AWS docs, but it didn't work. It was missing a step! – juan Isaza Jul 13 '17 at 16:07
  • This helped me solve my issue on a XenServer install. I had increased the size of the disk in Xen but couldn't easily make the partition take the entire drive. Growpart was the key, then resize2fs. Thanks. – Peter Jul 21 '17 at 23:12
  • The growpart command is the one missing from AWS document. However, after I executed the growpart command, even though the return message did say the partition size has been changed, the lsblk or df -h commands still show the same result. It only took effect after I reboot the server. – Lan Aug 20 '17 at 18:29
  • If I already have data in existing volume and after extend the volume the old data will be there itself. Right? – Shihas Mar 08 '18 at 07:52
  • 1
    @Shihas, yes. That's the whole point. Even bootable "root" mounted drive can be increased safely without reboot required! – Dmitry Shevkoplyas Mar 08 '18 at 13:08
  • FYI, before `resize2fs` it asks to scan for errors using `e2fsck` (`Please run 'e2fsck -f /dev/xxxN' first.`) – Savvas Radevic Apr 23 '18 at 19:26
  • this is the best. – dshun May 17 '18 at 16:08
  • my ec2 instance is debian 8.3 (jessie). For me the command to get growpart was apt install cloud-utils (not cloud-guest-utils). – nettie Jan 28 '19 at 21:50
  • Why do we need `growpart` plus `resize2fs`? Man page isn't clear about this. – dz902 Jul 19 '21 at 03:43
  • @theaws.blog the growpart increases the disk partition size, but your filesystem will not automatically grow to use added space. Think of filesystem as a book with own "table of contents", which lists all the pages (taken or yet free/available). The book "filesystem" resides on the book shelf "partition" and it was a perfect fit before you resized the partition (increased the shelf). After "growpart" your shelf is wider, but book will not become thicker/bigger on it's own. By "resize2fs" we adding more blank pages into the book. Note: for XFS filesystem use "xfs_growfs" instead of "resize2fs". – Dmitry Shevkoplyas Jul 23 '21 at 16:05
  • @DmitryShevkoplyas Thanks! I think I confused partition table with filesystem. – dz902 Jul 24 '21 at 16:20
  • resize2fs /dev/xvda1 worked for mee, even when growpart can't find /dev/xvda1 part – Alex Mar 09 '23 at 10:57
73

Thank you Wilman, your commands worked correctly. A small improvement needs to be considered if we are growing EBS volumes to larger sizes:

  1. Stop the instance
  2. Create a snapshot from the volume
  3. Create a new volume based on the snapshot increasing the size
  4. Check and remember the current volume's mount point (e.g. /dev/sda1)
  5. Detach current volume
  6. Attach the recently created volume to the instance, setting the exact mount point
  7. Restart the instance
  8. Access via SSH to the instance and run fdisk /dev/xvde

    WARNING: DOS-compatible mode is deprecated. It's strongly recommended to switch off the mode (command 'c') and change display units to sectors (command 'u')

  9. Hit p to show current partitions

  10. Hit d to delete the current partitions (if there is more than one, you have to delete them one at a time). NOTE: Don't worry, data is not lost
  11. Hit n to create a new partition
  12. Hit p to set it as primary
  13. Hit 1 to set the first cylinder
  14. Set the desired new space (if empty the whole space is reserved)
  15. Hit a to make it bootable
  16. Hit 1 and w to write changes
  17. Reboot instance OR use partprobe (from the parted package) to tell the kernel about the new partition table
  18. Log via SSH and run resize2fs /dev/xvde1
  19. Finally check the new space running df -h
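Several comments below report instances that failed to boot because the recreated partition started at cylinder/sector 1 instead of the original offset (2048 and 16065 are commonly seen). Before step 10, record the existing start sector, e.g. with sudo fdisk -l -u /dev/xvde, and reuse that value in step 13. A small helper to pull the start sector out of one fdisk -l -u partition line (a sketch; the column layout assumes util-linux fdisk, where the boot flag "*" occupies its own column when set):

```shell
#!/bin/sh
# Extract the start sector from a single `fdisk -l -u` partition line,
# so the same value can be reused when recreating the partition.
# When the bootable flag is set, "*" is the second column and the
# start sector shifts to the third.
start_sector() {
    echo "$1" | awk '{ print (($2 == "*") ? $3 : $2) }'
}

# Example lines in the shape fdisk prints on CentOS 6:
start_sector '/dev/xvde1   *      2048   4196351   2097152  83  Linux'   # -> 2048
start_sector '/dev/xvde1       16065  10474379   5229157  83  Linux'     # -> 16065
```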
GameScripting
dcf
  • 1
    _"WARNING: DOS-compatible mode is deprecated. It's strongly recommended to switch off the mode (command 'c') and change display units to sectors (command 'u')"_ This was not necessary for me (Ubuntu 13.04). It had already switched off DOS compatibility and used Sectors by default. Pressing `c` and `u` actually switched TO the deprecated modes. – wisbucky Oct 22 '13 at 00:04
  • 6
    The solution worked brilliant but the instance was stuck on "1/2 checks passed" with an exclamation sign (ReadHat 6.5). To fix this I have set the **"first cylinder" to 16** (like was previously). After that the instance started normal with "2/2 checks passed". Hope this helps someone... – user3586516 Apr 29 '14 at 18:23
  • 1
    I too had to change first cylinder, but I had to change it to 2048. I would recommend checking your current partition setting before deleting it. – Doyley Nov 10 '14 at 14:58
  • 1
    putty give connection error after reboot the instance – Ozan Jan 20 '15 at 14:38
  • 9
    After I rebooted my instance, I'm unable to connect via SSH. Connection times out and the aws console shows that it cannot start its Status Checks. I think it is dead. Any idea what to do? – Richard Jan 27 '15 at 05:48
  • I can't understand how the data (the OS) will not be lost with partition delete. Can you please explain me ? Thanks – user345602 Feb 21 '15 at 20:23
  • 1
    I am also facing what Richard is facing, can't connect via SSH and can't start the status check. It is dead, not sure how to proceed further :( – Nihilarian Sep 01 '15 at 12:38
  • On Centos I had to switch off DOS mode. I also switched to display units in sectors which made the lowest available when creating a partition 2048 as @Doyley mentioned. After following the rest of the instructions I then mounted the partition and ran xfs_growfs ( `mkdir test`, `mount /dev/sdh1 test/`, `xfs_growfs test/`, `umount test` ) - probably a better place to mount it than a random `test` folder but you get the idea. with dos mode on or the wrong sector you may get `wrong fs type, bad option, bad superblock` when trying to mount. double check the fdisk steps and try again. – cwd Sep 16 '15 at 14:53
  • Absolutely top answer, this worked perfectly for me - better than the AWS documentation. – bobmarksie Oct 03 '15 at 16:41
  • 1
    at the warning in **Step 8**, press **u** to move to change display units to sectors. Following that pressing **p** will show you start sector value (for me it was 2048) which can be used in Step 13 instead of putting 1 or 16 or any random number. Refer this - http://stackoverflow.com/questions/26770655/ec2-storage-attached-at-sda-is-dev-xvde1-cannot-resize – dhalsumit Jan 26 '16 at 10:01
  • Just ensure the start sector on the new partition is the same as the previous one you are deleting. This solved my problem with only 1/2 status checks being passed. – Yon Kornilov Aug 04 '16 at 17:51
  • Actually, I figured out don't go for delete partition option if show partition list values in the partition table, cause if the list is there then it literally deletes the partition, so data will be lost even if answer says "It won't delete". There is a way to do extend partition size, check at the bottom there are other utilities which will help in extending your partition size smoothly. – piyushmandovra Mar 21 '17 at 17:28
  • @Richard I have a workaround for your problem if you have any AMI backup of the machine which is dead, you can basically create a new volume from the AMI snapshot, after that stop machine deattach existing volume and reattached volume you created on same address and reboot. You should be able to access machine now. I got into the same problem and above solution worked for me, so felt like sharing with you. – piyushmandovra Mar 21 '17 at 17:35
  • 5
    This answer is now deprecated now that AWS supports online resizing for EBS volumes. – Dale C. Anderson Jul 06 '17 at 17:49
  • Wait! As noted, this answer is now deprecated - see the much simpler answer from Dmitry Shevkoplyas. It worked perfectly for me. – Russell G Feb 12 '19 at 21:41
52

Perfect comment by jperelli above.

I faced the same issue today. The AWS documentation does not clearly mention growpart. I figured it out the hard way, and indeed the two commands worked perfectly on M4.large & M4.xlarge with Ubuntu:

sudo growpart /dev/xvda 1
sudo resize2fs /dev/xvda1
pacholik
Sachin Shintre
  • the second answer for attaching and this answer is for resizing – Adiii May 31 '18 at 07:19
  • Amazing! worked on my t2.small instance. Whew. Thought it would be bloodier than that. Thanks! – publicknowledge Dec 06 '18 at 02:09
  • I can't seem to install cloud-guest-utils which contains growpart. Linux version 3.16.0-4-amd64 – nettie Jan 28 '19 at 21:30
  • I was facing same issue but after run sudo resize2fs /dev/xvda1 now its reflecting Thanks – Fawwad Dec 18 '20 at 07:00
  • So I keep running ```sudo resize2fs /dev/...``` on an Ubuntu machine that should just do this only to see that I have to use a toll called growpart! AWS was this necessary? You saved my day – George Udosen May 26 '22 at 12:27
16

[SOLVED]

This is what had to be done:

  1. Stop the instance
  2. Create a snapshot from the volume
  3. Create a new volume based on the snapshot increasing the size
  4. Check and remember the current volume's mount point (e.g. /dev/sda1)
  5. Detach current volume
  6. Attach the recently created volume to the instance, setting the exact mount point
  7. Restart the instance
  8. Access via SSH to the instance and run fdisk /dev/xvde
  9. Hit p to show current partitions
  10. Hit d to delete the current partitions (if there is more than one, you have to delete them one at a time). NOTE: Don't worry, data is not lost
  11. Hit n to create a new partition
  12. Hit p to set it as primary
  13. Hit 1 to set the first cylinder
  14. Set the desired new space (if empty the whole space is reserved)
  15. Hit a to make it bootable
  16. Hit 1 and w to write changes
  17. Reboot instance
  18. Log via SSH and run resize2fs /dev/xvde1
  19. Finally check the new space running df -h

This is it

Good luck!

wisbucky
Wilman Arambillete
  • 1
    In Amazon EBS volumes it seems to be important to use the same mount point in resize2fs as you use with fdisk. df shows up something like /dev/xvda1 as the attached EBS volume, but the resize2fs command only worked for me when I used the /dev/sdf1 identifier, which I had used when I did the new partition in fdisk. – Garreth McDaid Feb 05 '14 at 17:26
  • This is in the AWS documentation. What is poor is their procedures are still incomplete after 3 years of this going on. If you have an image you can fall back, sure. It is always possible to temporarily hang the new disk from an instance running a desktop as well, but needing it to be mounted for a resize can be a problem if you were thinking of using gparted. gcloud resizes on the fly. – mckenzm Apr 23 '16 at 23:39
  • My storage device (/dev/xvda1) started at sector 16065, not sector 1. So step 13 (Hit 1 to set the first cylinder) had to be 16065 in my case. – Simon Paarlberg Jan 16 '17 at 21:40
  • Don't go with these solution you might loose your data. Actually, I figured out don't go for delete partition option if show partition list values in the partition table, cause if the list is there then it literally deletes the partition, so data will be lost even if answer says "It won't delete". There is a way to do extend partition size, check at the bottom there are other utilities which will help in extending your partition size smoothly. – piyushmandovra Mar 21 '17 at 17:29
  • I think this is best answer xfs_growfs / – Kishore Kumar May 17 '22 at 11:36
9

This will work for an XFS file system. Just run this command:

xfs_growfs /
xlecoustillier
Saurabh Chandra Patel
8

Once you modify the size of your EBS,

List the block devices

sudo lsblk

NAME        MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1     259:2    0  20G  0 disk
|-nvme0n1p1 259:3    0   1M  0 part
`-nvme0n1p2 259:4    0  10G  0 part /

Expand the partition

Suppose you want to extend the second partition mounted on /,

sudo growpart /dev/nvme0n1 2

If all the space in your root volume is used up and you're not even able to write to /tmp (i.e. growpart fails because there is no space left):

  1. temporarily mount a /tmp volume: sudo mount -o size=10M,rw,nodev,nosuid -t tmpfs tmpfs /tmp
  2. unmount after the complete resize is done: sudo umount -l /tmp

Verify the new size

NAME        MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1     259:2    0  20G  0 disk
|-nvme0n1p1 259:3    0   1M  0 part
`-nvme0n1p2 259:4    0  20G  0 part /

Resize the file-system

For XFS (use the mount point as argument)

sudo xfs_growfs /

For EXT4 (use the partition name as argument)

sudo resize2fs /dev/nvme0n1p2

Chinmaya Pati
7
  1. Log in to the AWS web console -> EBS -> right-click the volume you wish to resize -> "Modify Volume" -> change the "Size" field and click the [Modify] button

  2. growpart /dev/xvda 1

  3. resize2fs /dev/xvda1

This is a cut-to-the-chase version of Dmitry Shevkoplyas' answer. The AWS documentation does not show the growpart command. This works fine for the Ubuntu AMI.

jperelli
7
  1. sudo growpart /dev/xvda 1
  2. sudo resize2fs /dev/xvda1

The above two commands saved me time on AWS Ubuntu EC2 instances.

HD298
5

Just in case anyone is here for GCP (Google Cloud Platform), try this:

sudo growpart /dev/sdb 1
sudo resize2fs /dev/sdb1
yunus
  • Do you know why it happens if it doesnt grow? I executed this on 2 machines with each a secundary disk (found a post with this information), 1 of the disks grew but the other didnt. – abr May 26 '21 at 20:46
3

In case anyone ran into this issue with 100% usage and no space left even to run the growpart command (because it creates a file in /tmp):

Here is a command that I found that works even while the EBS volume is in use, and even if your EC2 instance has no space left and is at 100%:

/sbin/parted ---pretend-input-tty /dev/xvda resizepart 1 yes 100%

See this site:

https://www.elastic.co/blog/autoresize-ebs-root-volume-on-aws-amis

Bot
  • This command should be followed by `sudo resize2fs /dev/xvda1` to update `/etc/fstab`, only after that `df -h` will show the grown disk space – karmendra Mar 01 '19 at 14:16
3

I faced a similar issue with an Ubuntu system on EC2.

First, I checked the filesystem:

lsblk

Then, after increasing the volume size from the console, I ran the command below:

sudo growpart /dev/nvme0n1 1

This will show the change in the lsblk output.

Then I could extend the FS with:

sudo resize2fs /dev/nvme0n1p1

Finally, verify it with the df -h command.

Rajeev
2

Did you make a partition on this volume? If you did, you will need to grow the partition first.

chantheman
  • no I did not. Should I?How do I do that? Remember this new volume I have attached is supposed to have all the previous data because it is a snapshot of the original volume – Wilman Arambillete Jun 13 '12 at 16:00
  • No. But I have gotten that error if there was a partition attached. Go and double check you made the volume the correct size, and double check you mounted the new volume. – chantheman Jun 13 '12 at 17:06
  • Also, you don't have to stop the instance to do this. It is safe to if you have writes on that volume, but you can snapshot it with the instance running. – chantheman Jun 13 '12 at 17:06
2

Thanks @Dmitry, it worked like a charm with a small change to match my file system.

source: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-expand-volume.html#recognize-expanded-volume-linux

Then use the following command, substituting the mount point of the filesystem (XFS file systems must be mounted to resize them):

[ec2-user ~]$ sudo xfs_growfs -d /mnt
meta-data=/dev/xvdf              isize=256    agcount=4, agsize=65536 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=262144, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 262144 to 26214400

Note: If you receive an "xfsctl failed: Cannot allocate memory" error, you may need to update the Linux kernel on your instance. For more information, refer to your specific operating system documentation. If you receive a "The filesystem is already nnnnnnn blocks long. Nothing to do!" error, see Expanding a Linux Partition.

user2125117
1

The bootable flag (a) didn't work in my case (EC2, CentOS 6.5), so I had to re-create the volume from the snapshot. After repeating all steps EXCEPT the bootable flag, everything worked flawlessly and I was able to run resize2fs afterwards. Thank you!

sandr
1

As for my EC2 instance, growpart gives me:

growpart /dev/xvda 1
FAILED: /dev/xvda: does not exist

So I just used this after resizing on the AWS management website, and it worked for me:

resize2fs /dev/xvda1
Alex
0

Don't have enough rep to comment above, but note per the comments that you can corrupt your instance if you start at 1. If you hit 'u' after starting fdisk, before listing your partitions with 'p', it will in fact give you the correct start number so you don't corrupt your volumes. For the CentOS 6.5 AMI, as also mentioned above, 2048 was correct for me.

Reece
0

Put a space between the device name and the partition number, e.g.:

sudo growpart /dev/xvda 1

To extend the partition on each volume, use the growpart command as shown. Note that there is a space between the device name and the partition number.

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html

mwafi
0

In case the EC2 Linux disk size does not match the attached volume size of the device...

I had attached two devices

/dev/sda1  8GB
/dev/xvda 20GB

but lsblk kept insisting

xvda     202:0    0     8G  0 disk

Then it dawned on me that sda1 could be shadowing the xvda device, and I renamed it to

/dev/sda1  8GB
/dev/xvde 20GB

and voilà, lsblk now shows:

xvda     202:0    0     8G  0 disk
xvde     202:64   0    20G  0 disk

This behaviour may depend on your OS/kernel...

Wolfgang Kuehn
0

I modified the existing volume from 8 GB to 20 GB for the above issue. After that:

df -h
lsblk


sudo mount -o size=10M,rw,nodev,nosuid -t tmpfs tmpfs /tmp

sudo growpart /dev/xvda 1

Then, based on the OS or file system:

sudo resize2fs /dev/xvda1

sudo umount /tmp
0

Just one detail: you don't need to wait until the "Optimizing" volume state is completed.

As mentioned here:

Before you begin (Extend a Linux file system). Confirm that the volume modification succeeded and that it is in the optimizing or completed state. For more information, see Monitor the progress of volume modifications. (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html)

And here:

Size changes usually take a few seconds to complete and take effect after the volume has transitioned to the Optimizing state. (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-volume-modifications.html)

You also don't need to interrupt your instance to resize it. You can do it on the fly. But then you do need to run the growpart command as mentioned in other answers, before continuing with other resize commands.
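The gating logic behind that advice can be scripted against the CLI with aws ec2 describe-volumes-modifications. A sketch of the check (ready_to_resize is a hypothetical helper; the volume id is a placeholder):

```shell
#!/bin/sh
# Decide whether it is safe to proceed with growpart/resize commands,
# given the ModificationState reported by the AWS API. Per the docs
# quoted above, both "optimizing" and "completed" are fine.
ready_to_resize() {
    case "$1" in
        optimizing|completed) return 0 ;;
        *)                    return 1 ;;
    esac
}

# Example usage (volume id is a placeholder):
#   state=$(aws ec2 describe-volumes-modifications \
#             --volume-ids vol-0123456789abcdef0 \
#             --query 'VolumesModifications[0].ModificationState' \
#             --output text)
#   ready_to_resize "$state" && sudo growpart /dev/xvda 1
```

While the state is still "modifying", the helper fails and the resize commands are held back.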

bvdb