Live resizing an EBS volume on EC2?


I have a tiny EC2 instance for small experiments and needed to resize its root EBS volume from 8 GB to 10 GB. There is a good official guide for this: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-modify-volume.html, but it didn't quite work for me. Below is what I tried and how I solved it.

growpart error NOCHANGE

The aws CLI tool is great and I was able to request the resize. The next step was to SSH into the VPS, run lsblk, and grow the first partition.
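For reference, the resize request looks something like this; the volume ID is a placeholder (look up the real one with aws ec2 describe-volumes), and the second command polls the state of the modification:

$ aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 10
$ aws ec2 describe-volumes-modifications --volume-ids vol-0123456789abcdef0

Back on the instance, here's what growpart gave me: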

$ sudo growpart /dev/nvme0n1 1
NOCHANGE: partition 1 could only be grown by -33 [fudge=2048]

All the posts I found online mostly reiterated the steps from the guide; nothing anywhere mentioned such an error. The closest I could find was this thread on the AWS forum, https://forums.aws.amazon.com/thread.jspa?threadID=293496, which says:

When I used the same instance type (c5.2xlarge) and the same OS (Debian Linux 8 (jessie)) as you, I faced the same issue as you. However, when I repeated the same test using Amazon Linux AMI and I wasn’t able to reproduce the issue. Hence the issue is related to the OS and its configuration, rather than the C5 instance type per-se.

In my case, I'm using an Arch Linux image from https://www.uplinklabs.net/projects/arch-linux-on-ec2/. Based on the guides, I should have been able to resize the volume without detaching it, but that didn't happen here; maybe this kernel lacks some necessary patches. I finally noticed that lsblk still reported the old size:

$ lsblk
NAME        MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1     259:0    0   8G  0 disk
└─nvme0n1p1 259:1    0   8G  0 part /

And that’s obviously the cause of the error above.
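In hindsight, it may be possible to nudge the kernel into re-reading the device size without a reboot by rescanning the NVMe namespaces. A sketch, assuming the nvme-cli package is available (I haven't verified this on this AMI, and I ended up rebooting instead):

$ sudo nvme ns-rescan /dev/nvme0
$ lsblk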

growpart “failed to resize”

When I restarted the instance, the nvme0n1 device was indeed 10 GB, whereas the first partition was still 8 GB. Here we go:

$ sudo growpart -v -v -v /dev/nvme0n1 1
update-partition set to true
resizing 1 on /dev/nvme0n1 using resize_sfdisk_dos
running[sfd_list][erronly] sfdisk --list --unit=S /dev/nvme0n1
20971520 sectors of 512. total size=10737418240 bytes
running[sfd_dump][erronly] sfdisk --unit=S --dump /dev/nvme0n1
## sfdisk --unit=S --dump /dev/nvme0n1
label: dos
label-id: 0xba136a53
device: /dev/nvme0n1
unit: sectors
sector-size: 512

/dev/nvme0n1p1 : start=        2048, size=    16775168, type=83
padding 33 sectors for gpt secondary header
max_end=20971487 tot=20971520 pt_end=16777216 pt_start=2048 pt_size=16775168
attempt to resize /dev/nvme0n1 failed. sfdisk output below:
| Backup files:
|          MBR (offset     0, size   512): /tmp/growpart.VTEND5/orig.save-nvme0n1-0x00000000.bak
|
| Disk /dev/nvme0n1: 10 GiB, 10737418240 bytes, 20971520 sectors
| Disk model: Amazon Elastic Block Store
| Units: sectors of 1 * 512 = 512 bytes
| Sector size (logical/physical): 512 bytes / 512 bytes
| I/O size (minimum/optimal): 512 bytes / 512 bytes
| Disklabel type: dos
| Disk identifier: 0xba136a53
|
| Old situation:
|
| Device         Boot Start      End  Sectors Size Id Type
| /dev/nvme0n1p1       2048 16777215 16775168   8G 83 Linux
|
| >>> Script header accepted.
| >>> Script header accepted.
| >>> Script header accepted.
| >>> Script header accepted.
| >>> line 5: unsupported command
|
| New situation:
| Disklabel type: dos
| Disk identifier: 0xba136a53
|
| Device         Boot Start      End  Sectors Size Id Type
| /dev/nvme0n1p1       2048 16777215 16775168   8G 83 Linux
| Leaving.
|
FAILED: failed to resize
***** WARNING: Resize failed, attempting to revert ******
512+0 records in
512+0 records out
512 bytes copied, 0.000578528 s, 885 kB/s
***** Restore appears to have gone OK ****

Uh-oh, it didn't work. growpart is mentioned in all the guides, so I thought it was AWS-specific, but it's actually a generic script (Debian ships it in cloud-guest-utils), as the mkinitcpio-growrootfs package mentions here: https://github.com/GregSutcliffe/aur-projects/tree/master/mkinitcpio-growrootfs. This package was installed in the AMI by default, and the README also mentions the grow hook for mkinitcpio, which should resize the root FS at boot. The hook was already set up, but it clearly didn't work.
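To double-check that the hook is actually enabled, the HOOKS array in /etc/mkinitcpio.conf is the place to look; a quick sketch (the exact hook name comes from the package README):

$ grep '^HOOKS' /etc/mkinitcpio.conf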

https://aur.archlinux.org/packages/mkinitcpio-growrootfs/#comment-493684 has an old comment saying the script doesn't work with newer sfdisk, but the script from the fork linked there didn't work for me either. I couldn't find much more information about this. resize2fs failed too, of course, since the filesystem can't outgrow its partition.
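An untested aside: newer sfdisk can grow a single partition on its own, which in theory could stand in for growpart here. In sfdisk's input, ", +" means "extend this partition to the maximum", and --force may be needed because the disk is in use:

$ echo ', +' | sudo sfdisk --no-reread -N 1 /dev/nvme0n1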

cfdisk to the rescue

I was about to resort to the hassle of starting a new VPS, attaching the volume to it, and resizing it there, when I launched sudo cfdisk /dev/nvme0n1. It had an option to resize the first partition to 10G; write the partition table, and done!

$ lsblk
NAME        MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1     259:0    0  10G  0 disk
└─nvme0n1p1 259:1    0  10G  0 part /

$ sudo resize2fs /dev/nvme0n1p1
resize2fs 1.45.6 (20-Mar-2020)
Filesystem at /dev/nvme0n1p1 is mounted on /; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 2
The filesystem on /dev/nvme0n1p1 is now 2621184 (4k) blocks long.
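As a final sanity check (ext4 overhead means df reports slightly less than the full 10 GB):

$ df -h /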

I tried to use fdisk later, but apparently it can't resize partitions in place; the usual workaround is sketched below.
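For completeness, the classic fdisk workaround is to delete the partition and re-create it with the same start sector. Roughly, the keystrokes would be as follows; this is untested here, and the signature prompt only appears in newer fdisk versions:

$ sudo fdisk /dev/nvme0n1
Command (m for help): d
Command (m for help): n
(accept the same first sector, 2048, and the default last sector)
Do you want to remove the signature? [Y]es/[N]o: N
Command (m for help): w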
