Configure and use LVM thin provisioning.
Thin volumes are useful for overprovisioning storage, especially for containers.
Preparations
Install LVM utilities.
$ sudo apt install lvm2
Initialize physical volume.
$ sudo pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created.
Create a volume group.
$ sudo vgcreate vg01 /dev/sdb
Volume group "vg01" successfully created
Create thin pool
Create thin pool.
$ sudo lvcreate --thin --size 5G --chunksize 256K --poolmetadatasize 1G vg01/thin_pool
Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
Logical volume "thin_pool" created.
Display information about logical volumes.
$ sudo lvs -a
LV                VG   Attr       LSize Pool      Origin Data%  Meta%  Move Log Cpy%Sync Convert
[lvol0_pmspare]   vg01 ewi------- 1.00g
thin_pool         vg01 twi-a-tz-- 5.00g                  0.00   1.57
[thin_pool_tdata] vg01 Twi-ao---- 5.00g
[thin_pool_tmeta] vg01 ewi-ao---- 1.00g
Create thin volumes
Create thin volumes.
$ sudo lvcreate --thinpool vg01/thin_pool --name volume_cerberus --virtualsize 3G
Logical volume "volume_cerberus" created.
$ sudo lvcreate --thinpool vg01/thin_pool --name volume_kraken --virtualsize 3G
WARNING: Sum of all thin volume sizes (6.00 GiB) exceeds the size of thin pool vg01/thin_pool (5.00 GiB).
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume "volume_kraken" created.
$ sudo lvcreate --thinpool vg01/thin_pool --name volume_hydra --virtualsize 3G
WARNING: Sum of all thin volume sizes (9.00 GiB) exceeds the size of thin pool vg01/thin_pool (5.00 GiB).
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume "volume_hydra" created.
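The three lvcreate calls above can also be folded into a loop. A minimal sketch; the function name is mine, and the LVCREATE override only exists so the commands can be previewed with LVCREATE=echo instead of executed:

```shell
# create_thin_volumes: create one 3G thin volume per given name
# in the vg01/thin_pool pool, matching the example above.
# Set LVCREATE=echo to preview the commands instead of running them.
create_thin_volumes() {
  local name
  for name in "$@"; do
    ${LVCREATE:-sudo lvcreate} --thinpool vg01/thin_pool \
        --name "volume_${name}" --virtualsize 3G
  done
}
```

For example, `create_thin_volumes cerberus kraken hydra` would issue the same three commands as above.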
Create an ext4 filesystem.
$ sudo mkfs.ext4 /dev/mapper/vg01-volume_cerberus
mke2fs 1.46.1 (9-Feb-2021)
Discarding device blocks: done
Creating filesystem with 786432 4k blocks and 196608 inodes
Filesystem UUID: 075ebaeb-36ef-4532-9826-7b926d3a2186
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
$ sudo mkfs.ext4 /dev/mapper/vg01-volume_kraken
mke2fs 1.46.1 (9-Feb-2021)
Discarding device blocks: done
Creating filesystem with 786432 4k blocks and 196608 inodes
Filesystem UUID: d6fa7e01-5262-43d7-9718-d10a0e109cff
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
$ sudo mkfs.ext4 /dev/mapper/vg01-volume_hydra
mke2fs 1.46.1 (9-Feb-2021)
Discarding device blocks: done
Creating filesystem with 786432 4k blocks and 196608 inodes
Filesystem UUID: fc9f511f-4310-4318-9b97-07b594e32dfb
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
Create mount directories.
$ sudo mkdir /opt/{cerberus,kraken,hydra}
Mount thin volumes.
$ sudo mount /dev/mapper/vg01-volume_cerberus /opt/cerberus/
$ sudo mount /dev/mapper/vg01-volume_kraken /opt/kraken/
$ sudo mount /dev/mapper/vg01-volume_hydra /opt/hydra/
Inspect disk space usage.
$ df -h /dev/mapper/vg01-volume_*
Filesystem                        Size  Used Avail Use% Mounted on
/dev/mapper/vg01-volume_cerberus  2.9G   24K  2.8G   1% /opt/cerberus
/dev/mapper/vg01-volume_hydra     2.9G   24K  2.8G   1% /opt/hydra
/dev/mapper/vg01-volume_kraken    2.9G   24K  2.8G   1% /opt/kraken
Trigger an out-of-space situation
Create files to trigger an out-of-space situation.
$ sudo dd if=/dev/zero of=/opt/cerberus/file bs=1M count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 2.2239 s, 966 MB/s
$ sudo dd if=/dev/zero of=/opt/kraken/file bs=1M count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 3.01049 s, 713 MB/s
$ sudo dd if=/dev/zero of=/opt/hydra/file bs=1M count=2048
^C
Killed
Inspect disk space usage.
$ df -h /dev/mapper/vg01-volume_*
Filesystem                        Size  Used Avail Use% Mounted on
/dev/mapper/vg01-volume_cerberus  2.9G  2.1G  734M  74% /opt/cerberus
/dev/mapper/vg01-volume_kraken    2.9G  2.1G  734M  74% /opt/kraken
/dev/mapper/vg01-volume_hydra     2.9G  745M  2.0G  27% /opt/hydra
$ sudo lvs -a
LV                VG   Attr       LSize Pool      Origin Data%  Meta%  Move Log Cpy%Sync Convert
[lvol0_pmspare]   vg01 ewi------- 1.00g
thin_pool         vg01 twi-aotzD- 5.00g                  100.00 1.63
[thin_pool_tdata] vg01 Twi-ao---- 5.00g
[thin_pool_tmeta] vg01 ewi-ao---- 1.00g
volume_cerberus   vg01 Vwi-aotz-- 3.00g thin_pool        70.43
volume_hydra      vg01 Vwi-aotz-- 3.00g thin_pool        25.80
volume_kraken     vg01 Vwi-aotz-- 3.00g thin_pool        70.43
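A full pool (the D in the attr field marks out-of-data mode) can stall or kill writers, as the dd run above shows, so it is worth watching data_percent before it reaches 100. A minimal monitoring sketch; the function names and the 80% default threshold are my own choices:

```shell
# pool_above: succeed when a data_percent value (e.g. "70.43")
# is at or above the given integer threshold.
pool_above() {
  [ "${1%.*}" -ge "$2" ]
}

# check_pool: warn when the named thin pool crosses the threshold.
# lvs prints data_percent with surrounding whitespace, so strip it.
check_pool() {
  local pct
  pct=$(sudo lvs --noheadings -o data_percent "$1" | tr -d ' ')
  if pool_above "$pct" "${2:-80}"; then
    echo "WARNING: $1 is ${pct}% full"
  fi
}
```

Running `check_pool vg01/thin_pool 80` from cron would have flagged this pool well before writes started failing.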
Reclaim used space
Delete created files.
$ sudo rm /opt/{cerberus,kraken,hydra}/file
Inspect disk space usage.
$ df -h /dev/mapper/vg01-volume_*
Filesystem Size Used Avail Use% Mounted on /dev/mapper/vg01-volume_cerberus 2.9G 24K 2.8G 1% /opt/cerberus /dev/mapper/vg01-volume_hydra 2.9G 24K 2.8G 1% /opt/hydra /dev/mapper/vg01-volume_kraken 2.9G 24K 2.8G 1% /opt/kraken
Notice that the volume usage does not change.
$ sudo lvs -a
LV                VG   Attr       LSize Pool      Origin Data%  Meta%  Move Log Cpy%Sync Convert
[lvol0_pmspare]   vg01 ewi------- 1.00g
thin_pool         vg01 twi-aotzD- 5.00g                  100.00 1.63
[thin_pool_tdata] vg01 Twi-ao---- 5.00g
[thin_pool_tmeta] vg01 ewi-ao---- 1.00g
volume_cerberus   vg01 Vwi-aotz-- 3.00g thin_pool        70.43
volume_hydra      vg01 Vwi-aotz-- 3.00g thin_pool        25.80
volume_kraken     vg01 Vwi-aotz-- 3.00g thin_pool        70.43
Discard unused blocks on mounted filesystems.
$ sudo fstrim /opt/cerberus
$ sudo fstrim /opt/kraken
$ sudo fstrim /opt/hydra
Inspect volume usage.
$ sudo lvs -a
LV                VG   Attr       LSize Pool      Origin Data%  Meta%  Move Log Cpy%Sync Convert
[lvol0_pmspare]   vg01 ewi------- 1.00g
thin_pool         vg01 twi-aotz-- 5.00g                  6.76   1.58
[thin_pool_tdata] vg01 Twi-ao---- 5.00g
[thin_pool_tmeta] vg01 ewi-ao---- 1.00g
volume_cerberus   vg01 Vwi-aotz-- 3.00g thin_pool        3.76
volume_hydra      vg01 Vwi-aotz-- 3.00g thin_pool        3.75
volume_kraken     vg01 Vwi-aotz-- 3.00g thin_pool        3.76
Create snapshot
Create sample file.
$ sudo dd if=/dev/zero of=/opt/cerberus/file bs=1M count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 2.7677 s, 776 MB/s
Create a snapshot.
$ sudo lvcreate --name cerberus_snapshot --snapshot vg01/volume_cerberus
WARNING: Sum of all thin volume sizes (12.00 GiB) exceeds the size of thin pool vg01/thin_pool (5.00 GiB).
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume "cerberus_snapshot" created.
Inspect logical volumes.
$ sudo lvs -a
LV                VG   Attr       LSize Pool      Origin          Data%  Meta%  Move Log Cpy%Sync Convert
cerberus_snapshot vg01 Vwi---tz-k 3.00g thin_pool volume_cerberus
[lvol0_pmspare]   vg01 ewi------- 1.00g
thin_pool         vg01 twi-aotz-- 5.00g                           46.77  1.60
[thin_pool_tdata] vg01 Twi-ao---- 5.00g
[thin_pool_tmeta] vg01 ewi-ao---- 1.00g
volume_cerberus   vg01 Vwi-aotz-- 3.00g thin_pool                 70.43
volume_hydra      vg01 Vwi-aotz-- 3.00g thin_pool                 3.75
volume_kraken     vg01 Vwi-aotz-- 3.00g thin_pool                 3.76
Delete created file.
$ sudo rm /opt/cerberus/file
Discard unused blocks on mounted filesystems.
$ sudo fstrim /opt/cerberus
Activate the snapshot.
$ sudo lvchange --activate y --ignoreactivationskip vg01/cerberus_snapshot
Inspect logical volumes.
$ sudo lvs -a
LV                VG   Attr       LSize Pool      Origin          Data%  Meta%  Move Log Cpy%Sync Convert
cerberus_snapshot vg01 Vwi-a-tz-k 3.00g thin_pool volume_cerberus 70.43
[lvol0_pmspare]   vg01 ewi------- 1.00g
thin_pool         vg01 twi-aotz-- 5.00g                           46.79  1.60
[thin_pool_tdata] vg01 Twi-ao---- 5.00g
[thin_pool_tmeta] vg01 ewi-ao---- 1.00g
volume_cerberus   vg01 Vwi-aotz-- 3.00g thin_pool                 70.43
volume_hydra      vg01 Vwi-aotz-- 3.00g thin_pool                 3.75
volume_kraken     vg01 Vwi-aotz-- 3.00g thin_pool                 3.76
Create mount point.
$ sudo mkdir /opt/cerberus_snapshot
Mount snapshot.
$ sudo mount /dev/mapper/vg01-cerberus_snapshot /opt/cerberus_snapshot/
Inspect snapshot.
$ ls /opt/cerberus_snapshot/
file lost+found
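To roll the origin volume back to the snapshot state, lvmthin(7) documents lvconvert --mergethin, which merges a thin snapshot into its origin (removing the snapshot in the process). A hedged sketch, assuming the names from above; the function name is mine, and the RUN override lets you preview the commands with RUN=echo. Check the manual page for the exact constraints on your LVM version before relying on this:

```shell
# restore_from_snapshot: unmount the origin, merge the snapshot back
# into it, then remount. After the merge the snapshot is gone and the
# origin holds the snapshot's contents.
restore_from_snapshot() {
  local run=${RUN:-sudo}
  $run umount /opt/cerberus
  $run lvconvert --mergethin vg01/cerberus_snapshot
  $run mount /dev/mapper/vg01-volume_cerberus /opt/cerberus/
}
```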
Autoextend thin pool
Inspect the activation/thin_pool_autoextend_threshold option.
$ sudo lvmconfig --withcomments activation/thin_pool_autoextend_threshold
# Configuration option activation/thin_pool_autoextend_threshold.
# Auto-extend a thin pool when its usage exceeds this percent.
# Setting this to 100 disables automatic extension.
# The minimum value is 50 (a smaller value is treated as 50.)
# Also see thin_pool_autoextend_percent.
# Automatic extension requires dmeventd to be monitoring the LV.
# Example
# Using 70% autoextend threshold and 20% autoextend size, when a 1G
# thin pool exceeds 700M, it is extended to 1.2G, and when it exceeds
# 840M, it is extended to 1.44G:
# thin_pool_autoextend_threshold = 70
thin_pool_autoextend_threshold=100
Inspect the activation/thin_pool_autoextend_percent option.
$ sudo lvmconfig --withcomments activation/thin_pool_autoextend_percent
# Configuration option activation/thin_pool_autoextend_percent.
# Auto-extending a thin pool adds this percent extra space.
# The amount of additional space added to a thin pool is this
# percent of its current size.
# Example
# Using 70% autoextend threshold and 20% autoextend size, when a 1G
# thin pool exceeds 700M, it is extended to 1.2G, and when it exceeds
# 840M, it is extended to 1.44G:
# thin_pool_autoextend_percent = 20
thin_pool_autoextend_percent=20
Create a custom profile.
$ sudo lvmconfig --file /etc/lvm/profile/autoextend.profile --withcomments --config "activation/thin_pool_autoextend_threshold=60 activation/thin_pool_autoextend_percent=20"
Inspect created profile.
$ sudo cat /etc/lvm/profile/autoextend.profile
# Configuration section activation.
activation {
	# Configuration option activation/thin_pool_autoextend_threshold.
	# Auto-extend a thin pool when its usage exceeds this percent.
	# Setting this to 100 disables automatic extension.
	# The minimum value is 50 (a smaller value is treated as 50.)
	# Also see thin_pool_autoextend_percent.
	# Automatic extension requires dmeventd to be monitoring the LV.
	# Example
	# Using 70% autoextend threshold and 20% autoextend size, when a 1G
	# thin pool exceeds 700M, it is extended to 1.2G, and when it exceeds
	# 840M, it is extended to 1.44G:
	# thin_pool_autoextend_threshold = 70
	thin_pool_autoextend_threshold=60
	# Configuration option activation/thin_pool_autoextend_percent.
	# Auto-extending a thin pool adds this percent extra space.
	# The amount of additional space added to a thin pool is this
	# percent of its current size.
	# Example
	# Using 70% autoextend threshold and 20% autoextend size, when a 1G
	# thin pool exceeds 700M, it is extended to 1.2G, and when it exceeds
	# 840M, it is extended to 1.44G:
	# thin_pool_autoextend_percent = 20
	thin_pool_autoextend_percent=20
}
Apply profile to existing thin pool.
$ sudo lvchange --metadataprofile autoextend vg01/thin_pool
Logical volume vg01/thin_pool changed.
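To confirm the profile was actually attached, lvs can report the metadata profile; to my knowledge the report column is named lv_profile (check `lvs -o help` on your release). A sketch; the function name is mine, and LVS=echo previews the command instead of running it:

```shell
# show_profile: report which metadata profile, if any,
# is attached to the given logical volume or thin pool.
show_profile() {
  ${LVS:-sudo lvs} -o lv_name,lv_profile "$1"
}
```

`show_profile vg01/thin_pool` should now list the autoextend profile next to the pool.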
Inspect logical volumes.
$ sudo lvs -a
LV                VG   Attr       LSize Pool      Origin Data%  Meta%  Move Log Cpy%Sync Convert
[lvol0_pmspare]   vg01 ewi------- 1.00g
thin_pool         vg01 twi-aotz-- 5.00g                  6.77   1.58
[thin_pool_tdata] vg01 Twi-ao---- 5.00g
[thin_pool_tmeta] vg01 ewi-ao---- 1.00g
volume_cerberus   vg01 Vwi-aotz-- 3.00g thin_pool        3.76
volume_hydra      vg01 Vwi-aotz-- 3.00g thin_pool        3.76
volume_kraken     vg01 Vwi-aotz-- 3.00g thin_pool        3.76
Create a file.
$ sudo dd if=/dev/zero of=/opt/cerberus/file bs=1M count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 2.76261 s, 777 MB/s
Inspect logical volumes.
$ sudo lvs -a
LV                VG   Attr       LSize Pool      Origin Data%  Meta%  Move Log Cpy%Sync Convert
[lvol0_pmspare]   vg01 ewi------- 1.00g
thin_pool         vg01 twi-aotz-- 5.00g                  46.18  1.60
[thin_pool_tdata] vg01 Twi-ao---- 5.00g
[thin_pool_tmeta] vg01 ewi-ao---- 1.00g
volume_cerberus   vg01 Vwi-aotz-- 3.00g thin_pool        69.45
volume_hydra      vg01 Vwi-aotz-- 3.00g thin_pool        3.76
volume_kraken     vg01 Vwi-aotz-- 3.00g thin_pool        3.76
Create a file.
$ sudo dd if=/dev/zero of=/opt/kraken/file bs=1M count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 3.27238 s, 656 MB/s
Inspect logical volumes.
$ sudo lvs -a
LV                VG   Attr       LSize Pool      Origin Data%  Meta%  Move Log Cpy%Sync Convert
[lvol0_pmspare]   vg01 ewi------- 1.00g
thin_pool         vg01 twi-aotz-- 7.62g                  56.68  1.63
[thin_pool_tdata] vg01 Twi-ao---- 7.62g
[thin_pool_tmeta] vg01 ewi-ao---- 1.00g
volume_cerberus   vg01 Vwi-aotz-- 3.00g thin_pool        70.43
volume_hydra      vg01 Vwi-aotz-- 3.00g thin_pool        3.76
volume_kraken     vg01 Vwi-aotz-- 3.00g thin_pool        69.86
Create a file.
$ sudo dd if=/dev/zero of=/opt/hydra/file bs=1M count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 2.94788 s, 728 MB/s
Inspect logical volumes.
$ sudo lvs -a
LV                VG   Attr       LSize  Pool      Origin Data%  Meta%  Move Log Cpy%Sync Convert
[lvol0_pmspare]   vg01 ewi------- 1.00g
thin_pool         vg01 twi-aotz-- 10.98g                  57.71  1.65
[thin_pool_tdata] vg01 Twi-ao---- 10.98g
[thin_pool_tmeta] vg01 ewi-ao---- 1.00g
volume_cerberus   vg01 Vwi-aotz-- 3.00g  thin_pool        70.43
volume_hydra      vg01 Vwi-aotz-- 3.00g  thin_pool        70.43
volume_kraken     vg01 Vwi-aotz-- 3.00g  thin_pool        70.43
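The pool growth seen above follows the rule from the lvmconfig comments: whenever usage exceeds thin_pool_autoextend_threshold, dmeventd grows the pool by thin_pool_autoextend_percent of its current size, and this repeats until usage drops back below the threshold. A rough model of that rule in integer shell arithmetic (sizes in MiB; the function name is mine, and this is a sketch of the documented behaviour, not LVM's actual code):

```shell
# autoextend_size: given pool size (MiB), used space (MiB), threshold %
# and extension %, apply the documented autoextend step until usage
# falls below the threshold, and print the resulting pool size.
autoextend_size() {
  local size=$1 used=$2 threshold=$3 percent=$4
  while [ $((used * 100)) -ge $((size * threshold)) ]; do
    size=$((size + size * percent / 100))
  done
  echo "$size"
}
```

Using the example from the comments, a 1G (1024 MiB) pool at 720 MiB used with a 70% threshold and 20% step grows to roughly 1.2G.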
Extend thin pool or volume
Inspect logical volumes.
$ sudo lvs -a
LV                VG   Attr       LSize  Pool      Origin Data%  Meta%  Move Log Cpy%Sync Convert
[lvol0_pmspare]   vg01 ewi------- 1.00g
thin_pool         vg01 twi-aotz-- 10.98g                  57.71  1.65
[thin_pool_tdata] vg01 Twi-ao---- 10.98g
[thin_pool_tmeta] vg01 ewi-ao---- 1.00g
volume_cerberus   vg01 Vwi-aotz-- 3.00g  thin_pool        70.43
volume_hydra      vg01 Vwi-aotz-- 3.00g  thin_pool        70.43
volume_kraken     vg01 Vwi-aotz-- 3.00g  thin_pool        70.43
Extend pool metadata.
$ sudo lvextend --poolmetadatasize +1G vg01/thin_pool
Size of logical volume vg01/thin_pool_tmeta changed from 1.00 GiB (256 extents) to 2.00 GiB (512 extents).
Logical volume vg01/thin_pool_tmeta successfully resized.
Inspect logical volumes.
$ sudo lvs -a
LV                VG   Attr       LSize  Pool      Origin Data%  Meta%  Move Log Cpy%Sync Convert
[lvol0_pmspare]   vg01 ewi------- 2.00g
thin_pool         vg01 twi-aotz-- 10.98g                  57.71  0.83
[thin_pool_tdata] vg01 Twi-ao---- 10.98g
[thin_pool_tmeta] vg01 ewi-ao---- 2.00g
volume_cerberus   vg01 Vwi-aotz-- 3.00g  thin_pool        70.43
volume_hydra      vg01 Vwi-aotz-- 3.00g  thin_pool        70.43
volume_kraken     vg01 Vwi-aotz-- 3.00g  thin_pool        70.43
Extend thin pool.
$ sudo lvextend --size +1G vg01/thin_pool
Size of logical volume vg01/thin_pool_tdata changed from 10.98 GiB (2812 extents) to 11.98 GiB (3068 extents).
Logical volume vg01/thin_pool_tdata successfully resized.
Inspect logical volumes.
$ sudo lvs -a
LV                VG   Attr       LSize  Pool      Origin Data%  Meta%  Move Log Cpy%Sync Convert
[lvol0_pmspare]   vg01 ewi------- 2.00g
thin_pool         vg01 twi-aotz-- 11.98g                  52.89  0.83
[thin_pool_tdata] vg01 Twi-ao---- 11.98g
[thin_pool_tmeta] vg01 ewi-ao---- 2.00g
volume_cerberus   vg01 Vwi-aotz-- 3.00g  thin_pool        70.43
volume_hydra     vg01 Vwi-aotz-- 3.00g  thin_pool        70.43
volume_kraken     vg01 Vwi-aotz-- 3.00g  thin_pool        70.43
Extend thin volume.
$ sudo lvextend --size +1G vg01/volume_cerberus
Size of logical volume vg01/volume_cerberus changed from 3.00 GiB (768 extents) to 4.00 GiB (1024 extents).
Logical volume vg01/volume_cerberus successfully resized.
Inspect logical volumes.
$ sudo lvs -a
LV                VG   Attr       LSize  Pool      Origin Data%  Meta%  Move Log Cpy%Sync Convert
[lvol0_pmspare]   vg01 ewi------- 2.00g
thin_pool         vg01 twi-aotz-- 11.98g                  52.89  0.83
[thin_pool_tdata] vg01 Twi-ao---- 11.98g
[thin_pool_tmeta] vg01 ewi-ao---- 2.00g
volume_cerberus   vg01 Vwi-aotz-- 4.00g  thin_pool        52.83
volume_hydra      vg01 Vwi-aotz-- 3.00g  thin_pool        70.43
volume_kraken     vg01 Vwi-aotz-- 3.00g  thin_pool        70.43
Extend filesystem on volume.
$ sudo resize2fs /dev/mapper/vg01-volume_cerberus
resize2fs 1.46.1 (9-Feb-2021)
Filesystem at /dev/mapper/vg01-volume_cerberus is mounted on /opt/cerberus; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/mapper/vg01-volume_cerberus is now 1048576 (4k) blocks long.
Inspect used space.
$ df -h /dev/mapper/vg01-volume_cerberus
Filesystem                        Size  Used Avail Use% Mounted on
/dev/mapper/vg01-volume_cerberus  3.9G  2.1G  1.7G  55% /opt/cerberus
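The two-step extend above (lvextend, then resize2fs) can be combined: lvextend's --resizefs (-r) flag resizes the filesystem together with the volume. A sketch; the function name is mine, and LVEXTEND=echo previews the command instead of running it:

```shell
# grow_volume: extend a thin volume by the given amount and
# resize its filesystem in the same step.
# Usage: grow_volume VG/LV SIZE (e.g. grow_volume vg01/volume_cerberus 1G)
grow_volume() {
  ${LVEXTEND:-sudo lvextend} --resizefs --size "+$2" "$1"
}
```

`grow_volume vg01/volume_cerberus 1G` is then equivalent to the lvextend plus resize2fs pair shown above.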
Additional notes
Read the lvm, lvmconfig, and lvmthin manual pages.
Update
Thanks to Walter Dvorak for an update on this topic.
You can avoid using the dedicated fstrim command to free unused blocks after deleting data on LVM thin volumes by simply mounting the thin volume with the "discard" option:

$ sudo mount -o discard /dev/mapper/vg01-volume_cerberus /opt/cerberus/

The "discard" option also works perfectly well when the physical device is an HDD (not an SSD). In that case the option relies on the space-reclamation features of the thin LVM volume and uses exactly the same mechanism as for an SSD.
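To make the discard behaviour persistent across reboots, the option can also go into /etc/fstab. A sketch entry, assuming the device and mount point used throughout this article:

```
/dev/mapper/vg01-volume_cerberus  /opt/cerberus  ext4  defaults,discard  0  2
```

Note that continuous discard adds overhead on every delete; the periodic fstrim approach shown earlier is the usual alternative.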