How to add an SSD as a cache to a file system on LVM

Today a quick one: I want to add a fast SSD as an accelerator for a file system running on software RAID and LVM.

I run a hosted server that has two 3 TB hard drives that operate in software RAID 1 mode:

[root@iboernig-hosted ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223.6G 0 disk
sdb 8:16 0 2.7T 0 disk 
├─sdb1 8:17 0 512M 0 part 
│ └─md0 9:0 0 511.4M 0 raid1 /boot
├─sdb2 8:18 0 2.7T 0 part 
│ └─md1 9:1 0 2.7T 0 raid1 
│ ├─vg0-root 253:0 0 100G 0 lvm /
│ ├─vg0-swap 253:1 0 20G 0 lvm [SWAP]
│ ├─vg0-home 253:3 0 200G 0 lvm /home
│ └─vg0-var_corig 253:8 0 1T 0 lvm 
│ └─vg0-var 253:2 0 1T 0 lvm /var
└─sdb3 8:19 0 1M 0 part 
sdc 8:32 0 2.7T 0 disk 
├─sdc1 8:33 0 512M 0 part 
│ └─md0 9:0 0 511.4M 0 raid1 /boot
├─sdc2 8:34 0 2.7T 0 part 
│ └─md1 9:1 0 2.7T 0 raid1 
│ ├─vg0-root 253:0 0 100G 0 lvm /
│ ├─vg0-swap 253:1 0 20G 0 lvm [SWAP]
│ ├─vg0-home 253:3 0 200G 0 lvm /home
│ └─vg0-var_corig 253:8 0 1T 0 lvm 
│ └─vg0-var 253:2 0 1T 0 lvm /var
└─sdc3 8:35 0 1M 0 part

As you can see, the hard disks are /dev/sdb and /dev/sdc, and there is still plenty of free space available.

Additionally, sda is a 240 GB SSD. I want to use it as a caching device, dedicating about half of its capacity as a persistent cache for the /var file system.

First of all, I have to add the device to the volume group:

$ pvcreate /dev/sda

$ vgextend vg0 /dev/sda
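Before carving the SSD up, it is worth confirming that it really joined the volume group. A quick check (both commands need root, and the output will of course differ between systems):

```shell
# Show which volume group /dev/sda now belongs to
pvs /dev/sda
# Show the PV count and the free space now available in vg0
vgs -o vg_name,pv_count,vg_size,vg_free vg0
```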

For the next step I have to create two logical volumes: one for the cache data itself and a smaller one for the metadata. Naming /dev/sda at the end of each command makes sure they are allocated on the SSD:

$ lvcreate -L 100G -n cachedisk1 vg0 /dev/sda

$ lvcreate -L 4G -n metadisk1 vg0 /dev/sda
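The 4 GB metadata volume is generously sized: lvmcache(7) suggests roughly 1/1000 of the cache data LV, with a minimum of 8 MiB. A quick sketch of that rule of thumb, using the 100 GiB figure from above (pure shell arithmetic, nothing LVM-specific):

```shell
# Rule-of-thumb metadata size per lvmcache(7): data size / 1000, minimum 8 MiB
cache_data_mib=$((100 * 1024))        # the 100 GiB cache data LV, in MiB
meta_mib=$((cache_data_mib / 1000))   # roughly 1/1000 of the data LV
if [ "$meta_mib" -lt 8 ]; then        # never go below the 8 MiB floor
    meta_mib=8
fi
echo "recommended metadata size: ${meta_mib} MiB"  # → recommended metadata size: 102 MiB
```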

Now I combine the two into a cache pool, linking the cache data to its metadata:

$ lvconvert --type cache-pool /dev/vg0/cachedisk1 --poolmetadata /dev/vg0/metadisk1

As a last step, I attach the cache pool to the /var logical volume:

$ lvconvert --type cache /dev/vg0/var --cachepool cachedisk1
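By default the cache operates in writethrough mode, so every write still lands on the RAID array immediately and a dying SSD cannot cost data. If write acceleration matters more than that safety margin, the mode can be chosen explicitly at conversion time with the --cachemode option (an alternative to the command above, not an additional step):

```shell
# Alternative: writeback mode also caches writes. Faster, but dirty blocks
# live only on the SSD until flushed, so an SSD failure can lose data.
lvconvert --type cache --cachemode writeback /dev/vg0/var --cachepool cachedisk1
```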

Done:

[root@iboernig-hosted ~]# lvs -a
  LV                 VG  Attr       LSize   Pool         Origin       Data%  Meta%  Move Log Cpy%Sync Convert
  [cachedisk1]       vg0 Cwi---C--- 100.00g                           0.14   0.16            0.00
  [cachedisk1_cdata] vg0 Cwi-ao---- 100.00g
  [cachedisk1_cmeta] vg0 ewi-ao----   4.00g
  cachedisk2         vg0 -wi-a----- 100.00g
  home               vg0 -wi-ao---- 200.00g
  [lvol0_pmspare]    vg0 ewi-------   4.00g
  metadisk2          vg0 -wi-a-----   4.00g
  root               vg0 -wi-ao---- 100.00g
  swap               vg0 -wi-ao----  20.00g
  var                vg0 Cwi-aoC---   1.00t [cachedisk1] [var_corig]  0.14   0.16            0.00
  [var_corig]        vg0 owi-aoC---   1.00t

That was easy! I can see the cache usage in the lvs output, and from now on I can watch things getting faster.
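Should the SSD ever need to be replaced, the cache can be detached again without data loss: lvconvert --uncache flushes any dirty blocks back to the origin LV and removes the cache pool in one step.

```shell
# Detach the cache from /var (flushes dirty blocks, then deletes the pool)
lvconvert --uncache vg0/var
# Afterwards the SSD can be removed from the volume group entirely
vgreduce vg0 /dev/sda
pvremove /dev/sda
```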

For the record: tested on CentOS 7.4 and RHEL 7.4.