performance - Basic set up of dm-cache - Ask Ubuntu


I found that bcache suits me better, but due to a discussion today I came to test dm-cache. I didn't find a great entry-level doc on how to do so, so I thought I'd document it and share it here - and make it "searchable".

So how do you set up dm-cache on Ubuntu?

I started out based on some info on getting the most out of NVMe, with man lvmcache as the main resource.

I have (sorry, no more disks around):

/dev/sda2 (931G, slow)
/dev/nvme0n1 (372.6G, fast)

Basic setup:

$ sudo pvcreate /dev/sda2
  Physical volume "/dev/sda2" created.
$ sudo pvcreate /dev/nvme0n1
  Physical volume "/dev/nvme0n1" created.
$ sudo vgcreate cache /dev/sda2 /dev/nvme0n1
  Volume group "cache" created
$ sudo lvcreate -L 200G -n origin_device cache /dev/sda2
  Logical volume "origin_device" created.
$ sudo lvcreate -L 60G -n cache_block cache /dev/nvme0n1
  Logical volume "cache_block" created.
$ sudo lvcreate -L 2G -n cache_meta cache /dev/nvme0n1
  Logical volume "cache_meta" created.
$ sudo lvconvert --type cache-pool /dev/cache/cache_block --poolmetadata /dev/cache/cache_meta
  WARNING: Converting logical volume cache/cache_block and cache/cache_meta to cache pool's data and metadata volumes.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Do you really want to convert cache/cache_block and cache/cache_meta? [y/n]: y
  Converted cache/cache_block to cache pool.
$ sudo lvconvert --type cache /dev/cache/origin_device --cachepool /dev/cache/cache_block
  Logical volume cache/origin_device is now cached.
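At this point it is worth sanity-checking what LVM built, and knowing how to undo it again. A small sketch (see man lvconvert; --splitcache keeps the pool around for re-use, --uncache removes it after flushing):

# List all LVs including the hidden cache sub-volumes
$ sudo lvs -a cache

# Detach the cache pool from the origin but keep it for later re-use
$ sudo lvconvert --splitcache cache/origin_device

# Or drop the cache entirely (dirty blocks are written back first)
$ sudo lvconvert --uncache cache/origin_device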

After that you can use the device "as usual". I created a non-cached device as a reference for a basic test:

$ sudo lvcreate -L 200G -n origin_device_reference cache /dev/sda2
  Logical volume "origin_device_reference" created.
$ sudo mkfs -t xfs /dev/cache/origin_device
$ sudo mkfs -t xfs /dev/cache/origin_device_reference

And mounted them:

$ sudo mkdir /mnt/lv-xfs
$ sudo mkdir /mnt/lv-xfs-cached
$ sudo mount /dev/cache/origin_device_reference /mnt/lv-xfs
$ sudo mount /dev/cache/origin_device /mnt/lv-xfs-cached
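If the setup should survive a reboot, the usual /etc/fstab entries work; the cached LV is activated by LVM like any other LV. A sketch with the mount points from above:

# /etc/fstab
/dev/cache/origin_device_reference  /mnt/lv-xfs         xfs  defaults  0  0
/dev/cache/origin_device            /mnt/lv-xfs-cached  xfs  defaults  0  0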

After that, the setup looked like this:

$ lsblk (filtered out the other disks)
NAME                              MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                                 8:0    0 931.5G  0 disk
|-sda2                              8:2    0   931G  0 part
| |-cache-origin_device_reference 252:4    0   200G  0 lvm  /mnt/lv-xfs
| `-cache-origin_device_corig     252:3    0   200G  0 lvm
|   `-cache-origin_device         252:0    0   200G  0 lvm  /mnt/lv-xfs-cached
nvme0n1                           259:0    0 372.6G  0 disk
|-cache-cache_block_cdata         252:1    0    60G  0 lvm
| `-cache-origin_device           252:0    0   200G  0 lvm  /mnt/lv-xfs-cached
`-cache-cache_block_cmeta         252:2    0     2G  0 lvm
  `-cache-origin_device           252:0    0   200G  0 lvm  /mnt/lv-xfs-cached

$ sudo dmsetup table
cache-cache_block_cdata: 0 125829120 linear 259:0 2048
cache-origin_device_reference: 0 419430400 linear 8:2 423626752
cache-cache_block_cmeta: 0 4194304 linear 259:0 125831168
cache-origin_device: 0 419430400 cache 252:2 252:1 252:3 128 1 writethrough smq 0
cache-origin_device_corig: 0 419430400 linear 8:2 2048
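For reference, the fields of the cache line map to the dm-cache target syntax from the kernel documentation (the annotation is mine, not part of the output):

# cache-origin_device: 0 419430400 cache 252:2 252:1 252:3 128 1 writethrough smq 0
#   0 419430400     start and length of the mapping in 512-byte sectors (= 200G)
#   cache           the dm-cache target
#   252:2           metadata device (cache-cache_block_cmeta)
#   252:1           cache device    (cache-cache_block_cdata)
#   252:3           origin device   (cache-origin_device_corig)
#   128             cache block size in sectors (= 64KiB)
#   1 writethrough  one feature argument: writethrough mode
#   smq 0           the smq policy, with zero policy arguments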

Please be aware that dm-cache has evolved a lot. There are still many guides out there that recommend tuning the cache with "dmsetup message ...", which was part of the old "mq" policy; see the kernel doc. These days the stochastic multiqueue (smq) policy is the default, is considered superior, and comes without tuning knobs. It went so far that "mq" was dropped in kernel 4.6 and made an alias for the smq policy.
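If you do want to switch the caching mode or policy, that goes through LVM these days rather than dmsetup. A sketch (depending on your lvm2 version this may need lvconvert instead of lvchange):

# Switch the cached LV from writethrough (the default, see the table above) to writeback
$ sudo lvchange --cachemode writeback cache/origin_device

# Select the caching policy explicitly (smq is the default anyway)
$ sudo lvchange --cachepolicy smq cache/origin_device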

So for basic benchmarking I ran two slow sync-IO sequential disk crawlers and two AIO random hot spots (with not all of the crawlers fitting onto the cache, but the hot spots do). There are more details on the results if you want to take a look. The results are way better than without the cache, but the test case is in no way sophisticated enough to check this in detail.
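The exact fio invocation isn't part of this post; purely as an illustration, a workload along those lines could be sketched like this (all paths, sizes and the runtime below are made-up assumptions):

# Two sequential sync readers that don't fit into the 60G cache,
# plus two small random AIO hot spots that do.
$ cd /mnt/lv-xfs-cached
$ fio --directory=. --time_based=1 --runtime=300 \
      --name=crawler --numjobs=2 --rw=read --ioengine=sync --size=100g \
      --name=hotspot --numjobs=2 --rw=randrw --ioengine=libaio --iodepth=32 --direct=1 --size=10g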

Uncached device:
        rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda       0.10     0.20  259.95  126.45  1840.00   599.32    12.63    65.96  170.43  126.56  260.62   2.59 100.00
dm-4      0.00     0.00  260.05  126.65  1840.00   599.32    12.62    65.99  170.37  126.53  260.39   2.59 100.00

  read:  io=1109.4MB, aggrb=1891KB/s
  write: io=370212KB, aggrb=616KB/s

Cached device:
        rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda       0.00     0.85    1.75  395.75   112.00  1679.40     9.01    33.18   83.44   82.97   83.44   2.52 100.00
nvme0n1 755.05     0.00 159339.95    0.25 873790.40    16.00    10.97    25.14    0.16    0.16    0.00   0.01 100.12
dm-0      0.00     0.00 156881.90  395.95 873903.00  1679.40    11.13    58.35    0.37    0.16   84.19   0.01 100.12
dm-1      0.00     0.00 160095.25    0.25 873791.00    16.00    10.92    25.41    0.16    0.16    0.00   0.01 100.10
dm-2      0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
dm-3      0.00     0.00    1.75  396.60   112.00  1679.40     8.99    34.50   86.51   82.97   86.52   2.51 100.00

  read:  io=415116MB, aggrb=708356KB/s
  write: io=1004.8MB, aggrb=1714KB/s
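To judge whether the hot spots actually stay on the NVMe over time, the cache exposes hit/miss counters. A sketch (the lvs cache fields need a reasonably recent lvm2):

# Raw hit/miss counters, dirty block count etc. straight from the kernel
$ sudo dmsetup status cache-origin_device

# The same via LVM reporting fields
$ sudo lvs -o +cache_read_hits,cache_read_misses,cache_write_hits,cache_write_misses cache/origin_device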

This shall not become a discussion on bcache vs. dm-cache vs. ... - as stated at the beginning, I prefer bcache, but that is not the point here. If OTOH you have recommendations on dm-cache to add, please feel free to use the comment section.

