
I purchased a virtual server with 8 vCPUs, 16 GB of memory, and a 500 GB SSD volume (backed by Ceph RBD). I then used fio to test the server's I/O performance. To better understand the fio results, I also ran blktrace during the test to capture the block-layer I/O trace.
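For reference, a blktrace capture along the following lines yields the per-stage timings (Q2M, I2D, ...) discussed below; this is only a minimal sketch, and the trace file names are placeholders (only /dev/vdc comes from the setup above):

    # Capture block-layer events for the device while fio runs (960 s).
    blktrace -d /dev/vdc -o trace -w 960

    # Merge the per-CPU trace files into one binary dump plus a readable log.
    blkparse -i trace -d trace.bin -o trace.txt

    # Summarize per-stage latencies (Q2G, Q2M, I2D, D2C, ...) from the dump.
    btt -i trace.bin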

  1. seqwrite

    fio --filename=/dev/vdc --ioengine=libaio --bs=4k --rw=write --size=8G --iodepth=64 --numjobs=8 --direct=1 --runtime=960 --name=seqwrite --group_reporting

[fio output for seqwrite] [parsed blktrace output for seqwrite]

  2. randread

    fio --filename=/dev/vdc --ioengine=libaio --bs=4k --rw=randread --size=8G --iodepth=64 --numjobs=8 --direct=1 --runtime=960 --name=randread --group_reporting

[fio output for randread] [parsed blktrace output for randread]

What I am trying to understand is the difference at the block layer between seqwrite and randread.

  1. Why does randread have a large portion of I2D time, but seqwrite does not?
  2. Why doesn't randread have any Q2M events?
Ning

1 Answer


(Note: this isn't really a programming question, so Stack Overflow is the wrong place to ask it... maybe Super User or Server Fault would be a better choice?)

Why does randread have a large portion of I2D time, but seqwrite does not?

Did you realise each of your 8 numjobs is overwriting the same area as the other jobs? This means the block layer may be able to throw subsequent requests away when an overwrite for the same region comes in close enough behind them, which is fairly likely in the sequential case...
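If the intent is for each job to write its own region, fio's offset_increment option can space the jobs out so their ranges no longer overlap. A minimal, untested sketch (everything except offset_increment and the job name is copied from your command):

    fio --filename=/dev/vdc --ioengine=libaio --bs=4k --rw=write --size=8G --offset_increment=8G --iodepth=64 --numjobs=8 --direct=1 --runtime=960 --name=seqwrite_split --group_reporting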

Why doesn't randread have any Q2M events?

It's hard to back merge random I/O with existing queued I/O as it's often discontiguous!
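One way to sanity-check the merging behaviour is to look at the merge counters directly; a rough sketch (the trace base name is a placeholder): the per-device summary at the end of blkparse output lists Read Merges / Write Merges, and iostat's rrqm/s and wrqm/s columns show merges as they happen:

    # Per-device totals, including "Read Merges" and "Write Merges".
    blkparse -i trace | tail -n 30

    # Live merge rates in the rrqm/s and wrqm/s columns.
    iostat -x 1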

Anon