Re: Poor disk (virtio) Performance Inside KVM virt-machine vs host machine

From: Imre Samu <pella(dot)samu(at)gmail(dot)com>
To: Artem Tomyuk <admin(at)leboutique(dot)com>
Cc: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Poor disk (virtio) Performance Inside KVM virt-machine vs host machine
Date: 2016-04-26 14:32:07
Message-ID: CAJnEWw=phZN-hG-pTXf3THLfNYh5mzfSUcBxVviZnHDKts2ekA@mail.gmail.com
Lists: pgsql-performance

> I've noticed that there is a huge (more than ~3x slower) performance
> difference between KVM guest and host machine.
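
(The fdatasync figures quoted below bear that out: 12352.160 vs. 4022.553
ops/sec on the single 8kB write test, roughly a 3.1x gap.)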

I don't know whether this is relevant or not, but there is an IBM research
paper (published in 2014):
"IBM Research Report - An Updated Performance Comparison of Virtual
Machines and Linux Containers"
http://domino.research.ibm.com/library/cyberdig.nsf/papers/0929052195DD819C85257D2300681E7B/
-> " As we would expect, Docker introduces no overhead compared to Linux,
but KVM delivers only half as many IOPS because each I/O operation must go
through QEMU. While the VM’s absolute performance is still quite high, it
uses more CPU cycles per I/O operation, leaving less CPU available for
application work. Figure 7 shows that KVM increases read latency by 2-3x, a
crucial metric for some real workloads."
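
For what it's worth, the gap is easy to reproduce without pg_test_fsync as
well. The short Python sketch below is only my own illustration of the same
measurement (the file name and iteration count are arbitrary choices, and it
roughly mirrors the fdatasync case rather than replacing pg_test_fsync);
running it inside the guest and on the host should show the same difference:

import os
import time

PATH = "fsync_probe.dat"   # scratch file; arbitrary name, not from the thread
ITERATIONS = 2000          # fixed count for simplicity; pg_test_fsync runs 5 s per test
BLOCK = b"\0" * 8192       # one 8kB write, like the first pg_test_fsync test

fd = os.open(PATH, os.O_CREAT | os.O_WRONLY, 0o600)
try:
    start = time.time()
    for _ in range(ITERATIONS):
        os.lseek(fd, 0, os.SEEK_SET)   # rewrite the same block each iteration
        os.write(fd, BLOCK)
        os.fdatasync(fd)               # wait for the write to reach stable storage
    elapsed = time.time() - start
finally:
    os.close(fd)
    os.unlink(PATH)

print("fdatasync: %.3f ops/sec  %d usecs/op"
      % (ITERATIONS / elapsed, elapsed / ITERATIONS * 1e6))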

Imre

2016-04-26 16:03 GMT+02:00 Artem Tomyuk <admin(at)leboutique(dot)com>:

> Hi All.
>
> I've noticed that there is a huge (more than ~3x slower) performance
> difference between KVM guest and host machine.
> Host machine:
> Dell R720xd
> RAID10 with 12 SAS 15K drives, and RAID0 with 2 * 128 GB Intel SSD drives
> in Dell CacheCade mode.
>
> *On the KVM guest:*
>
> /usr/pgsql-9.4/bin/pg_test_fsync -f test.sync
>
> 5 seconds per test
> O_DIRECT supported on this platform for open_datasync and open_sync.
>
> Compare file sync methods using one 8kB write:
> (in wal_sync_method preference order, except fdatasync
> is Linux's default)
>         open_datasync                      5190.279 ops/sec     193 usecs/op
>         fdatasync                          4022.553 ops/sec     249 usecs/op
>         fsync                              3069.069 ops/sec     326 usecs/op
>         fsync_writethrough                              n/a
>         open_sync                          4892.348 ops/sec     204 usecs/op
>
> Compare file sync methods using two 8kB writes:
> (in wal_sync_method preference order, except fdatasync
> is Linux's default)
>         open_datasync                      2406.577 ops/sec     416 usecs/op
>         fdatasync                          4309.413 ops/sec     232 usecs/op
>         fsync                              3518.844 ops/sec     284 usecs/op
>         fsync_writethrough                              n/a
>         open_sync                          1159.604 ops/sec     862 usecs/op
>
> Compare open_sync with different write sizes:
> (This is designed to compare the cost of writing 16kB
> in different write open_sync sizes.)
>          1 * 16kB open_sync write          3700.689 ops/sec     270 usecs/op
>          2 *  8kB open_sync writes         2581.405 ops/sec     387 usecs/op
>          4 *  4kB open_sync writes         1318.871 ops/sec     758 usecs/op
>          8 *  2kB open_sync writes          698.640 ops/sec    1431 usecs/op
>         16 *  1kB open_sync writes          262.506 ops/sec    3809 usecs/op
>
> Test if fsync on non-write file descriptor is honored:
> (If the times are similar, fsync() can sync data written
> on a different descriptor.)
>         write, fsync, close                3071.141 ops/sec     326 usecs/op
>         write, close, fsync                3303.946 ops/sec     303 usecs/op
>
> Non-Sync'ed 8kB writes:
>         write                            251321.188 ops/sec       4 usecs/op
>
> *On the host machine:*
>
> /usr/pgsql-9.4/bin/pg_test_fsync -f test.sync
>
> 5 seconds per test
> O_DIRECT supported on this platform for open_datasync and open_sync.
>
> Compare file sync methods using one 8kB write:
> (in wal_sync_method preference order, except fdatasync
> is Linux's default)
>         open_datasync                     11364.136 ops/sec      88 usecs/op
>         fdatasync                         12352.160 ops/sec      81 usecs/op
>         fsync                              9833.745 ops/sec     102 usecs/op
>         fsync_writethrough                              n/a
>         open_sync                         14938.531 ops/sec      67 usecs/op
>
> Compare file sync methods using two 8kB writes:
> (in wal_sync_method preference order, except fdatasync
> is Linux's default)
>         open_datasync                      7703.471 ops/sec     130 usecs/op
>         fdatasync                         11494.492 ops/sec      87 usecs/op
>         fsync                              9029.837 ops/sec     111 usecs/op
>         fsync_writethrough                              n/a
>         open_sync                          6504.138 ops/sec     154 usecs/op
>
> Compare open_sync with different write sizes:
> (This is designed to compare the cost of writing 16kB
> in different write open_sync sizes.)
>          1 * 16kB open_sync write         14113.912 ops/sec      71 usecs/op
>          2 *  8kB open_sync writes         7843.234 ops/sec     127 usecs/op
>          4 *  4kB open_sync writes         3995.702 ops/sec     250 usecs/op
>          8 *  2kB open_sync writes         1788.979 ops/sec     559 usecs/op
>         16 *  1kB open_sync writes          937.177 ops/sec    1067 usecs/op
>
> Test if fsync on non-write file descriptor is honored:
> (If the times are similar, fsync() can sync data written
> on a different descriptor.)
>         write, fsync, close               10144.280 ops/sec      99 usecs/op
>         write, close, fsync                8378.558 ops/sec     119 usecs/op
>
> Non-Sync'ed 8kB writes:
>         write                            159176.122 ops/sec       6 usecs/op
>
>
> The file system "inside" and "outside" is the same - ext4 on LVM. The disk
> scheduler "inside" and "outside" is set to "noop". The fstab options are the
> same too, set to rw,noatime,nodiratime,barrier=0. The OS on host and guest
> is the same: CentOS release 6.5 (Final).
>
> Guest volume options:
>
> Disk bus: Virtio
>
> Cache mode: none
>
> IO mode: native
>
>
> Any ideas?
>
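
P.S. Before reading too much into the comparison, it may be worth verifying
programmatically that the guest and the host really do run with the identical
scheduler and mount options described above. A minimal sketch follows; the
device names "vda"/"sda" are assumptions for a virtio guest disk and a host
SAS disk, not something stated in the thread:

# Illustrative sketch (not from the thread): check that guest and host really
# use the same I/O scheduler and mount options before comparing fsync numbers.
import sys

def io_scheduler(device):
    # The active scheduler is the bracketed entry, e.g. "[noop] deadline cfq".
    with open("/sys/block/%s/queue/scheduler" % device) as f:
        return f.read().strip()

def mount_options(mountpoint):
    # /proc/mounts fields: device, mountpoint, fstype, options, dump, pass
    with open("/proc/mounts") as f:
        for line in f:
            fields = line.split()
            if fields[1] == mountpoint:
                return "%s: %s" % (fields[2], fields[3])
    return "not mounted"

# "vda" is typical for a virtio disk inside the guest; use e.g. "sda" on the host.
device = sys.argv[1] if len(sys.argv) > 1 else "vda"
print("scheduler : %s" % io_scheduler(device))
print("mount opts: %s" % mount_options("/"))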
