From: Greg Stark <stark(at)mit(dot)edu>
To: Chris Travers <chris(dot)travers(at)adjust(dot)com>
Cc: PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Anyone have experience benchmarking very high effective_io_concurrency on NVME's?
Date: 2017-10-31 15:48:39
Message-ID: CAM-w4HMZpQ0CgZP=6zFkSo1LGBn2k3xviRJod2ksGgu1-rsWbQ@mail.gmail.com
Lists: pgsql-hackers
On 31 October 2017 at 07:05, Chris Travers <chris(dot)travers(at)adjust(dot)com> wrote:
> Hi;
>
> After Andres's excellent talk at PGConf we tried benchmarking
> effective_io_concurrency on some of our servers and found that those which
> have a number of NVME storage volumes could not fill the I/O queue even at
> the maximum setting (1000).
And was the system still I/O bound? If the CPU was 100% busy, then
perhaps Postgres just can't keep up with the I/O system. It would
depend on the workload, though: if you start many very large
sequential scans, you may be able to push the I/O system harder.
Keep in mind effective_io_concurrency only really affects bitmap index
scans (and, to a small degree, index scans). It works by issuing
posix_fadvise() calls for upcoming buffers one by one. That gets
multiple spindles active, but it's not really going to scale to many
thousands of prefetches (and an effective_io_concurrency of 1000
actually means 7485 prefetches). At some point those I/Os are going to
start completing before Postgres even has a chance to start processing
the data.
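(For anyone wondering where 7485 comes from: PostgreSQL's
ComputeIoConcurrency() in bufmgr.c turns the effective_io_concurrency
setting n into a target prefetch distance of roughly n times the
harmonic number H(n) = 1 + 1/2 + ... + 1/n. A rough sketch of that
arithmetic, as a standalone snippet rather than the actual C code:)

```python
def target_prefetch_pages(effective_io_concurrency: int) -> int:
    """Approximate PostgreSQL's mapping from effective_io_concurrency
    to the number of pages kept in flight: n * H(n), rounded."""
    n = effective_io_concurrency
    harmonic = sum(1.0 / i for i in range(1, n + 1))  # H(n)
    return round(n * harmonic)

print(target_prefetch_pages(1000))  # -> 7485, matching the figure above
```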
--
greg