From: Thomas Munro <thomas(dot)munro(at)gmail(dot)com>
To: Jakub Wartak <Jakub(dot)Wartak(at)tomtom(dot)com>
Cc: "alvherre(at)2ndquadrant(dot)com" <alvherre(at)2ndquadrant(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Handing off SLRU fsyncs to the checkpointer
Date: 2020-08-28 21:26:42
Message-ID: CA+hUKGLpAEXAhW+G0gc1XB=GFMa3dmnjCmVxRm8EivUu3fRULw@mail.gmail.com
Lists: pgsql-hackers
On Sat, Aug 29, 2020 at 12:43 AM Jakub Wartak <Jakub(dot)Wartak(at)tomtom(dot)com> wrote:
> ... %CPU ... COMMAND
> ... 97.4 ... postgres: startup recovering 000000010000000000000089
So, what else is pushing this thing off CPU, anyway? For one thing, I
guess it might be stalling while reading the WAL itself, because (1)
we only read it 8KB at a time, relying on kernel read-ahead, which
typically defaults to 128KB I/Os unless you cranked it up, and we
already know that's not enough to saturate a sequential scan on an
NVMe system, so maybe it hurts here too, and (2) we keep having to
switch segment files every 16MB. Increasing the WAL segment size and
kernel readahead size would presumably help with that, if it is indeed
a problem, but we could also experiment with issuing a big
POSIX_FADV_WILLNEED hint for the next segment every time we cross a
boundary, and also maybe increase the size of our reads.