| From: | Fabien COELHO <coelho(at)cri(dot)ensmp(dot)fr> |
|---|---|
| To: | Andres Freund <andres(at)anarazel(dot)de> |
| Cc: | PostgreSQL Developers <pgsql-hackers(at)postgresql(dot)org> |
| Subject: | Re: checkpointer continuous flushing - V18 |
| Date: | 2016-03-10 22:38:38 |
| Message-ID: | alpine.DEB.2.10.1603102332220.18837@sto |
| Lists: | pgsql-hackers |
[...]
> I had originally kept it with one context per tablespace after
> refactoring this, but found that it gave worse results in rate limited
> loads even over only two tablespaces. That's on SSDs though.
That might just mean that a smaller context size is better on SSDs, and one context per tablespace could still be better.
> The number of pages still in writeback (i.e. for which sync_file_range
> has been issued, but which haven't finished running yet) at the end of
> the checkpoint matters for the latency hit incurred by the fsync()s from
> smgrsync(); at least by my measurement.
I'm not sure I have seen this performance effect myself... If you have hard evidence, please feel free to share it.
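
For reference, here is a minimal standalone sketch of the mechanism you describe (not the patch code itself; the file name, block count and the 32-block flush distance are invented for the example): sync_file_range() with SYNC_FILE_RANGE_WRITE starts asynchronous writeback for a range, so the final fsync(), the analogue of smgrsync()'s fsyncs, only pays for whatever writeback is still in flight at that point.

```c
/*
 * Hedged illustration only: write a file in 8 kB blocks, ask the kernel to
 * start writeback every FLUSH_AFTER blocks without waiting, then fsync at
 * the end.  The fsync latency then depends on how many of those ranges are
 * still in writeback when it is called.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>

#define BLCKSZ      8192
#define FLUSH_AFTER 32          /* hypothetical flush distance in blocks */

int
main(void)
{
    int     fd = open("demo.data", O_RDWR | O_CREAT | O_TRUNC, 0600);
    char    page[BLCKSZ];
    off_t   pending_start = 0;
    int     pending = 0;

    if (fd < 0)
    {
        perror("open");
        return 1;
    }
    memset(page, 'x', sizeof(page));

    for (int blk = 0; blk < 1024; blk++)
    {
        if (write(fd, page, BLCKSZ) != BLCKSZ)
        {
            perror("write");
            return 1;
        }
        if (++pending >= FLUSH_AFTER)
        {
            /* start asynchronous writeback of the accumulated range */
            if (sync_file_range(fd, pending_start,
                                (off_t) pending * BLCKSZ,
                                SYNC_FILE_RANGE_WRITE) != 0)
                perror("sync_file_range");
            pending_start += (off_t) pending * BLCKSZ;
            pending = 0;
        }
    }

    /* only waits for ranges whose writeback has not completed yet */
    if (fsync(fd) != 0)
        perror("fsync");
    close(fd);
    return 0;
}
```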
--
Fabien.