Re: fdatasync performance problem with large number of DB files

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Thomas Munro <thomas(dot)munro(at)gmail(dot)com>
Cc: Michael Brown <michael(dot)brown(at)discourse(dot)org>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: fdatasync performance problem with large number of DB files
Date: 2021-03-11 00:16:10
Message-ID: 2255953.1615421770@sss.pgh.pa.us
Lists: pgsql-hackers

Thomas Munro <thomas(dot)munro(at)gmail(dot)com> writes:
> Thinking about this some more, if you were to propose a patch like
> that syncfs() one but make it a configurable option, I'd personally be
> in favour of trying to squeeze it into v14. Others might object on
> commitfest procedural grounds, I dunno, but I think this is a real
> operational issue and that's a fairly simple and localised change.
> I've run into a couple of users who have just commented that recursive
> fsync() code out!

I'm a little skeptical about the "simple" part. At minimum, you'd
have to syncfs() each tablespace, since we have no easy way to tell
which of them are on different filesystems. (Although, if we're
presuming this is Linux-only, we might be able to tell with some
unportable check or other.)
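
For concreteness, here is a rough sketch of that idea -- not the proposed
patch; the paths, the fixed-size dedup array, and the function name are
made up for illustration.  It opens each tablespace directory, skips any
whose st_dev has already been seen (the sort of unportable Linux-only
check alluded to above), and issues one syncfs() per distinct filesystem.

/*
 * Sketch only: one syncfs() per distinct filesystem underlying the data
 * directory and tablespaces.  Assumes Linux, where directories on the
 * same filesystem share a st_dev value.
 */
#define _GNU_SOURCE				/* for syncfs() declaration */
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

static void
syncfs_once_per_device(const char *paths[], int npaths)
{
	dev_t		seen[64];		/* hypothetical small fixed bound */
	int			nseen = 0;

	for (int i = 0; i < npaths; i++)
	{
		struct stat st;
		int			fd;
		int			already_done = 0;

		if (stat(paths[i], &st) != 0)
		{
			perror(paths[i]);
			continue;
		}

		/* Skip paths whose filesystem we've already flushed. */
		for (int j = 0; j < nseen; j++)
			if (seen[j] == st.st_dev)
				already_done = 1;
		if (already_done)
			continue;

		fd = open(paths[i], O_RDONLY | O_DIRECTORY);
		if (fd < 0)
		{
			perror(paths[i]);
			continue;
		}
		if (syncfs(fd) != 0)	/* flush everything on this filesystem */
			perror("syncfs");
		close(fd);

		if (nseen < 64)
			seen[nseen++] = st.st_dev;
	}
}

int
main(void)
{
	/* Data directory plus tablespace locations -- illustrative paths. */
	const char *paths[] = {"/var/lib/postgresql/data",
						   "/mnt/fast_ssd/pg_tblspc_1"};

	syncfs_once_per_device(paths, 2);
	return 0;
}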

regards, tom lane
