Re: Bgwriter and pg_stat_bgwriter.buffers_clean aspects

From: "Dmitry Koterov" <dmitry(at)koterov(dot)ru>
To: "Greg Smith" <gsmith(at)gregsmith(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Bgwriter and pg_stat_bgwriter.buffers_clean aspects
Date: 2009-01-06 11:23:16
Message-ID: d7df81620901060323v1cd89b9bm2714c0d522b987e2@mail.gmail.com
Lists: pgsql-general pgsql-performance

Thank you.

But why is buffers_backend so high? As I understand from your article,
buffers_backend counts writes performed directly by backends, e.g. when an
INSERT has to flush a dirty buffer to disk itself because there is no room
left in shared buffers for new data. I suppose these flushes slow operations
down greatly, and I really do see this: in my environment an INSERT usually
completes in 1-2 ms, but sometimes it takes 5-6 seconds or even more (10
seconds), which hits the statement_timeout barrier and fails the whole
transaction.
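For reference, the breakdown of which process wrote the buffers can be
checked with a simple query (a plain snapshot of pg_stat_bgwriter, using
the column set as of 8.3):

  SELECT checkpoints_timed,   -- checkpoints started by checkpoint_timeout
         checkpoints_req,     -- checkpoints forced early (e.g. by xlog growth)
         buffers_checkpoint,  -- buffers written during checkpoints
         buffers_clean,       -- buffers written by the background writer
         buffers_backend      -- buffers written directly by backends
  FROM pg_stat_bgwriter;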

My main goal is to minimize INSERT/UPDATE time or, at least, to make it
more predictable.

Could you please give some advice on how to achieve this?

On Mon, Dec 29, 2008 at 1:04 AM, Greg Smith <gsmith(at)gregsmith(dot)com> wrote:

> On Fri, 26 Dec 2008, Dmitry Koterov wrote:
>
>> checkpoint_timeout = 1min
>
> Your system is having a checkpoint every minute. You can't do that and
> expect the background writer to do anything useful. As shown in your stats,
> all the dirty buffers are getting written out by those constant checkpoints.
>
>> What I am trying to achieve is that all write operations are performed
>> asynchronously and mostly flushed to disk before a CHECKPOINT occurs, so
>> the CHECKPOINT is cheap thanks to the bgwriter's work.
>
> The background writer only tries to write out things that haven't been
> accessed recently, because the tests we did suggested the duplicated writes
> from any other approach negated the benefits from writing them earlier. So
> it's not possible to get all the buffers clean before the checkpoint starts;
> the ones that have been recently used can't get written except during a
> checkpoint.
>
> What does work instead is to spread the checkpoint writes over a long
> period, such that they are an asynchronous trickle of smaller writes. For
> that to work, you need to set checkpoint_timeout to a fairly long period (at
> least the default of 5 minutes if not longer) and checkpoint_segments to
> something fairly large. You can know the segments are large enough when
> most of the checkpoints show up in the checkpoints_timed count.
>
> --
> * Greg Smith gsmith(at)gregsmith(dot)com http://www.gregsmith.com Baltimore, MD
>
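For illustration only, the "spread checkpoints" settings described above
might look something like this in postgresql.conf (the numbers are rough
guesses to be tuned per workload, not recommendations):

  checkpoint_timeout = 15min            # at least the 5min default, if not longer
  checkpoint_segments = 32              # large enough that most checkpoints are timed
  checkpoint_completion_target = 0.9    # spread checkpoint I/O over the interval

With settings like these, checkpoints_timed growing much faster than
checkpoints_req in pg_stat_bgwriter suggests checkpoint_segments is large
enough.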
