Re: Limit of bgwriter_lru_maxpages of max. 1000?

From: Gerhard Wiesinger <lists(at)wiesinger(dot)com>
To: Greg Smith <gsmith(at)gregsmith(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Limit of bgwriter_lru_maxpages of max. 1000?
Date: 2009-10-02 19:16:51
Message-ID: alpine.LFD.2.00.0910021636260.25754@bbs.intern

On Fri, 2 Oct 2009, Greg Smith wrote:

> On Sun, 27 Sep 2009, Gerhard Wiesinger wrote:
>
>> Lowering bgwriter_delay is possible, but I think overhead is too much and
>> still there is a limit of 800MB/s involved:
>
> Stuff written by the background writer turns into largely random I/O. 800MB/s
> of random writes is so large of a number it's only recently become remotely
> possible; a RAID0 of SSD devices might manage it. No hardware available
> until very recently had any hope of getting that sort of performance.
>
> In any case, I would wager you'll run into one of many other bottlenecks in
> PostgreSQL and/or currently available system/disk hardware long before the
> background writer limit gets important.
>

Of course, 800MB/s is a theoretical maximum limit I came up with. But
with SSDs this might become possible.
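For reference, here is the back-of-the-envelope calculation behind that
ceiling (a sketch assuming the default 8kB block size, the cap of 1000
pages per round, and the documented 10ms minimum for bgwriter_delay):

-- Max bgwriter rate = bgwriter_lru_maxpages * block_size / bgwriter_delay
-- 1000 pages/round * 8192 bytes * 100 rounds/s, expressed in MB/s:
SELECT 1000 * 8192 * 100 / (1000 * 1000) AS max_mb_per_sec;
-- => 819, i.e. the ~800MB/s figure mentioned above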

>> So in fact I think bgwriter_lru_maxpages should be limited to 100000 if
>> limited at all.
>
> The current limit is based on the assumption that people will set it to
> values way too high if allowed, to the point where it's counterproductive.
> That's exactly what people used to do with early background writer designs.
> I think you're wandering down the same road, where what it actually does and
> what you think it does are not the same thing at all. Much of the important
> disk I/O coming out of the database should be related to checkpoints, not the
> background writer, and there is no limit on that I/O.
>

In my experience, flushing I/O as soon as possible is the best solution.
Think of the following scenario: bgwriter is currently limited to 4MB/s,
but the system has about 10MB/s of random I/O capacity (a normal low-cost
system). So utilization would be only 40% and you could write even more.
At checkpoint time you would then get a spike which the I/O system
couldn't handle at all, and performance would drop to nearly zero because
of the I/O spike (e.g. 500% of the available I/O needed). IMHO such
scenarios should be avoided.
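
The knobs involved in that trade-off can be inspected directly; a small
sketch (pg_settings is standard, but the exact parameter set depends on
the server version):

-- List the bgwriter and checkpoint settings that determine how writes
-- are spread out between checkpoints:
SELECT name, setting, unit
FROM pg_settings
WHERE name LIKE 'bgwriter%'
   OR name LIKE 'checkpoint%';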

> If you think you've got a situation where the current limits are not
> sufficient, the path to argue that would start with showing what you're
> seeing in pg_stat_bgwriter. I can imagine some results from there on a
> system with a very high rate of I/O available that would suggest the current
> limits are too small. I've never come close to actually seeing such results
> in the real world though, and if you're not already monitoring those numbers
> on a real system I'd suggest you start there rather than presuming there's a
> design limitation here.
>

On a nearly idle database with occasional performance tests:

-- NULLIF guards against division by zero on a freshly reset counter.
SELECT
  buffers_checkpoint / NULLIF(buffers_clean, 0) AS checkpoint_spike,
  ROUND(100.0 * buffers_checkpoint /
        (buffers_checkpoint + buffers_clean + buffers_backend), 2)
    AS checkpoint_percentage,
  ROUND(100.0 * buffers_clean /
        (buffers_checkpoint + buffers_clean + buffers_backend), 2)
    AS bgwriter_percentage,
  ROUND(100.0 * buffers_backend /
        (buffers_checkpoint + buffers_clean + buffers_backend), 2)
    AS backend_percentage
FROM pg_stat_bgwriter;

 checkpoint_spike | checkpoint_percentage | bgwriter_percentage | backend_percentage
------------------+-----------------------+---------------------+--------------------
               31 |                 90.58 |                2.92 |               6.50

So flushing typically happens at checkpoint time. And 6.5% of all blocks
were written to disk by the backends themselves, which IMHO says: the
bgwriter is too slow, so the backends have to do the work instead.

So I'd like to do some tests with fresh statistics. Is there a fast way
to reset the statistics for all databases in pg_stat_bgwriter?
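
(A sketch of what I have in mind, assuming a server new enough to provide
pg_stat_reset_shared(); as far as I know the per-database pg_stat_reset()
does not touch these cluster-wide counters:)

-- Reset the cluster-wide counters shown in pg_stat_bgwriter:
SELECT pg_stat_reset_shared('bgwriter');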

Thanks.

Ciao,
Gerhard
