Re: Bgwriter LRU cleaning: we've been going at this all wrong

From: Jim Nasby <decibel(at)decibel(dot)org>
To: Greg Smith <gsmith(at)gregsmith(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Bgwriter LRU cleaning: we've been going at this all wrong
Date: 2007-06-29 13:13:11
Message-ID: 3852A1F4-459A-4FAF-8897-400EF02692D1@decibel.org
Lists: pgsql-hackers

On Jun 28, 2007, at 7:55 AM, Greg Smith wrote:
> On Thu, 28 Jun 2007, ITAGAKI Takahiro wrote:
>> Do you need to increase shared_buffers in such case?
>
> If you have something going wild creating dirty buffers with a high
> usage count faster than they are being written to disk, increasing
> the size of the shared_buffers cache can just make the problem
> worse--now you have an ever-bigger pile of dirty mess to shovel at
> checkpoint time. The existing background writers are particularly
> unsuited to helping out in this situation; I think the new planned
> implementation will be much better.

Is this still a serious issue with LDC (load distributed checkpoints)?
I share Greg Stark's concern that we're going to end up wasting a lot
of writes.
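
To make the "wasted write" concern concrete, here is a deliberately
simplified, standalone sketch of the clock-sweep idea. The struct and
the toy scan below are invented for illustration, not the real
bufmgr.c code: a dirty buffer that keeps getting touched never drops
to a usage count of zero, so the LRU scan keeps skipping it and it
sits there until checkpoint time.

    /* Toy model of a clock-sweep scan; not PostgreSQL's actual code. */
    #include <stdbool.h>
    #include <stdio.h>

    #define NBUFFERS 8

    typedef struct
    {
        int  usage_count;   /* bumped on each access, capped (at 5 in PG) */
        bool is_dirty;
    } BufDesc;

    /*
     * One bounded toy pass: a buffer with usage_count > 0 is merely
     * decremented and skipped, so a dirty buffer that is touched often
     * enough never falls to zero and is never written by this scan --
     * it waits for the checkpoint instead.
     */
    static int
    sweep_for_victim(BufDesc *buf, int *hand)
    {
        int tries;

        for (tries = 0; tries < 2 * NBUFFERS; tries++)
        {
            BufDesc *b = &buf[*hand];

            *hand = (*hand + 1) % NBUFFERS;
            if (b->usage_count > 0)
                b->usage_count--;           /* hot: skip it this pass */
            else
                return (int) (b - buf);     /* cold: candidate to clean */
        }
        return -1;                          /* gave up: everything is hot */
    }

    int
    main(void)
    {
        BufDesc buf[NBUFFERS] = {
            {5, true}, {0, false}, {5, true}, {1, true},
            {0, true}, {5, true}, {2, false}, {0, false}
        };
        int     hand = 0;

        printf("first victim: buffer %d\n", sweep_for_victim(buf, &hand));
        return 0;
    }

Grow shared_buffers and you mostly grow the population of those
perpetually-hot dirty buffers that only the checkpoint ever flushes,
which is exactly Greg's point above.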

Perhaps part of the problem is that we're using a single count to
track buffer usage; maybe we need separate counts for reads vs.
writes?
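
Very roughly, something like this -- to be clear, none of these names
exist in the backend; it's just a strawman to show what a split count
would let the cleaning scan decide:

    /* Hypothetical sketch only; read_usage/write_usage are invented names. */
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct
    {
        int  read_usage;    /* bumped when the page is read/pinned */
        int  write_usage;   /* bumped when the page is dirtied */
        bool is_dirty;
    } SplitCountBuffer;

    /*
     * Strawman policy: a buffer that is read a lot but rarely re-dirtied
     * can be flushed early and will stay clean; one that is re-dirtied
     * constantly would just get dirty again, so writing it early is the
     * "wasted write" case.
     */
    static bool
    worth_writing_early(const SplitCountBuffer *b)
    {
        return b->is_dirty && b->read_usage > 0 && b->write_usage == 0;
    }

    int
    main(void)
    {
        SplitCountBuffer hot_read  = {5, 0, true};
        SplitCountBuffer hot_write = {0, 5, true};

        printf("hot-read buffer:  write early? %d\n", worth_writing_early(&hot_read));
        printf("hot-write buffer: write early? %d\n", worth_writing_early(&hot_write));
        return 0;
    }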
--
Jim Nasby jim(at)nasby(dot)net
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)
