Re: Background Writer and performances

From: Jan Wieck <JanWieck(at)Yahoo(dot)com>
To: Martijn van Oosterhout <kleptog(at)svana(dot)org>
Cc: DANTE Alexandra <Alexandra(dot)Dante(at)bull(dot)net>, pgsql-general(at)postgresql(dot)org
Subject: Re: Background Writer and performances
Date: 2006-07-18 10:24:01
Message-ID: 44BCB6C1.5090400@Yahoo.com
Lists: pgsql-general

On 7/10/2006 9:49 AM, Martijn van Oosterhout wrote:
> On Mon, Jul 10, 2006 at 02:56:48PM +0200, DANTE Alexandra wrote:
>> **********************************
>> I would like to send charts to show you exactly what happens on the
>> server but, with the pictures, this e-mail is not posted on the mailing
>> list.
>> I can send charts to a personal e-mail address if needed.
>> **********************************
>
> The best idea is to upload them to a website.
>
>> By comparing the charts, I can see that the checkpoints are less
>> expensive in terms of disk activity, IO/s and disk write throughput when
>> the parameters are set to the maximum values, but I don't manage to get
>> constant disk IO/s, disk activity and disk write throughput before and
>> after a checkpoint. I was expecting to see more activity on the disks
>> during the bench (and not only a peak during the checkpoint) when the
>> parameters are set to the maximum values. Is it possible?
>
> I have very little experience with the bgwriter, but on the whole, I
> don't think the bgwriter will change the total number of I/Os. Rather,
> it changes the timing to make them more consistent and the load more
> even.

The bgwriter can only "increase" the total amount of IO. What it does is
write dirty pages out before a checkpoint, or a backend evicting the
buffer, has to do it. If a later update then hits the same buffer, the
bgwriter's write was an extra one: without it, the update would simply
have landed on a still-dirty buffer and only one write would have been
needed. The upside of this increased write activity is that it happens
all the time, spread out between checkpoints, and that it keeps large
buffer cache configurations from accumulating tens of thousands of dirty
buffers.
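To make the bgwriter sweep more aggressively, you tune its GUCs. Something along these lines (8.1-style parameter names; the numbers are purely illustrative starting points I'm making up here, not a recommendation for any particular workload):

```
# postgresql.conf -- background writer settings (illustrative values)
bgwriter_delay = 200            # ms between bgwriter rounds
bgwriter_lru_percent = 5.0      # % of the LRU tail scanned per round
bgwriter_lru_maxpages = 25      # max dirty LRU pages written per round
bgwriter_all_percent = 1.0      # % of the whole pool scanned per round
bgwriter_all_maxpages = 10      # max dirty pages written per round
```

Raising the percentages and maxpages makes checkpoints cheaper at the cost of more total writes, for the reason above.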

The latter is a typical problem with OLTP benchmarks that are designed
to model real-world behaviour, like TPC-C and TPC-W. In those
benchmarks, hundreds or thousands of simulated users step through the
dialog screens of an application, and just like real users they don't
fill in a form in milliseconds and slam the submit button as fast as
possible; they need a bit of time to "think" or "type". In that
scenario, the performance drop caused by a checkpoint lets more and more
"users" finish their think/type phase and submit the next transaction
(dialog step), causing a larger and larger number of concurrent DB
requests and sending the DB server into a downward spiral.
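To illustrate the effect (a toy closed-loop model I'm sketching here, not BenchmarkSQL or a real TPC driver; all the numbers are made up): users think for a fixed time, then submit; the server completes a fixed number of transactions per tick, except while a "checkpoint" stalls it. Watch the request queue.

```python
N_USERS = 100      # simulated users in the closed loop
THINK = 10         # ticks of think/type time between dialog steps
CAPACITY = 12      # transactions the server can complete per tick

def simulate(ticks=100, stall=()):
    """Return the peak number of queued requests over the run."""
    # stagger the users so arrivals are spread evenly from the start
    thinking = [THINK * i // N_USERS for i in range(N_USERS)]
    queue = peak = 0
    for t in range(ticks):
        # users whose think time is up submit their next transaction
        arrivals = sum(1 for r in thinking if r == 0)
        thinking = [r - 1 for r in thinking if r > 0]
        queue += arrivals
        # the server processes nothing while the "checkpoint" stalls it
        done = 0 if t in stall else min(queue, CAPACITY)
        queue -= done
        peak = max(peak, queue)
        # completed users go back to thinking/typing
        thinking += [THINK] * done
    return peak

print("peak queue, no checkpoint:", simulate())
print("peak queue, 10-tick checkpoint:", simulate(stall=range(50, 60)))
```

With no stall the queue drains every tick; during the stall the arrivals keep coming, so concurrency piles up to nearly the whole user population, which is exactly the spiral described above.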

The default settings are not sufficient for update-intensive applications.

I am not familiar with BenchmarkSQL, but 9 terminals with a 200
warehouse configuration doesn't sound like it is simulating real user
behaviour as outlined above.

Jan

--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#================================================== JanWieck(at)Yahoo(dot)com #
