Re: Impact of checkpoint_segments under continual load conditions

From: Christopher Petrilli <petrilli(at)gmail(dot)com>
To: PFC <lists(at)boutiquenumerique(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Vivek Khera <vivek(at)khera(dot)org>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Impact of checkpoint_segments under continual load conditions
Date: 2005-07-19 16:34:19
Message-ID: 59d991c405071909341a10143@mail.gmail.com
Lists: pgsql-performance

On 7/19/05, PFC <lists(at)boutiquenumerique(dot)com> wrote:
>
>
> > I think PFC's question was not directed towards modeling your
> > application, but about helping us understand what is going wrong
> > (so we can fix it).
>
> Exactly, I was wondering if this delay would allow things to get flushed,
> for instance, which would give information about the problem (if giving it
> a few minutes of rest resumed normal operation, it would mean that some
> buffer somewhere is getting filled faster than it can be flushed).
>
> So, go ahead with a few minutes even if it's unrealistic, that is not the
> point, you have to tweak it in various possible manners to understand the
> causes.

Totally understand, and I apologize if I sounded dismissive. I
definitely appreciate the insight and input.

> And instead of a pause, why not just set the duration of your test to
> 6000 iterations and run it two times without dropping the test table ?

This I can do. I'll probably set it to 5,000 iterations for the first
run, and then start the second. In non-benchmark experience, however, this
didn't seem to make much difference.
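
Roughly, the shape of that is the following; the table name and schema
here are just stand-ins for the real test table, and the bulk
INSERT ... SELECT stands in for the harness's one-row-per-iteration inserts:

  -- hypothetical stand-in for the real test table
  CREATE TABLE bench_events (
      id      serial PRIMARY KEY,
      payload text
  );

  -- first run: ~5,000 iterations
  INSERT INTO bench_events (payload)
      SELECT 'row ' || g FROM generate_series(1, 5000) AS g;

  -- no DROP or TRUNCATE in between; the second run starts
  -- against the already-populated table
  INSERT INTO bench_events (payload)
      SELECT 'row ' || g FROM generate_series(5001, 10000) AS g;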

> I'm going into wild guesses, but first you'll want to know if the
> problem is because the table is big, or if it's something else. So you run
> the complete test, stopping a bit after it starts to make a mess, then
> instead of dumping the table and restarting the test anew, you leave it as
> it is, do something, then run a new test, but on this table which already
> has data.
>
> 'something' could be one of those:
> disconnect, reconnect (well you'll have to do that if you run the test
> twice anyway)
> just wait
> restart postgres
> unmount and remount the volume with the logs/data on it
> reboot the machine
> analyze
> vacuum
> vacuum analyze
> cluster
> vacuum full
> reindex
> defrag your files on disk (stopping postgres and copying the database
> from your disk to another one and back will do)
> or even dump'n'reload the whole database
>
> I think useful information can be extracted that way. If one of these
> fixes your problem it'll give hints.
>

This could take a while :-)
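
For reference, the SQL-level steps on that list map roughly to the
commands below; the table and index names are hypothetical, CLUSTER is
shown in its 8.0-era syntax, and the disconnect/restart/remount/reboot/
defrag steps of course happen outside SQL:

  ANALYZE bench_events;
  VACUUM bench_events;
  VACUUM ANALYZE bench_events;
  CLUSTER bench_events_pkey ON bench_events;  -- recluster on the PK index
  VACUUM FULL bench_events;
  REINDEX TABLE bench_events;

Re-running the 5,000-iteration test after each one of these, one at a
time, should help narrow down whether the slowdown tracks table size,
index bloat, stale statistics, or on-disk layout.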

Chris
--
| Christopher Petrilli
| petrilli(at)gmail(dot)com
