Re: Analyse without locking?

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Richard Neill <rn214(at)cam(dot)ac(dot)uk>
Cc: PostgreSQL Performance <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Analyse without locking?
Date: 2009-11-28 19:21:06
Message-ID: 29630.1259436066@sss.pgh.pa.us
Lists: pgsql-performance

Richard Neill <rn214(at)cam(dot)ac(dot)uk> writes:
> Now, I understand that increasing checkpoint_segments is generally a
> good thing (subject to some limit), but doesn't that just mean that
> instead of say a 1 second outage every minute, it's a 10 second outage
> every 10 minutes?

In recent PG versions you can spread the checkpoint I/O out over a
period of time, so it shouldn't be an "outage" at all, just background
load. Other things being equal, a longer checkpoint cycle is better
since it improves the odds of being able to coalesce multiple changes
to the same page into a single write. The limiting factor is your
threshold of pain on how much WAL-replay work would be needed to recover
after a crash.
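
For illustration, the knobs involved look roughly like this in postgresql.conf
(the numbers are only examples, not recommendations; checkpoint_completion_target
is what spreads the checkpoint writes out, and it's available in 8.3 and later):

    checkpoint_segments = 32            # default 3; higher = less frequent checkpoints
    checkpoint_timeout = 15min          # default 5min
    checkpoint_completion_target = 0.7  # spread the checkpoint I/O over ~70% of the interval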

> Is it possible (or even sensible) to do a manual vacuum analyze with
> nice/ionice?

There's no support for that in PG. You could try manually renice'ing
the backend that's running your VACUUM but I'm not sure how well it
would work; there are a number of reasons why it might be
counterproductive. Fooling with the vacuum_cost_delay parameters is the
recommended way to make a vacuum run slower and use less of the machine.
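
If it's a one-off manual VACUUM you want to throttle, you can set those parameters
just for your session before running it, for instance (the values and the table
name below are only placeholders):

    -- cost-based throttling is off by default for manual vacuums (vacuum_cost_delay = 0)
    SET vacuum_cost_delay = 20;     -- sleep 20ms each time the cost limit is reached
    SET vacuum_cost_limit = 200;    -- the default cost budget between sleeps
    VACUUM ANALYZE some_table;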

regards, tom lane
