Re: Out of Memory - 8.2.4

From: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
To: Marko Kreen <markokr(at)gmail(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Jeff Amiel <becauseimjeff(at)yahoo(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: Out of Memory - 8.2.4
Date: 2007-08-30 13:25:37
Message-ID: 20070830132537.GD5872@alvh.no-ip.org
Lists: pgsql-general

Marko Kreen wrote:
> On 8/29/07, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> > Alvaro Herrera <alvherre(at)commandprompt(dot)com> writes:
> > > I'm not having much luck really. I think the problem is that ANALYZE
> > > stores reltuples as the number of live tuples, so if you delete a big
> > > portion of a big table, then ANALYZE and then VACUUM, there's a huge
> > > misestimation and extra index cleanup passes happen, which is a bad
> > > thing.
> >
> > Yeah ... so just go with a constant estimate of say 200 deletable tuples
> > per page?
>
> Note that it's much better to err on the smaller values.
>
> Extra index pass is really no problem.

Hmm, is it? If you have a really big table (say, a hundred million
tuples) and two indexes, you are not happy when vacuum must make two
passes over the indexes. With vacuum delay it may mean vacuum taking
five hours instead of three. Remember, each extra pass scans each
index *completely*.
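
To put rough numbers on it: lazy vacuum remembers dead-tuple TIDs at
six bytes apiece, capped by maintenance_work_mem, and makes one
cleanup pass over every index each time that array fills up.
Back-of-the-envelope (the 128MB / 100M figures below are only an
example):

/* Estimate index-cleanup passes for one VACUUM.  Assumes 6 bytes per
 * dead-tuple TID (sizeof(ItemPointerData)) and that the whole of
 * maintenance_work_mem goes to the TID array. */
#include <stdio.h>

int
main(void)
{
    long    maintenance_work_mem = 128L * 1024 * 1024;  /* 128 MB */
    long    dead_tuples = 100L * 1000 * 1000;           /* 100M deleted rows */
    long    tids_per_pass = maintenance_work_mem / 6;   /* ~22.3M TIDs */
    long    passes = (dead_tuples + tids_per_pass - 1) / tids_per_pass;

    printf("index cleanup passes: %ld\n", passes);      /* prints 5 */
    return 0;
}

So even a fairly generous 128MB setting means five complete scans of
every index on the table for that delete.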

> VACUUM getting "Out of memory" may not sound like a big problem, but
> the scary thing is - the last VACUUM's memory request may succeed and
> that means following queries start failing and that is big problem.

Maybe what we should do is spill the TID list to disk instead. TODO for
8.4?
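
Something along these lines, as a sketch only (the struct, file
format, and function names here are invented; the real code would
have to plug into the index AMs' bulk-delete callback):

/* Hypothetical spill-to-disk dead-TID list.  The heap is scanned
 * front to back, so TIDs arrive (and are stored) already sorted. */
#include <stdio.h>
#include <stdint.h>

typedef struct SpillTid
{
    uint32_t    block;      /* heap block number */
    uint16_t    offset;     /* line pointer within the block */
} SpillTid;

/* Heap-scan phase: append each dead TID as it is found. */
void
spill_tid(FILE *spill, uint32_t block, uint16_t offset)
{
    SpillTid    t = {block, offset};

    fwrite(&t, sizeof(t), 1, spill);
}

/* Index-cleanup phase: the callback asks "is this TID dead?" for
 * every index entry, so binary-search the sorted file with fseek. */
int
tid_is_dead(FILE *spill, long ntids, SpillTid key)
{
    long    lo = 0,
            hi = ntids - 1;

    while (lo <= hi)
    {
        long        mid = lo + (hi - lo) / 2;
        SpillTid    t;

        fseek(spill, mid * (long) sizeof(SpillTid), SEEK_SET);
        if (fread(&t, sizeof(t), 1, spill) != 1)
            return 0;
        if (t.block == key.block && t.offset == key.offset)
            return 1;
        if (t.block < key.block ||
            (t.block == key.block && t.offset < key.offset))
            lo = mid + 1;
        else
            hi = mid - 1;
    }
    return 0;
}

The cost is one random probe into the spill file per index tuple (a
small cache over the file would soften that), but it keeps the
single-pass-over-each-index property no matter how many tuples die.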

--
Alvaro Herrera                              http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.
