Re: Out of Memory - 8.2.4

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
Cc: Marko Kreen <markokr(at)gmail(dot)com>, Jeff Amiel <becauseimjeff(at)yahoo(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: Out of Memory - 8.2.4
Date: 2007-08-30 13:39:52
Message-ID: 26731.1188481192@sss.pgh.pa.us
Lists: pgsql-general

Alvaro Herrera <alvherre(at)commandprompt(dot)com> writes:
> Tom Lane escribió:
>> Yeah ... so just go with a constant estimate of say 200 deletable tuples
>> per page?

> How about we use a constant estimate using the average tuple width code?

I think that's overthinking the problem. The point here is mostly for
vacuum to not consume 512MB (or whatever you have maintenance_work_mem
set to) when vacuuming a ten-page table. I think that if we
significantly increase the risk of having to make multiple index passes
on medium-size tables, we'll not be doing anyone any favors.

If we went with allocating MaxHeapTuplesPerPage slots per page (292 in
CVS HEAD), 512MB would correspond to a bit over 300,000 pages, and you'd
get memory savings for anything less than that. But that's already a
2GB table --- do you want to risk multiple index passes because you were
chintzy with your memory allocation?
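The arithmetic above can be checked with a quick back-of-envelope sketch (assuming the usual 6-byte ItemPointerData per dead-tuple slot and 8 kB pages; the constants are from this thread, not a general-purpose formula):

```python
# Reproduce the figures quoted above: how many heap pages a 512MB
# maintenance_work_mem covers if vacuum reserves MaxHeapTuplesPerPage
# dead-tuple slots (292 in CVS HEAD) for every page.
MAINTENANCE_WORK_MEM = 512 * 1024 * 1024   # bytes
SLOTS_PER_PAGE = 292                        # MaxHeapTuplesPerPage (CVS HEAD)
TID_SIZE = 6                                # sizeof(ItemPointerData), bytes
BLCKSZ = 8192                               # default page size, bytes

bytes_per_page = SLOTS_PER_PAGE * TID_SIZE              # 1752 bytes/page
pages_covered = MAINTENANCE_WORK_MEM // bytes_per_page  # ~306,000 pages
table_size_gb = pages_covered * BLCKSZ / 2**30          # ~2.3 GB table

print(pages_covered)
print(round(table_size_gb, 2))
```

This lands at a bit over 300,000 pages, i.e. roughly the 2GB table mentioned above: below that size the per-page allocation saves memory, above it the cap starts to bite.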

Ultimately, the answer for a DBA who sees "out of memory" a lot is to
reduce his maintenance_work_mem. I don't think VACUUM should be trying
to substitute for the DBA's judgment.

BTW, if an autovac worker gets an elog(ERROR) on one table, does it die
or continue on with the next table?

regards, tom lane
