| From: | Shelby Cain <alyandon(at)yahoo(dot)com> |
|---|---|
| To: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
| Cc: | pgsql-general(at)postgresql(dot)org |
| Subject: | Re: Memory usage during vacuum |
| Date: | 2004-03-25 17:08:31 |
| Message-ID: | 20040325170831.72443.qmail@web41605.mail.yahoo.com |
| Lists: | pgsql-general |
I apologize for my original post being unclear. I'm
running "vacuum analyze" and seeing the behavior
mentioned. Does specifying the analyze option imply
"vacuum full"?
On a hunch I just ran analyze <really big table> and
the backend's memory usage soared up to 100+ megs. I
suspect that means it isn't the vacuum but the analyze
that is eating all my precious ram. :)
Any tips on minimizing the memory footprint during
analyze (i.e., backing off the 300 setting that I'm
currently using), or is this just something I'll have
to live with?
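Assuming the 300 I mentioned is the statistics target that analyze
works from, I'd guess backing it off looks something like this
(table and column names made up):

    -- lower the default target for future ANALYZE runs in this session
    SET default_statistics_target = 100;
    ANALYZE some_big_table;

    -- or back it off only for particular columns on the big table
    ALTER TABLE some_big_table ALTER COLUMN some_column SET STATISTICS 100;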
Regards,
Shelby Cain
--- Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>
> Don't use VACUUM FULL. The vacuum_mem setting only limits the space
> consumed by plain VACUUM --- VACUUM FULL needs to keep track of all the
> free space in the table, and will eat as much memory as it has to to do
> that.
>
> regards, tom lane
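For the archives: if I follow Tom correctly, vacuum_mem only caps
plain VACUUM, and it can be adjusted per session or in postgresql.conf,
something like this (value is in kilobytes, picked arbitrarily):

    -- per-session
    SET vacuum_mem = 65536;

    -- or in postgresql.conf:
    -- vacuum_mem = 65536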