
Re: An out of memory error when doing a vacuum full

From: Sean Shanny <shannyconsulting(at)earthlink(dot)net>
To: pgsql-general(at)postgresql(dot)org
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Subject: Re: An out of memory error when doing a vacuum full
Date: 2003-12-29 19:09:48
Message-ID: 3FF07BFC.8080703@earthlink.net
Lists: pgsql-general
Tom,

I will lower the sort_mem and see what happens. :-)
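
For the record, here is roughly what I plan to try; the 8192 figure is
just a guess at a more conservative value, not something we have tested:

    -- In postgresql.conf (needs a reload):
    --   sort_mem = 8192              # KB per sort/hash operation
    -- Or per session, before the big queries run:
    SET sort_mem TO 8192;  -- placeholder value, not tested
    SHOW sort_mem;         -- confirm the new setting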

I agree that we probably pushed the limits of a VACUUM FULL given the 
size of the table and the large percentage of rows that changed.  We 
did NOT run any vacuum jobs during the update process; doing so may 
have allowed the updates to reuse the reclaimed space in place.
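
For completeness, this is the sort of thing we could have run between 
update batches; a sketch only, we have not actually tried it on this table:

    VACUUM VERBOSE f_pageviews;  -- plain VACUUM marks dead-row space reusable
                                 -- without the exclusive lock VACUUM FULL needs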

We are going to keep running into this issue, as the table is only 
going to get larger and we are still making tweaks.  An example would be 
adding a new column to the f_pageviews table; we have done this several 
times as we discover new data that needs to be captured.  That entails 
an ALTER TABLE ADD COLUMN, then an ALTER TABLE ALTER COLUMN to set the 
default, then an UPDATE to set the 'historical' rows to the default 
value, and finally a SET NOT NULL on the column, since none of our data 
is allowed to be null.  This pretty much means updating every row in 
the table.
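
For reference, the sequence looks roughly like the following; the column 
name 'new_flag' and its default of 0 are made up for illustration:

    -- Illustrative only; 'new_flag' and the default are placeholders.
    ALTER TABLE f_pageviews ADD COLUMN new_flag integer;
    ALTER TABLE f_pageviews ALTER COLUMN new_flag SET DEFAULT 0;
    UPDATE f_pageviews SET new_flag = 0;   -- backfill the historical rows
    ALTER TABLE f_pageviews ALTER COLUMN new_flag SET NOT NULL;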

We are thinking that instead of doing an UPDATE it would be better to 
create a new temp table, run the code that contains the update logic 
but write both the updated and non-updated rows to the temp table 
rather than updating the real table, and then do a drop and rename.  
Does that seem reasonable?
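
Something along these lines is what we have in mind; the new table name 
is a placeholder and the real transformation would be done by our load 
code, so treat this as a sketch:

    BEGIN;
    CREATE TABLE f_pageviews_new AS
        SELECT *              -- plus whatever changes the update logic would make
        FROM f_pageviews;
    DROP TABLE f_pageviews;
    ALTER TABLE f_pageviews_new RENAME TO f_pageviews;
    COMMIT;
    -- Indexes, constraints, and grants on the old table would need to be recreated.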

Thanks again for your time.

--sean

p.s.

By the way, I am unable to send mail to you directly.  I get these 
errors (I assume you just don't want email from earthlink?):

 tgl(at)sss(dot)pgh(dot)pa(dot)us
    SMTP error from remote mailer after MAIL FROM:<shannyconsulting(at)earthlink(dot)net>:
    host sss.pgh.pa.us [192.204.191.242]: 550 5.0.0 If you would like to talk to me, find a more responsible ISP than earthlink




Tom Lane wrote:

>Sean Shanny <shannyconsulting(at)earthlink(dot)net> writes:
>
>>sort_mem = 64000                # min 64, size in KB
>
>You might want to lower that; a complex query could easily use several
>times sort_mem.  Whether this is the immediate source of your problem
>with the other query is hard to tell.
>
>>vacuum_mem = 32767              # min 1024, size in KB
>
>That seems all right, but I recollect now that it only applies to plain
>VACUUM not VACUUM FULL.  VACUUM FULL needs to keep track of *all* the
>free space in a table, and so it's certainly possible that vacuuming a
>huge table with many dead tuples could require lots of memory.  I can't
>recall anyone else ever complaining about VACUUM FULL running out of
>memory, though, so there may be some other contributing factor in your
>situation.  Too bad you reloaded the table --- it would be interesting
>to see if increasing your 512Mb datasize ulimit would have allowed the
>VACUUM FULL to complete.  (Not but what it would've taken forever :-()
>
>			regards, tom lane

