
Re: Getting an out of memory failure.... (long email)

From: Gaetano Mendola <mendola(at)bigfoot(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Getting an out of memory failure.... (long email)
Date: 2004-09-28 15:20:00
Message-ID: cjbvfd$mbe$1@floppy.pyrenet.fr
Lists: pgsql-general
Sean Shanny wrote:
> Tom,
> 
> The Analyze did in fact fix the issue.  Thanks.
> 
> --sean

Given that you are using pg_autovacuum, you should consider a few points:

1) There is a buggy version out there that will not analyze big tables.
2) pg_autovacuum can fail in scenarios where big tables are not heavily
    updated or inserted into.

For point 1), I suggest checking your logs to see how the total number of rows
in your tables is displayed; a correct version shows the row count as a float:
     [2004-09-28 17:10:47 CEST]   table name: empdb."public"."user_logs"
     [2004-09-28 17:10:47 CEST]      relid: 17220;   relisshared: 0
     [2004-09-28 17:10:47 CEST]      reltuples: 5579780.000000;  relpages: 69465
     [2004-09-28 17:10:47 CEST]      curr_analyze_count: 171003; curr_vacuum_count: 0
     [2004-09-28 17:10:47 CEST]      last_analyze_count: 165949; last_vacuum_count: 0
     [2004-09-28 17:10:47 CEST]      analyze_threshold: 4464024; vacuum_threshold: 2790190
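
As a cross-check, you can also look at the planner statistics stored in
pg_class directly; a minimal sketch, using the user_logs table from the log
excerpt above as an example:

     -- show the row and page estimates that the last ANALYZE recorded
     SELECT relname, reltuples, relpages
       FROM pg_class
      WHERE relname = 'user_logs';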

For point 2), I suggest running ANALYZE from cron during the day.
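
For example, a minimal crontab sketch, assuming the empdb database from the
log above and a user that can connect non-interactively with psql (the
six-hour schedule is only an illustration):

     # run ANALYZE on empdb every six hours
     0 */6 * * *   psql -d empdb -c "ANALYZE"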



Regards
Gaetano Mendola

