Re: out of memory error

From: Vincent Dautremont <vincent(at)searidgetech(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: out of memory error
Date: 2012-05-22 18:04:39
Message-ID: CAA4Vp48tDW5vRA3t=WqEJzwYfDY1wmZ=YwDgPkaZ8ktMwUsUaQ@mail.gmail.com
Lists: pgsql-admin

Well,
I think I'm using the database for pretty basic stuff.
It's mostly used through stored procedures to update/insert/select a row of
each table.
On 3 tables (fewer than 10 rows each), clients do updates/selects at 2 Hz
to keep pseudo real-time data up to date.
I have a total of 6 clients on the DB; they all access it through stored
procedures (a rough sketch of what they look like is below).
I would say this is light usage of the DB.
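
For illustration, the stored procedures are roughly of this shape (the
function, table, and column names here are placeholders, not the real ones):

------------------------
-- Hypothetical sketch of the update pattern described above;
-- the real names and columns differ.
CREATE OR REPLACE FUNCTION update_track(p_id integer, p_value text)
RETURNS void AS $$
BEGIN
    UPDATE track_data
       SET value = p_value,
           updated_at = now()
     WHERE id = p_id;
END;
$$ LANGUAGE plpgsql;

-- Each client then calls it about twice per second:
SELECT update_track(1, 'new value');
------------------------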

Then I have rubyrep 1.2.0 running to sync a backup of the DB.
It syncs 8 tables: 7 of them don't really change often, and 1 is one of
the pseudo real-time ones.

So everything works fine, but I see the postgres process's memory usage
growing; it can reach about 1 GB to 1.5 GB in about 5 to 6 days, and then
I suppose it crashes.

I don't know a lot about PostgreSQL; what does TopMemoryContext mean?
I haven't been able to find much information about it.
Is it like a global memory stack for the DB?

I've tried using the default postgresql.conf file; I usually run this
modified one with these values:
------------------------
"version";"PostgreSQL 8.3.15, compiled by Visual C++ build 1400"
"autovacuum";"on"
"autovacuum_analyze_threshold";"50"
"autovacuum_max_workers";"3"
"autovacuum_naptime";"1min"
"autovacuum_vacuum_threshold";"50"
"client_encoding";"UNICODE"
"deadlock_timeout";"5s"
"effective_cache_size";"2GB"
"lc_collate";"English_United States.1252"
"lc_ctype";"English_United States.1252"
"listen_addresses";"*"
"log_destination";"stderr"
"log_filename";"postgresql-%Y-%m-%d.log"
"log_line_prefix";"%t "
"log_rotation_age";"1d"
"log_truncate_on_rotation";"on"
"logging_collector";"on"
"max_connections";"100"
"max_fsm_pages";"204800"
"max_stack_depth";"2MB"
"port";"5432"
"server_encoding";"UTF8"
"shared_buffers";"512MB"
"shared_preload_libraries";"$libdir/plugins/plugin_debugger.dll"
"TimeZone";"Africa/Casablanca"
"track_counts";"on"
------------------------
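(For reference, a list like the one above can be pulled from pg_settings;
something along these lines shows the values that differ from the defaults:)

------------------------
-- Show settings that differ from their built-in defaults (works on 8.3):
SELECT name, setting
  FROM pg_settings
 WHERE source <> 'default'
 ORDER BY name;
------------------------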
Thanks,
Vincent

On Tue, May 22, 2012 at 1:33 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:

> Vincent Dautremont <vincent(at)searidgetech(dot)com> writes:
> > after a few days, I'm seeing the following logs in a database
> > (postgresql 8.3.15 on Windows) running with rubyrep 1.2.0 for syncing
> > a few small tables that have frequent update/insert/delete.
> > I don't understand it and I'd like to know what happens and why, and
> > how to get rid of it.
>
> Well, this is a bit unusual:
>
> > TopMemoryContext: 197681816 total in 24131 blocks; 394104 free (24179
> > chunks); 197287712 used
>
> It's not normal to have so much stuff in TopMemoryContext. That could
> represent a leak, but it's also possible that that's just management
> overhead for all of these contexts:
>
> > SPI Plan: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
> > CachedPlan: 1024 total in 1 blocks; 248 free (0 chunks); 776 used
> > CachedPlanSource: 1024 total in 1 blocks; 96 free (0 chunks); 928 used
> > SPI Plan: 1024 total in 1 blocks; 928 free (0 chunks); 96 used
>
> > .
> > .
> > . it goes on like that for about 80 MB of log, then it ends
> > .
> > .
> > SPI Plan: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
> > CachedPlan: 1024 total in 1 blocks; 248 free (0 chunks); 776 used
> > CachedPlanSource: 1024 total in 1 blocks; 96 free (0 chunks); 928 used
>
> That's a *heck* of a lot of SPI plans. You haven't provided any
> information that would let anybody diagnose why you have so many,
> though. Tell us about your usage patterns, especially of PL-language
> functions.
>
> regards, tom lane
>
