Re: Postgres backend using huge amounts of ram

From: Richard Huxton <dev@archonet.com>
To: Gary Doades <gpd@gpdnet.co.uk>
Cc: pgsql-performance@postgresql.org
Subject: Re: Postgres backend using huge amounts of ram
Date: 2004-11-26 09:12:15
Message-ID: 41A6F36F.80805@archonet.com
Lists: pgsql-performance

Gary Doades wrote:
> How much RAM can a single postgres backend use?
>
> I've just loaded a moderately sized dataset into postgres and was
> applying RI constraints to the tables (using pgadmin on windows). Part
> way through I noticed the (single) postgres backend had shot up to using
> 300+ MB of my RAM!

Oops - guess that's why they call it a Beta. My first guess was a queue
of pending foreign-key checks, triggers and the like.
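
For reference, I'm picturing the RI step as something like this (table
and column names guessed from your query below) - adding the constraint
has to scan the whole referencing table to validate the existing rows:

   ALTER TABLE booking_plan
     ADD FOREIGN KEY (reqt_date_id)
     REFERENCES reqt_dates (reqt_date_id);

But then you go on to say...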

> Since I can't get an explain of what the alter table was doing I used this:
>
> select count(*) from booking_plan,reqt_dates where
> booking_plan.reqt_date_id = reqt_dates.reqt_date_id
>
> and sure enough this query caused the backend to use 300M RAM. The plan
> for this was:
[snip]
> I then analysed the database. Note, there are no indexes at this stage
> except the primary keys.
>
> the same query then gave:
[snip]

> This is the same set of hash joins, BUT the backend only used 30M of
> private RAM.

I'm guessing that in the first case the default estimate of 1000 rows
in a table means PG thinks the join will fit comfortably in RAM, so it
builds the whole thing there. Once it knows there are actually a lot of
rows it can tell that it shouldn't.
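
You can compare the planner's estimates directly with a plain EXPLAIN
(it doesn't execute the query) before and after ANALYZE - the estimated
row counts should jump from the defaults to something realistic:

   EXPLAIN SELECT count(*)
     FROM booking_plan, reqt_dates
    WHERE booking_plan.reqt_date_id = reqt_dates.reqt_date_id;

   ANALYZE booking_plan;
   ANALYZE reqt_dates;

   EXPLAIN SELECT count(*)
     FROM booking_plan, reqt_dates
    WHERE booking_plan.reqt_date_id = reqt_dates.reqt_date_id;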

However, I thought PG was supposed to spill to disk when the memory
required exceeded config-file limits. If you could reproduce a simple
test case I'm sure someone would be interested in squashing this bug.
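
Something along these lines might do as a starting point - an untested
sketch, with invented row counts, and it assumes you're on the 8.0 beta
so generate_series() is available:

   CREATE TABLE reqt_dates (reqt_date_id int PRIMARY KEY);
   CREATE TABLE booking_plan (booking_plan_id int PRIMARY KEY,
                              reqt_date_id    int);

   INSERT INTO reqt_dates
        SELECT generate_series(1, 500000);
   INSERT INTO booking_plan
        SELECT i, (i % 500000) + 1
          FROM generate_series(1, 2000000) AS i;

   -- deliberately no ANALYZE: run while the stats are still defaults
   SELECT count(*)
     FROM booking_plan, reqt_dates
    WHERE booking_plan.reqt_date_id = reqt_dates.reqt_date_id;

If the spill-to-disk logic is working, the backend's memory use during
that last query should stay somewhere near whatever your sort/hash
memory setting is, rather than growing with the table size.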

--
Richard Huxton
Archonet Ltd
