Re: Postgres backend using huge amounts of ram

From: Gary Doades <gpd(at)gpdnet(dot)co(dot)uk>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Postgres backend using huge amounts of ram
Date: 2004-11-26 19:42:50
Message-ID: 41A7873A.7000202@gpdnet.co.uk
Lists: pgsql-performance

Tom Lane wrote:
>
> It's also worth noting that work_mem is temporarily set to
> maintenance_work_mem, which you didn't tell us the value of:
>
It's left at the default (16384).
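For reference, the relevant settings can be inspected and overridden per session from psql. This is a generic sketch, not commands from the original thread; the value is given in kB since unit suffixes were not accepted by SET in that era of PostgreSQL:

```sql
-- Check the settings under discussion (16384 kB = 16MB is the default here)
SHOW maintenance_work_mem;
SHOW work_mem;

-- Raise it for the current session only, e.g. before a bulk index build
SET maintenance_work_mem = 131072;  -- 128MB, hypothetical value
```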

This would be OK if that were all it used for this type of operation.

>
>
> My recollection is that hash join chooses hash table partitions partly
> on the basis of the estimated number of input rows. Since the estimate
> was way off, the actual table size got out of hand a bit :-(

A bit!!

The really worrying bit is that a normal(ish) query also exhibited the
same behaviour. I'm worried that if the stats get out of date enough for
the estimate to be badly off, as in this case, a few backends each trying
to grab this much RAM will grind the server to a halt.
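As a stopgap, keeping the planner's statistics fresh reduces the chance of an estimate going badly off. A minimal sketch (the table name is a placeholder, not one from the original thread):

```sql
-- Refresh planner statistics so row estimates for the hash join stay accurate.
-- "mytable" is hypothetical.
ANALYZE VERBOSE mytable;

-- Compare estimated vs actual row counts to spot stale statistics:
EXPLAIN ANALYZE SELECT ...;
```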

Is this a fixable bug? It seems like a fairly high-priority,
makes-the-server-go-away type of bug to me.

If you need the test data, I could zip the two tables up and send them
somewhere....

Thanks,
Gary.
