Re: can't handle large number of INSERT/UPDATEs

From: Andrew McMillan <andrew(at)catalyst(dot)net(dot)nz>
To: Anjan Dave <adave(at)vantage(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: can't handle large number of INSERT/UPDATEs
Date: 2004-10-26 20:50:50
Message-ID: 1098823850.6440.89.camel@lamb.mcmillan.net.nz
Lists: pgsql-performance

On Mon, 2004-10-25 at 16:53 -0400, Anjan Dave wrote:
> Hi,
>
> I am dealing with an app here that uses pg to handle a few thousand
> concurrent web users. It seems that under heavy load, the INSERT and
> UPDATE statements to one or two specific tables keep queuing up, to
> the count of 150+ (one table has about 432K rows, other has about
> 2.6Million rows), resulting in ‘wait’s for other queries, and then
> everything piles up, with the load average shooting up to 10+.

Hi,

We saw a similar problem here that turned out to be caused by the row
locks that referential integrity checks take on the referenced tables.

In our case the referenced tables had very few rows (i.e. < 10), so the
inserts and updates on the large tables were effectively serialised by
the high contention for locks on those few referenced rows.

We changed our app to implement those referential integrity checks
differently, and performance improved enormously.
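To illustrate the pattern (table and column names here are hypothetical,
not from the original app): in PostgreSQL releases of this era the
foreign-key check effectively runs a SELECT ... FOR UPDATE against the
referenced row, so concurrent writers that reference the same small set
of rows queue up behind each other's locks.

```sql
-- Hypothetical schema: a tiny lookup table referenced by a large one.
CREATE TABLE status (
    id   integer PRIMARY KEY,   -- fewer than 10 rows
    name text NOT NULL
);

CREATE TABLE events (
    id        serial PRIMARY KEY,
    status_id integer NOT NULL REFERENCES status (id),
    payload   text
);

-- The RI trigger for each insert behaves roughly like:
--   SELECT 1 FROM status WHERE id = $1 FOR UPDATE;
-- so two concurrent transactions inserting rows with the same
-- status_id block each other until the first commits.
BEGIN;
INSERT INTO events (status_id, payload) VALUES (1, 'first event');
-- ...any other session inserting status_id = 1 now waits here...
COMMIT;
```

One workaround along the lines described above is to drop the FOREIGN
KEY constraint and enforce the check in the application (or in a trigger
that does not lock the referenced row). Later PostgreSQL releases (8.1
onward) weaken this lock to a shared row lock, which largely removes the
contention.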

Regards,
Andrew.
-------------------------------------------------------------------------
Andrew @ Catalyst .Net .NZ Ltd, PO Box 11-053, Manners St, Wellington
WEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St
DDI: +64(4)803-2201 MOB: +64(272)DEBIAN OFFICE: +64(4)499-2267
Chicken Little was right.
-------------------------------------------------------------------------
