
Re: concurrency performance degradation

From: Stephan Szabo <sszabo(at)megazone23(dot)bigpanda(dot)com>
To: Sheer El-Showk <sheer(at)saraf(dot)com>
Cc: Thomas Lockhart <lockhart(at)fourpalms(dot)org>, <pgsql-general(at)postgresql(dot)org>
Subject: Re: concurrency performance degradation
Date: 2001-10-29 18:03:32
Lists: pgsql-general
On Mon, 29 Oct 2001, Sheer El-Showk wrote:

> Unfortunately the application I was using had so many queries it was
> difficult to EXPLAIN each of them.  I ended up measuring and logging
> the execution time of each query in the application itself, then
> sorting by time and fixing indices on the slow queries.
> I am having another related issue, though (and I'm trying to use EXPLAIN
> VERBOSE to figure it out).  I have a large table with a foreign key into a
> smaller table; I do very frequent inserts into the large table.  I think
> the query optimizer is electing to use a sequential scan on the foreign key
> in the smaller table since it's small, but I think this is degrading
> performance, as locking is worse on concurrent sequential scans (I'm not
> really sure of any of this stuff, by the way).  Does anybody know how to
> check whether this is the case (understanding the output of EXPLAIN VERBOSE
> is non-trivial) and how to force or suggest an index lookup rather than a
> sequential scan?
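
One way to see which plan the planner prefers for the small table is to
toggle sequential scans off for the session and compare; enable_seqscan
is a real PostgreSQL setting, but the table and column names below are
hypothetical, for illustration only:

```sql
-- Hypothetical schema: "big" has a foreign key referencing "small"(id).
EXPLAIN SELECT * FROM small WHERE id = 1;

-- Discourage sequential scans for this session, then compare the plan
-- and its estimated cost against the one above.
SET enable_seqscan = off;
EXPLAIN SELECT * FROM small WHERE id = 1;
SET enable_seqscan = on;
```

Note that on a table occupying only a few pages, a sequential scan is
usually the cheaper plan, so forcing an index scan may not buy anything.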

If you're inserting rows in the large table that frequently reference
the same row(s) in the smaller table, your problem is probably not the
plan of the foreign key checks, but the row locks taken on the matching
rows.  If you look at the ps output while this is happening and see a
lot of backends waiting, that's probably what it is.

AFAIK, if you do something like
T1: begin
T2: begin
T1: insert into big (ref) values (1);
T2: insert into big (ref) values (1);
right now the locking is such that T2 will wait until T1
commits or rolls back.  The lock taken is stronger than necessary
(it's the same as a SELECT ... FOR UPDATE, which conflicts with itself,
where we really want a lock that conflicts with updates/deletes but
not with itself).
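
The scenario above can be sketched end to end (table and column names
are hypothetical; the point is that the foreign-key trigger's check
behaves like a SELECT ... FOR UPDATE on the referenced row):

```sql
-- Hypothetical setup
CREATE TABLE small (id integer PRIMARY KEY);
CREATE TABLE big   (ref integer REFERENCES small (id));
INSERT INTO small VALUES (1);

-- What the FK check effectively runs for each INSERT into "big":
--   SELECT 1 FROM small WHERE id = 1 FOR UPDATE;
-- FOR UPDATE conflicts with itself, so if T1 and T2 both insert rows
-- with ref = 1, T2's FK check blocks until T1 commits or rolls back,
-- even though neither transaction modifies "small" at all.
```

This is why the blocking gets worse the fewer distinct referenced rows
there are: every concurrent insert funnels through the same row lock.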
