
From: Ron <rjpeace(at)earthlink(dot)net>
To: Alan Stange <stange(at)rentec(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: limit number of concurrent callers to a stored proc?
Date: 2005-08-18 03:19:04
Message-ID: 6.2.3.4.0.20050817224934.05c09228@pop.earthlink.net
Lists: pgsql-performance
At 09:40 PM 8/17/2005, Alan Stange wrote:

>is there a simple way to limit the number of concurrent callers to a 
>stored proc?
>
>The problem we have is about 50 clients come and perform the same 
>operation at nearly the same time.  Typically, this query takes a 
>few seconds to run, but in the case of this thundering herd the 
>query time climbs to 70 seconds or much more.  The query can return 
>up to 15MB of data.

I'm assuming the query causes some significant write activity at 
some point, since MVCC should not care about concurrent read-only 
activity?

Is that "a few seconds each query" or "a few seconds total if we run 
50 queries sequentially but 70+ seconds per query if we try to run 50 
queries concurrently"?
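
Before picking a fix, it is worth measuring which of those two 
situations you are in.  Here is a minimal sketch (Python with psycopg2 
as the client driver; the DSN, procedure name, and argument are 
hypothetical placeholders): time one call by itself, then time 50 
launched together and compare.

# Minimal sketch: compare one call against a 50-way thundering herd.
# psycopg2, the DSN, the procedure name, and the argument are assumptions.
import threading
import time

import psycopg2

DSN = "dbname=mydb"                       # hypothetical connection string
CALL = "SELECT * FROM my_proc(%s)"        # hypothetical stored procedure
N_CLIENTS = 50

def one_call():
    conn = psycopg2.connect(DSN)
    try:
        cur = conn.cursor()
        cur.execute(CALL, (42,))          # hypothetical argument
        cur.fetchall()
    finally:
        conn.close()

t0 = time.time()
one_call()
print("single call: %.1fs" % (time.time() - t0))

threads = [threading.Thread(target=one_call) for _ in range(N_CLIENTS)]
t0 = time.time()
for t in threads:
    t.start()
for t in threads:
    t.join()
print("%d concurrent calls: %.1fs total" % (N_CLIENTS, time.time() - t0))

If the concurrent total comes out near 50 times the single-call time, 
you are in the first situation below; if each call individually blows 
up far past its solo time, it is the second.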

A) If the former, "a few seconds" * 50 can easily be 70+ seconds, and 
things are what you should expect.  Getting higher performance in 
that situation means reducing per query times, which may or may not 
be easy.  Looking at the stored procedure code with an eye towards 
optimization would be a good place to start.

B) If the latter, then table access contention is driving performance 
into the ground, and there are a few things you can try:
1= lock the table(s) under these circumstances so only one query of 
the 50 can be acting on them at a time (see the sketch after this 
list).  If the table(s) is/are small enough to be made RAM resident, 
this may be a particularly low-cost, low-effort, reasonable solution.

2= put a queue into place and only let some small number n of queries 
run against the table(s) concurrently.  Adjust n until you get the 
best performance.  There are a few ways this could be done; one 
application-side version is sketched further below.

3= Buy an SSD and put the table(s) in question on it.  IIRC, 3.5" 
format SSDs that can drop in as replacements for HDs are available in 
capacities up to 147GB.
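
For #1, a minimal sketch of the idea (same hypothetical Python/psycopg2 
setup as above; the table and procedure names are made up): every 
caller takes the same table lock inside its transaction before running 
the call, so the 50 of them queue up and run one at a time.

# Minimal sketch of option 1: serialize callers on an explicit table lock.
# Table name, procedure name, and DSN are hypothetical.
import psycopg2

conn = psycopg2.connect("dbname=mydb")    # hypothetical DSN
try:
    cur = conn.cursor()
    # EXCLUSIVE mode still lets other sessions run plain SELECTs, but any
    # other caller issuing the same LOCK TABLE waits until this one commits.
    cur.execute("LOCK TABLE hot_table IN EXCLUSIVE MODE")
    cur.execute("SELECT * FROM my_proc(%s)", (42,))
    rows = cur.fetchall()
    conn.commit()                         # the lock is released here
finally:
    conn.close()

The right lock mode depends on what the procedure actually does; the 
point is only that every caller contends for one lock up front instead 
of thrashing the table(s) mid-query.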


>The machine is a dual opteron, 8 GB memory, lots of fiber channel 
>disk, Linux 2.6, etc.
>
>So, I'm thinking that a semaphore that will block more than N 
>clients from being in the core of the function at one time would be 
>a good thing.

This will only help in case "B" above.  If you go the "hard" route of 
systems programming, there are a lot of details that have to be 
handled correctly or Bad Things (tm) will happen.  Putting the 
semaphore in place is just the tip of the iceberg.
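
If the limiting can live on the client side instead, a bounded 
semaphore in the application is the low-effort version of #2 above.  
A minimal sketch, assuming the callers share one Python process (or 
are funneled through one middle tier); the names are again 
hypothetical:

# Minimal sketch of option 2: allow at most n callers into the query at once.
# Only works if the callers share this process or a common middle tier.
import threading

import psycopg2

MAX_CONCURRENT = 4                        # the "n" to tune for best throughput
gate = threading.BoundedSemaphore(MAX_CONCURRENT)

def call_proc(arg):
    with gate:                            # blocks once MAX_CONCURRENT are inside
        conn = psycopg2.connect("dbname=mydb")            # hypothetical DSN
        try:
            cur = conn.cursor()
            cur.execute("SELECT * FROM my_proc(%s)", (arg,))  # hypothetical proc
            return cur.fetchall()
        finally:
            conn.close()

Tune MAX_CONCURRENT empirically; if the 50 clients are separate 
processes, the same idea has to move into whatever they all share (a 
pooler or middle tier), which is where the "iceberg" details start.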


Hope this helps,
Ron Peacetree



