From: Sheer El-Showk <sheer(at)saraf(dot)com>
To: <pgsql-general(at)postgresql(dot)org>
Subject: concurrency performance degradation
Date: 2001-10-27 02:57:27
Message-ID: Pine.LNX.4.33.0110261535280.27026-100000@laudanum.saraf.com
Lists: pgsql-general

Hi,

I have an application that uses foreign keys rather heavily in postgres.
It supports multiple concurrent sessions with high transaction volumes.
Ideally the server should be able to have 20-30 connections open to the
database each partaking in similar operations. Each transaction may
involve 20+ writes and maybe 100 reads.
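
To make that concrete, here is a minimal sketch of the shape of schema I
mean: a parent table and a child table that references it by foreign key.
The table names, the "dbname=test" connection string, and the use of
Python with the psycopg2 driver are all just illustrative assumptions, not
my actual application:

    # Hypothetical schema sketch: a parent table and a child table with a
    # foreign key back to it.  All names here are made up for illustration.
    import psycopg2

    DSN = "dbname=test"  # assumed connection string

    conn = psycopg2.connect(DSN)
    cur = conn.cursor()
    cur.execute("""
        CREATE TABLE parent (
            id   integer PRIMARY KEY,
            name text
        )""")
    cur.execute("""
        CREATE TABLE child (
            id        serial PRIMARY KEY,
            parent_id integer NOT NULL REFERENCES parent(id),
            payload   text
        )""")
    cur.execute("INSERT INTO parent (id, name) VALUES (1, 'shared parent row')")
    conn.commit()
    conn.close()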

When run single-threaded, I can perform one transaction per second (30 or
so writes, 100 or so reads, plus time spent in the application), but if I
increase concurrency to about 10 active threads all doing similar
operations (on the same set of tables) I gain very little from it. I
notice in the process listing that the postgres processes are almost all in
"INSERT WAITING..." or "UPDATE WAITING...", except for one which is in a
"SELECT" or an "INSERT". My confusion here is that I thought postgres
used record-level locking, so I should be able to run multiple concurrent
inserts on a table without locking.
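
For reference, the concurrency test is roughly equivalent to the sketch
below (Python threads, psycopg2, and the hypothetical tables above are
assumptions for illustration; the real application is different): N worker
threads, each running its own transactions of inserts against the child
table.

    # Rough sketch of the concurrency test: N worker threads, each repeatedly
    # inserting child rows that reference the same parent row, each batch of
    # writes in its own transaction.
    import threading, time
    import psycopg2

    DSN = "dbname=test"   # assumed connection string
    THREADS = 10
    TXNS_PER_THREAD = 50
    WRITES_PER_TXN = 20

    def worker(n):
        conn = psycopg2.connect(DSN)
        cur = conn.cursor()
        for _ in range(TXNS_PER_THREAD):
            for _ in range(WRITES_PER_TXN):
                cur.execute(
                    "INSERT INTO child (parent_id, payload) VALUES (%s, %s)",
                    (1, "row from thread %d" % n))
            conn.commit()
        conn.close()

    start = time.time()
    threads = [threading.Thread(target=worker, args=(i,)) for i in range(THREADS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.time() - start
    print("%d transactions in %.1fs (%.2f txn/s)"
          % (THREADS * TXNS_PER_THREAD, elapsed,
             THREADS * TXNS_PER_THREAD / elapsed))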

The reason I mention foreign keys above is that I imagine that foreign
keys might complicate locking issues and I use them a _lot_. I also
noticed that the database slows down significantly as it gets populated.
When I get to about 8000 transactions (~8000 records in my primary table)
it is much slower, and by 19k it takes up to 5-15 seconds to do an insert.
A "vacuum analyze" didn't help here.

I tried this using postgres 7.0.2 and postgres 7.2b1 without any
significant difference. Does anybody have some related metrics to give
me, or some tuning suggestions?

While the server being used is certainly not high-end (single processor,
IDE disks), I'm still somewhat surprised at the results considering the
small data set (~20k entries). My experience with Oracle running a very
similar application (albeit on much higher-end machines) suggests that
when the indices are created properly, insert time is not significantly
affected by the size of the data set (because the indices are b-trees).
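
As a back-of-the-envelope check on that expectation (assuming the cost of
an index probe is roughly proportional to the b-tree depth, i.e. log2 of
the row count):

    # Back-of-the-envelope: comparisons per b-tree index probe at each size.
    import math
    for rows in (1000, 8000, 19000):
        print("%6d rows -> ~%.1f comparisons per index probe"
              % (rows, math.log2(rows)))
    # prints roughly 10.0, 13.0 and 14.2, so index depth alone should not
    # turn a sub-second insert into a 5-15 second one.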

I can provide the rate of increase of transaction time, as well as a stack
trace of the relevant blocking method, if any postgres developer is
interested. The transaction times seem to be increasing linearly rather
than logarithmically (which is what I would expect them to do if the
b-tree indices were the limiting factor).
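
For what it's worth, the way I measured that growth is roughly the sketch
below (again with the hypothetical tables, names and driver from above):
insert in batches of single-insert transactions and print the average time
per transaction as the table grows.

    # Sketch of the measurement: time batches of insert-only transactions as
    # the child table grows, to see how per-transaction time scales with size.
    import time
    import psycopg2

    DSN = "dbname=test"  # assumed connection string
    BATCH = 500

    conn = psycopg2.connect(DSN)
    cur = conn.cursor()
    for batch in range(40):                        # 40 * 500 = 20k rows total
        start = time.time()
        for _ in range(BATCH):
            cur.execute("INSERT INTO child (parent_id, payload) VALUES (1, 'x')")
            conn.commit()                          # one transaction per insert
        ms = (time.time() - start) * 1000.0 / BATCH
        print("rows ~%6d: %.1f ms per insert transaction"
              % ((batch + 1) * BATCH, ms))
    conn.close()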

Any information would be helpful here.

Thanks,
Sheer
