
Re: Slow concurrent update of same row in a given table

From: Rajesh Kumar Mallah <mallah(dot)rajesh(at)gmail(dot)com>
To: Gavin Sherry <swm(at)alcove(dot)com(dot)au>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Slow concurrent update of same row in a given table
Date: 2005-09-28 17:44:12
Message-ID: a97c7703050928104421786201@mail.gmail.com
Lists: pgsql-performance
On 9/28/05, Gavin Sherry <swm(at)alcove(dot)com(dot)au> wrote:
> On Wed, 28 Sep 2005, Rajesh Kumar Mallah wrote:
>
> > Hi
> >
> > While doing some stress testing for updates in a small sized table
> > we found the following results. We are not too happy about the speed
> > of the updates particularly at high concurrency (10 clients).
> >
> > Initially we get 119 updates / sec but it drops to 10 updates/sec
> > as concurrency is increased.
> >
> > PostgreSQL: 8.0.3
> > -------------------------------
> > TABLE STRUCTURE: general.stress
> > -------------------------------
> > | dispatch_id      | integer                  | not null  |
> > | query_id         | integer                  |           |
> > | generated        | timestamp with time zone |           |
> > | unsubscribes     | integer                  |           |
> > | read_count       | integer                  |           |
> > | status           | character varying(10)    |           |
> > | bounce_tracking  | boolean                  |           |
> > | dispatch_hour    | integer                  |           |
> > | dispatch_date_id | integer                  |           |
> > +------------------+--------------------------+-----------+
> > Indexes:
> >     "stress_pkey" PRIMARY KEY, btree (dispatch_id)
> >
> > UPDATE STATEMENT:
> > update general.stress set read_count=read_count+1 where dispatch_id=114
>
> This means you are updating only one row, correct?

Correct.


>
> > Number of Copies | Updates per Sec
> >
> >  1 --> 119
> >  2 -->  59
> >  3 -->  38
> >  4 -->  28
> >  5 -->  22
> >  6 -->  19
> >  7 -->  16
> >  8 -->  14
> >  9 -->  11
> > 10 -->  11
> > 11 -->  10
>
> So, 11 instances result in 10 updated rows per second, database wide or
> per instance? If it is per instance, then 11 * 10 is close to the
> performance for one connection.


Sorry, I do not understand the difference between "database wide"
and "per instance".
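
As an aside on the per-instance vs. database-wide question: if each of 11 clients manages ~10 updates/sec, the aggregate is ~110/sec, close to the 119/sec a single client achieves, which would mean total throughput is roughly flat and the clients are simply serializing on the row. A rough stand-alone simulation of that serialization, using Python threads contending on one lock as a stand-in for PostgreSQL's row-level lock (no database involved; the 1 ms "update cost" is an assumed figure):

```python
import threading
import time


def contended_throughput(n_clients: int, duration: float = 0.5) -> float:
    """Aggregate 'updates'/sec when n_clients serialize on one row lock."""
    row_lock = threading.Lock()  # stand-in for PostgreSQL's per-row lock
    counts = [0] * n_clients
    stop = time.monotonic() + duration

    def worker(i: int) -> None:
        while time.monotonic() < stop:
            with row_lock:       # only one "update" proceeds at a time
                time.sleep(0.001)  # pretend the update itself takes ~1 ms
                counts[i] += 1

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(n_clients)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(counts) / duration


if __name__ == "__main__":
    for n in (1, 2, 5, 10):
        total = contended_throughput(n)
        print(f"{n:2d} clients: ~{total:5.0f} updates/sec total, "
              f"~{total / n:5.0f} per client")
```

The aggregate rate stays roughly constant while the per-client rate falls as ~1/N, which is the same shape as the table above.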

>
> That being said, when you've got 10 connections fighting over one row, I
> wouldn't be surprised if you had bad performance.
>
> Also, at 119 updates a second, you're more than doubling the table's
> initial size (dead tuples) each second. How often are you vacuuming and
> are you using vacuum or vacuum full?


Yes, I realize the obvious phenomenon now (and the uselessness of the
script); we should not consider it a performance degradation.

I am having a performance issue in my live database; that is why I tried to
simulate the situation (maybe the script was an over-stresser).
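
Gavin's dead-tuple point above is easy to quantify: under MVCC, every UPDATE writes a new row version and leaves the old one behind as a dead tuple until VACUUM reclaims it, so even a one-row table being hammered with updates bloats quickly. A back-of-the-envelope sketch (the 119 updates/sec figure is from the table above; the 5-minute vacuum interval is an assumed example):

```python
def dead_tuples(updates_per_sec: float, seconds_between_vacuums: float) -> int:
    """Dead row versions accumulated between vacuums of a hot table."""
    # Under MVCC, each UPDATE leaves exactly one dead tuple behind.
    return int(updates_per_sec * seconds_between_vacuums)


# 119 updates/sec (single client, from the table above), vacuum every 5 minutes:
print(dead_tuples(119, 300))  # -> 35700 dead versions of one logical row
```

At those rates the single live row is buried under tens of thousands of dead versions between vacuums, which is why frequent plain VACUUM (not VACUUM FULL) matters for this workload.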

My original problem is that every Tuesday I send 100,000s of emails carrying
a beacon for tracking readership, and on Wednesday I see many instances of
the said query in pg_stat_activity. Each of these queries updates the SAME
row, the one corresponding to the previous day's dispatch, and that is when
I face the performance problem.

I think I can only post further details next Wednesday. Please let me
know how I should deal with the situation if each of the updates takes
100 times longer than the normal update duration.

Best Regards
Mallah.


>
> Gavin
>

