Slow concurrent update of same row in a given table

From: Rajesh Kumar Mallah <mallah(dot)rajesh(at)gmail(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Slow concurrent update of same row in a given table
Date: 2005-09-28 12:27:36
Message-ID: a97c770305092805271ff5e83f@mail.gmail.com
Lists: pgsql-performance

Hi

While doing some stress testing of updates on a small table,
we found the following results. We are not happy with the speed
of the updates, particularly at high concurrency (10 clients).

Initially we get 119 updates/sec, but it drops to 10 updates/sec
as concurrency is increased.
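A rough model (my interpretation, not something stated in the thread): since every client updates the same row, each transaction must wait on that row's lock until the previous transaction commits, so the updates serialize. The aggregate rate then stays near the single-client rate and each client sees roughly rate(1)/N. A quick check against the results table below:

```python
# Rough serialization model: all clients contend on one row lock, so each
# client's rate is predicted as single_client_rate / n_clients.
# Observed numbers are taken from the RESULTS table in this post.
single_client_rate = 119  # updates/sec with one copy of stress.pl

observed = {1: 119, 2: 59, 3: 38, 4: 28, 5: 22,
            6: 19, 7: 16, 8: 14, 9: 11, 10: 11, 11: 10}

for n, seen in observed.items():
    predicted = single_client_rate / n
    print(f"{n:2d} clients: predicted {predicted:5.1f}/sec, observed {seen}/sec")
```

The fit is close through about 5 clients; beyond that the observed per-client rate falls a little under 1/N, which would be consistent with additional overhead (longer dead-tuple chains on the hot row, context switching) on top of pure lock serialization.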

PostgreSQL: 8.0.3
-------------------------------
TABLE STRUCTURE: general.stress
-------------------------------
+------------------+--------------------------+-----------+
| Column           | Type                     | Modifiers |
+------------------+--------------------------+-----------+
| dispatch_id      | integer                  | not null  |
| query_id         | integer                  |           |
| generated        | timestamp with time zone |           |
| unsubscribes     | integer                  |           |
| read_count       | integer                  |           |
| status           | character varying(10)    |           |
| bounce_tracking  | boolean                  |           |
| dispatch_hour    | integer                  |           |
| dispatch_date_id | integer                  |           |
+------------------+--------------------------+-----------+
Indexes:
"stress_pkey" PRIMARY KEY, btree (dispatch_id)

UPDATE STATEMENT:
update general.stress set read_count=read_count+1 where dispatch_id=114
TOOL USED: Perl/DBI, with prepared statement handles
CONCURRENCY METHOD: executing multiple copies of the same program
from different shells (Linux environment)
CLIENT SERVER LINK: 10/100 Mbit LAN

CLIENT CODE: stress.pl
-------------------------------------------------------------------------
#!/opt/perl/bin/perl -I/usr/local/masonapache/lib/perl
################################################
# Overview: update the table as fast as possible (while(1){});
# on every 100th commit, print the average update frequency
# of the last 100 updates.
################################################
use strict;
use Time::HiRes qw(gettimeofday tv_interval);
use Utils;

my $dbh = &Utils::db_connect();
my $sth = $dbh->prepare(
    "update general.stress set read_count=read_count+1 where dispatch_id=114");
my $cnt = 0;
my $t0  = [ gettimeofday ];
while (1) {
    $sth->execute();
    $dbh->commit();
    $cnt++;
    if ($cnt % 100 == 0) {
        my $t1      = [ gettimeofday ];
        my $elapsed = tv_interval($t0, $t1);
        $t0 = $t1;
        printf "Rate: %d updates / sec\n", 100.0 / $elapsed;
    }
}
# Never reached: the loop above runs until the process is killed.
$sth->finish();
$dbh->disconnect();
--------------------------------------------------------------------------------------
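Not part of the original post, but one common way to reduce contention in a counter workload like this is to accumulate increments client-side and apply them in one UPDATE (`read_count = read_count + N`), so the hot row is locked and re-versioned once per batch instead of once per event. A minimal sketch of that pattern, using Python's stdlib sqlite3 as a stand-in for the Perl/DBI + PostgreSQL setup (the table name and id are taken from the post; the batch size is an assumption):

```python
# Sketch: batch client-side increments into one UPDATE per BATCH events.
# sqlite3 is used only as a portable stand-in for the real DBI connection.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stress (dispatch_id INTEGER PRIMARY KEY, read_count INTEGER)")
conn.execute("INSERT INTO stress VALUES (114, 0)")
conn.commit()

BATCH = 100       # assumed batch size, not from the post
pending = 0       # increments accumulated client-side
for _ in range(1000):
    pending += 1  # instead of issuing UPDATE ... + 1 per event
    if pending == BATCH:
        # one row lock, one new row version, one commit per BATCH events
        conn.execute("UPDATE stress SET read_count = read_count + ? "
                     "WHERE dispatch_id = 114", (pending,))
        conn.commit()
        pending = 0
if pending:       # flush any leftover increments
    conn.execute("UPDATE stress SET read_count = read_count + ? "
                 "WHERE dispatch_id = 114", (pending,))
    conn.commit()

(count,) = conn.execute(
    "SELECT read_count FROM stress WHERE dispatch_id = 114").fetchone()
print(count)  # 1000
```

Under MVCC this cuts both the number of lock acquisitions and the number of dead row versions on the hot row by a factor of BATCH, at the cost of the counter lagging by up to BATCH-1 events.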

--------------------------------------------------------------------------------------
RESULTS:
--------------------------------------------------------------------------------------

Number of Copies | Updates per Sec
-----------------+----------------
        1        |      119
        2        |       59
        3        |       38
        4        |       28
        5        |       22
        6        |       19
        7        |       16
        8        |       14
        9        |       11
       10        |       11
       11        |       10

-------------------------------------------------------------------------------------
Note: the table was vacuum analyzed during the tests.
Total number of records in the table: 93
-------------------------------------------------------------------------------------
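A back-of-envelope figure on why vacuuming matters so much here (my own arithmetic, not from the post): under PostgreSQL's MVCC, every UPDATE leaves a dead version of the row behind until VACUUM reclaims it, so the one hot row grows a version chain between vacuums. Assuming the single-client rate from the results and a hypothetical 60 seconds between vacuums:

```python
# Back-of-envelope: each UPDATE leaves one dead row version until VACUUM.
rate = 119            # updates/sec observed with one client (from RESULTS)
vacuum_interval = 60  # assumed seconds between vacuums (not from the post)

dead_versions = rate * vacuum_interval
print(dead_versions)  # 7140 dead versions of the dispatch_id=114 row
```

Every reader and updater of that row may have to walk past those dead versions, so with multiple clients the chain grows even faster and the per-client rate degrades further than lock serialization alone would predict.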

Regds
Rajesh Kumar Mallah.
