Re: Exponential behaviour with UPDATE collisions?

From: Tom Lane <tgl@sss.pgh.pa.us>
To: "David F. Skoll" <dfs@roaringpenguin.com>
Cc: pgsql-admin@postgresql.org
Subject: Re: Exponential behaviour with UPDATE collisions?
Date: 2004-05-22 06:04:12
Message-ID: 6097.1085205852@sss.pgh.pa.us

"David F. Skoll" <dfs(at)roaringpenguin(dot)com> writes:
> I would expect that if N clients are all trying to update the same row,
> that the N updates would be serialized and run in O(N) time.

Possibly so; but your test case isn't a pure illustration of an O(N)
situation. You are measuring the time for the mainline to run 50
updates of a row (as fast as it can) while N other processes are
updating the same row as fast as they can. The run will presumably
generate somewhere around N*50 dead rows with the same key. Since
you're not doing any vacuums meanwhile, later updates will have to scan
through lots of dead rows looking for the one live row. In other words,
there's an O(N^2) behavior here that would occur without any parallelism
at all, just from the number of dead rows you're leaving behind.
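For concreteness, the scenario might look something like this (table
and column names here are hypothetical, not taken from the original
test):

    -- hypothetical single-row counter that all N clients contend on
    CREATE TABLE counter (id integer PRIMARY KEY, n integer NOT NULL);
    INSERT INTO counter VALUES (1, 0);

    -- each client loops on this; under MVCC every UPDATE leaves the
    -- old version of the row behind as a dead tuple
    UPDATE counter SET n = n + 1 WHERE id = 1;

    -- an occasional plain vacuum during the run reclaims the dead
    -- versions, so later updates need not wade through them
    VACUUM counter;

Vacuuming the table every so often during the run should flatten the
curve back toward the O(N) you were expecting.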

> Is this the expected behaviour? Is it considered a problem?

It's the price we pay for MVCC semantics. If you have an application in
which the bottleneck is the number of updates that can be done on a
single row, you might be better off with a non-MVCC database.

regards, tom lane
