Re: [GENERAL] Concurrency problem building indexes

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
Cc: Wes <wespvp(at)syntegra(dot)com>, Zeugswetter Andreas DCP SD <ZeugswetterA(at)spardat(dot)at>, "Jim C(dot) Nasby" <jnasby(at)pervasive(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: [GENERAL] Concurrency problem building indexes
Date: 2006-04-25 17:24:05
Message-ID: 22427.1145985845@sss.pgh.pa.us
Lists: pgsql-hackers

Alvaro Herrera <alvherre(at)commandprompt(dot)com> writes:
> I'm late to this thread, but maybe we can make the process of storing
> the new data in pg_class take a lock using LockObject() or something
> like that to serialize the access to the pg_class row.

I'm inclined to think that the right solution is to fix UpdateStats and
setRelhasindex so that they don't use simple_heap_update, but call
heap_update directly and cope with HeapTupleUpdated (by looping around
and trying the update from scratch).
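
As an illustration of that pattern (not a patch), a retrying variant of
setRelhasindex's pg_class update might look roughly like the sketch below.
The heap_update signature shown follows the 8.1-era backend, the helper name
is invented, and the syscache refresh is a simplification:

/*
 * Rough illustration only: do what simple_heap_update does, except that
 * HeapTupleUpdated makes us loop and redo the update from scratch instead
 * of raising "tuple concurrently updated".
 */
#include "postgres.h"

#include "access/heapam.h"
#include "catalog/indexing.h"
#include "catalog/pg_class.h"
#include "utils/inval.h"
#include "utils/syscache.h"

static void
set_relhasindex_with_retry(Relation pg_class, Oid relid, bool hasindex)
{
    for (;;)
    {
        HeapTuple       tuple;
        HTSU_Result     result;
        ItemPointerData update_ctid;
        TransactionId   update_xmax;

        /* Make sure we can see any committed concurrent update of the row */
        AcceptInvalidationMessages();

        /* Fetch a fresh copy of the row on every iteration */
        tuple = SearchSysCacheCopy(RELOID,
                                   ObjectIdGetDatum(relid),
                                   0, 0, 0);
        if (!HeapTupleIsValid(tuple))
            elog(ERROR, "cache lookup failed for relation %u", relid);

        /* Apply our change to the copy */
        ((Form_pg_class) GETSTRUCT(tuple))->relhasindex = hasindex;

        result = heap_update(pg_class, &tuple->t_self, tuple,
                             &update_ctid, &update_xmax,
                             GetCurrentCommandId(), InvalidSnapshot,
                             true /* wait for any concurrent updater */ );

        if (result == HeapTupleMayBeUpdated)
        {
            /* Success: keep pg_class's own indexes up to date */
            CatalogUpdateIndexes(pg_class, tuple);
            heap_freetuple(tuple);
            return;
        }

        if (result != HeapTupleUpdated)
            elog(ERROR, "unexpected heap_update status: %u", result);

        /* Somebody else updated the row first; discard our copy and retry */
        heap_freetuple(tuple);
    }
}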

Another thing that's annoying here is that we update the pg_class row
twice in some cases --- really we ought to try to get this down to one
update. (So we'd only need one instance of the looping logic, not two.)
I'm not entirely clear on the cleanest way to do that, but am currently
thinking that btbuild and friends ought to pass back the tuple counts
they obtained, rather than writing them into the catalogs for
themselves. IndexCloseAndUpdateStats ought to go away --- the index AM
never had any business doing that for itself.
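
One hypothetical shape for that interface, purely for illustration (the
struct and field names here are invented, not a committed design): each
ambuild routine fills in and returns its counts, and index_build() performs
the catalog updates itself.

/*
 * Hypothetical sketch: btbuild and the other ambuild routines stop
 * writing pg_class themselves and instead report their counts to the
 * caller, which then updates pg_class using the retry loop above.
 */
typedef struct IndexBuildResult
{
    double      heap_tuples;    /* # of tuples scanned in the parent heap */
    double      index_tuples;   /* # of tuples actually put into the index */
} IndexBuildResult;

/*
 * index_build() would then do, roughly:
 *
 *     stats = <call the AM's ambuild routine>;
 *     <one pg_class update for the heap, one for the index>;
 *
 * and IndexCloseAndUpdateStats goes away, since the index AM no longer
 * touches the catalogs at all.
 */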

regards, tom lane
