From: Dean Rasheed <dean(dot)a(dot)rasheed(at)googlemail(dot)com>
Subject: Scaling up deferred unique checks and the after trigger queue
I've started looking at the following TODO item:
"Improve deferrable unique constraints for cases with many conflicts"
and Tom's suggestion that the rows to be checked can be stored in a
bitmap, which would become lossy when the number of rows becomes large
enough. There is also another TODO item:
"Add deferred trigger queue file"
to prevent the trigger queue from exhausting backend memory.
I've got some prototype code that replaces all the after-trigger-queue
machinery with TID bitmaps (not just for deferred constraint triggers).
This would solve the memory-usage problem without resorting to file
storage, and would make it easier to optimise constraint checks by
doing a single bulk check when the number of rows is large enough.
The initial results are encouraging, but I'm still pretty new to a lot of
this code, so I wanted to check that this is a sane thing to try to do.
For UPDATEs, I'm storing the old tid in the bitmap and relying on its ctid
pointer to retrieve the new tuple for the trigger function. AFAICS
heap_update() always links the old and new tuples in this way.
I'm aware that the "triggers on columns" patch is going to be a problem
for this. I haven't looked at it in any detail, but I suspect that it won't
work with a lossy queue, because the information about exactly which
rows to fire the triggers for is only known at update time. In that
case, maybe I could fall back on a tuplestore, spilling to disk?