Triggers and scalability in high transaction tables.

From: Tim Uckun <timuckun(at)gmail(dot)com>
To: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Triggers and scalability in high transaction tables.
Date: 2015-02-26 21:54:31
Message-ID: CAGuHJrOXS=jqehHriS01FybxPpWr9miqZZO6PW42xeY1BVsV8A@mail.gmail.com
Lists: pgsql-general

I want to write a trigger that runs semi-complicated code after each
insert. I have done some reading, and from what I can gather this could
cause problems because AFTER INSERT triggers "don't spill to the disk" and
can cause queueing problems. Many people suggest LISTEN/NOTIFY, but that's
not going to help me because my daemons could be offline and I would lose
records.
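
To make that concrete, a stripped-down version of what I have in mind looks
roughly like this (table, column, and function names are just placeholders,
not my real schema):

CREATE TABLE events (
    id      bigserial PRIMARY KEY,
    payload text,
    created timestamptz NOT NULL DEFAULT now()
);

CREATE OR REPLACE FUNCTION process_event() RETURNS trigger AS $$
BEGIN
    -- the semi-complicated work runs here, inside the inserting transaction
    -- (this is just a stand-in for the real logic)
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER events_after_insert
    AFTER INSERT ON events
    FOR EACH ROW EXECUTE PROCEDURE process_event();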

I have two questions.

There are some hints out there that it could be possible to do asynchronous
triggers based on dblink, but I haven't seen any documentation or examples
of this. Is there a write-up somewhere about it?
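
As far as I understand it, the dblink approach would look something like the
following (untested; the connection string, the do_heavy_work() function,
and the connection name are made up for illustration, and it assumes the
dblink extension is installed):

CREATE EXTENSION IF NOT EXISTS dblink;

CREATE OR REPLACE FUNCTION process_event_async() RETURNS trigger AS $$
BEGIN
    -- open a named connection back into the same database and fire the
    -- heavy work without waiting for it to finish; a real version would
    -- reuse the connection instead of reconnecting for every row
    PERFORM dblink_connect('async_conn', 'dbname=mydb');
    PERFORM dblink_send_query('async_conn',
        format('SELECT do_heavy_work(%s)', NEW.id));
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

Is that roughly the technique people are hinting at, or is there a better
pattern?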

Secondly, I had the idea of "partitioning" the trigger processing by
partitioning the table and then putting a trigger on each child table.
That way, in theory, the triggers could run in parallel. Is my presumption
correct here? If I only have one table, the trigger calls get queued up one
at a time, but if I partition my table into N child tables, will I be
running N triggers simultaneously?
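
What I mean by that, roughly, is an inheritance-based layout like the one
below (names and the modulo split are made up, and inserts would of course
have to be routed to the children):

-- inheritance-style children, each carrying its own copy of the trigger
CREATE TABLE events_0 (CHECK (id % 4 = 0)) INHERITS (events);
CREATE TABLE events_1 (CHECK (id % 4 = 1)) INHERITS (events);
-- ... and so on up to N children

CREATE TRIGGER events_0_after_insert
    AFTER INSERT ON events_0
    FOR EACH ROW EXECUTE PROCEDURE process_event();

CREATE TRIGGER events_1_after_insert
    AFTER INSERT ON events_1
    FOR EACH ROW EXECUTE PROCEDURE process_event();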

Thanks.
