Steve Holdoway wrote:
> Yes, I know it's not possible, but can anyone suggest an alternative for
> this problem?
> I've written a very simple trigger-driven replication system, which
> works in stages. First the trigger generates an entry in a log table
> which is a fully formatted sql command... insert into/delete from, etc.
> Secondly, this table is transferred to the receiving database, and
> cleared down. This all works fine.
> On the receiving end, there is a cron job that processes all of the
> commands in this table. However, this is written as a plpgsql function,
> so it's 'all or nothing'... i.e. any error in the data means that all
> successful updates preceding that error are rolled back. This makes
> finding and debugging the data errors extremely difficult, but, more
> importantly, stops the update process cold.
> I have tried calling a child function from the parent to perform the
> update in batches, but it still exhibits the same 'all or nothing'
> behaviour. Can anyone suggest a way that I can get around this?
The easiest way is probably to make your cron job a small client program
using one of the available interfaces, and call a plpgsql function from
there, once for each batch, each call followed by a commit. This
approach will give you full control, both with respect to transactions
and to error handling.
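A minimal sketch of that commit-per-batch loop, in Python. SQLite stands in for PostgreSQL here so the example is self-contained; in practice you would use a PostgreSQL driver instead, and the table and column names (`replication_log`, `id`, `cmd`) are illustrative assumptions, not from the original post. The batch size is one command for simplicity:

```python
import sqlite3

def apply_logged_commands(conn):
    """Apply each logged SQL command in its own transaction.

    A failing command is rolled back, recorded, and skipped, so it no
    longer drags down every successful update that preceded it.
    """
    cur = conn.cursor()
    cur.execute("SELECT id, cmd FROM replication_log ORDER BY id")
    rows = cur.fetchall()
    failed = []
    for row_id, cmd in rows:
        try:
            conn.execute(cmd)
            # Remove the log entry in the same transaction as the command
            # it carries, so the two always succeed or fail together.
            conn.execute("DELETE FROM replication_log WHERE id = ?", (row_id,))
            conn.commit()          # commit this batch on success
        except sqlite3.Error as exc:
            conn.rollback()        # only this one command is undone
            failed.append((row_id, str(exc)))
    return failed
```

Failed entries stay in `replication_log` with their error messages returned to the caller, which also makes the bad data easy to find and debug afterwards.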
pgsql-hackers by date