On Sat, May 29, 2010 at 5:44 PM, Sakari A. Maaranen <sam(at)iki(dot)fi> wrote:
> For now, I can work around this on the client side by splitting the
> updates into a million separate transactions instead of a single big
> one. Will be slow, but it should work.
In general, it's better to group things into larger transactions - the
case where the pending trigger queue exhausts system memory is an
unfortunate exception. You might want to think about, say, a thousand
transactions of a thousand records, instead of a million transactions
with one record each.
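
As a minimal sketch of that batching idea (the `conn`/`cur` names and the psycopg2-style usage are hypothetical, not from the original message): split the million records into chunks and commit once per chunk, so no single transaction's pending trigger queue grows unboundedly.

```python
def batches(items, size):
    """Yield successive chunks of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

# Hypothetical driver loop: one transaction per 1,000-record batch
# instead of one giant transaction or a million tiny ones.
#
# for batch in batches(records, 1000):
#     with conn:                      # commits (or rolls back) per batch
#         for record in batch:
#             cur.execute("UPDATE ...", record)

# Sanity check of the chunking itself:
million = list(range(1_000_000))
chunks = list(batches(million, 1000))
print(len(chunks))        # 1000 batches
print(len(chunks[0]))     # 1000 records in each
```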
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company