Re: Massive table (500M rows) update nightmare

From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: <pgsql-performance(at)postgresql(dot)org>, "Carlo Stonebanks" <stonec(dot)register(at)sympatico(dot)ca>
Subject: Re: Massive table (500M rows) update nightmare
Date: 2010-01-08 14:11:19
Message-ID: 4B46E8A7020000250002E012@gw.wicourts.gov
Lists: pgsql-performance

"Carlo Stonebanks" <stonec(dot)register(at)sympatico(dot)ca> wrote:

> Already done in an earlier post

Perhaps I misunderstood; I thought that post said the plan shown was
for one statement within an iteration, and that the cache would
already have been primed by a previous query checking whether there
were any rows to update. If that is the case, it might be worthwhile
to look at the entire flow of an iteration.
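For illustration, the kind of per-iteration flow described above might
look roughly like this (the table, column, and parameter names are
placeholders, not taken from the thread):

  -- Check whether this batch has anything to update; this read
  -- also primes the cache for the UPDATE that follows.
  SELECT count(*)
    FROM big_table
    WHERE id >= :batch_start AND id < :batch_end
      AND needs_update;

  -- Update the same rows in a single statement.
  UPDATE big_table
     SET some_col = :new_value
   WHERE id >= :batch_start AND id < :batch_end
     AND needs_update;

Whether that preliminary check helps or just adds an extra scan
depends on the rest of the loop, which is why the entire flow is
worth a look.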

Also, if you ever responded with version and configuration
information, I missed it. The solution to parts of what you
describe would be different in different versions. In particular,
you might be able to solve checkpoint-related lockup issues and then
improve performance by using bigger batches. Right now I would be
guessing at what might work for you.
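For a rough sense of the knobs involved (assuming an 8.3-or-later
server; the relevant settings differ in older versions), checkpoint
behavior is usually adjusted with postgresql.conf entries along these
lines, with values that are illustrative only and would need tuning to
the actual workload:

  checkpoint_segments = 32             # more WAL between checkpoints
  checkpoint_timeout = 15min           # more time between checkpoints
  checkpoint_completion_target = 0.9   # spread checkpoint I/O out (8.3+)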

-Kevin
