Re: Massive table (500M rows) update nightmare

From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Leo Mannhart" <leo(dot)mannhart(at)beecom(dot)ch>, "Carlo Stonebanks" <stonec(dot)register(at)sympatico(dot)ca>
Cc: <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Massive table (500M rows) update nightmare
Date: 2010-01-07 15:18:54
Message-ID: 4B45A6FE020000250002DEBC@gw.wicourts.gov
Lists: pgsql-performance

Leo Mannhart <leo(dot)mannhart(at)beecom(dot)ch> wrote:

> You could also try to just update the whole table in one go, it is
> probably faster than you expect.

That would, of course, bloat the table and indexes horribly. One
advantage of the incremental approach is that there is a chance for
autovacuum or scheduled vacuums to make space available for re-use
by subsequent updates.
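For illustration, one way to sketch that incremental approach is to split the key space into ranges and emit one UPDATE per range, vacuuming between batches so the dead tuples become re-usable. The table name, column, and id-range batching scheme below are hypothetical, not taken from the thread:

```python
def batch_ranges(total_rows, batch_size):
    """Yield inclusive (low, high) id ranges covering 1..total_rows."""
    low = 1
    while low <= total_rows:
        high = min(low + batch_size - 1, total_rows)
        yield low, high
        low = high + 1

def batch_updates(table, set_clause, total_rows, batch_size):
    """Generate one UPDATE statement per batch.

    In practice you would execute each statement, commit, and run
    VACUUM on the table between batches so autovacuum (or an explicit
    vacuum) can reclaim the dead row versions for re-use.
    """
    for low, high in batch_ranges(total_rows, batch_size):
        yield (f"UPDATE {table} SET {set_clause} "
               f"WHERE id BETWEEN {low} AND {high}")

# Hypothetical example: 500 rows in batches of 200 -> 3 statements.
stmts = list(batch_updates("big_table", "col = col + 1", 500, 200))
```

The point is only the shape of the loop: small committed batches with vacuums interleaved, rather than one 500M-row UPDATE that leaves the whole table's worth of dead tuples behind at once.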

-Kevin
