Re: Massive table (500M rows) update nightmare

From: "Carlo Stonebanks" <stonec(dot)register(at)sympatico(dot)ca>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: Massive table (500M rows) update nightmare
Date: 2010-01-08 06:14:58
Message-ID: hi6ifm$1bi8$1@news.hub.org
Lists: pgsql-performance

> It might well be checkpoints. Have you tried cranking up checkpoint
> segments to something like 100 or more and seeing how it behaves then?

No I haven't, although it certainly makes sense - watching the process run,
you get the sense that the system occasionally pauses to take a deep, long
breath before returning to work frantically ;D
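I suppose I could confirm it first. Assuming the server is on 8.3 or later
(I haven't double-checked the version), turning on checkpoint logging should
show whether the pauses line up with checkpoints. An illustrative
postgresql.conf snippet, not my actual config:

    log_checkpoints = on          # log each checkpoint and its write/sync stats
    checkpoint_warning = 300      # warn if WAL-driven checkpoints come less than 300s apart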

checkpoint_segments is currently set to 64. The DB is large and is in a
constant state of receiving single-row updates as multiple ETL and
refinement processes run continuously.

Would you expect going to 100 or more to make an appreciable difference, or
should I be more aggressive?
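For concreteness, here is a sketch of the postgresql.conf lines I'd be
changing - the values are guesses for illustration, not tested settings:

    checkpoint_segments = 128            # up from 64; each WAL segment is 16 MB
    checkpoint_completion_target = 0.9   # 8.3+: spread checkpoint I/O across the interval
    checkpoint_timeout = 15min           # more time between timed checkpoints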
