From: Simon Riggs <simon(at)2ndquadrant(dot)com>
To: Richard Huxton <dev(at)archonet(dot)com>
Cc: Dimitri Fontaine <dfontaine(at)hi-media(dot)com>, pgsql-performance(at)postgresql(dot)org, "Jignesh K(dot) Shah" <J(dot)K(dot)Shah(at)sun(dot)com>, Greg Smith <gsmith(at)gregsmith(dot)com>
Subject: Re: Benchmark Data requested
Date: 2008-02-05 14:54:28
Message-ID: 1202223268.4252.707.camel@ebony.site
Lists: pgsql-performance
On Tue, 2008-02-05 at 14:43 +0000, Richard Huxton wrote:
> Simon Riggs wrote:
> > On Tue, 2008-02-05 at 15:06 +0100, Dimitri Fontaine wrote:
> >>
> >> On Monday, 4 February 2008, Jignesh K. Shah wrote:
>
> >>> Multiple table loads (1 per table) spawned via script is a bit better,
> >>> but it hits WAL problems.
> >> pgloader will hit the WAL problem too, but it may still have its benefits; or
> >> at least we will soon be able (you already can if you take it from CVS) to
> >> measure whether parallel loading on the client side is a good idea performance-wise.
> >
> > Should be able to reduce lock contention, but not overall WAL volume.
>
> In the case of a bulk upload to an empty table (or partition?) could you
> not optimise the WAL away? That is, shouldn't the WAL basically be a
> simple transformation of the on-disk blocks? You'd have to explicitly
> sync the file(s) for the table/indexes of course, and you'd need some
> work-around for WAL shipping, but it might be worth it for you chaps
> with large imports.
Only by locking the table, which serializes access and so slows you down, or
at least restricts your other options. And if you use pg_loader you'll find
that only the first few rows are optimized and all the rest are not.
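
(For reference, the WAL bypass that does exist today only kicks in when the
target table is created or truncated in the same transaction as the COPY and
WAL archiving is off. Roughly, with placeholder table and file names:

    BEGIN;
    TRUNCATE TABLE lineitem;              -- takes an ACCESS EXCLUSIVE lock,
                                          -- so other access is serialized
    COPY lineitem FROM '/data/lineitem.tbl'
         WITH DELIMITER '|';              -- with archiving off, this COPY can
                                          -- skip per-row WAL and just sync the
                                          -- new heap blocks at commit
    COMMIT;

The TRUNCATE/CREATE in the same transaction is exactly the exclusive lock I
mean above.)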
--
Simon Riggs
2ndQuadrant http://www.2ndQuadrant.com