
Re: pg_dump additional options for performance

From: "Joshua D(dot) Drake" <jd(at)commandprompt(dot)com>
To: Simon Riggs <simon(at)2ndquadrant(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-patches(at)postgresql(dot)org
Subject: Re: pg_dump additional options for performance
Date: 2008-07-27 16:57:17
Message-ID: 488CA8ED.6080908@commandprompt.com
Lists: pgsql-hackers pgsql-patches
Simon Riggs wrote:
> On Sat, 2008-07-26 at 11:03 -0700, Joshua D. Drake wrote:
> 
>> 2. We have no concurrency, which means anyone with any database over 50G
>> has unacceptable restore times.
> 
> Agreed.

> Sounds good.
> 
> Doesn't help with the main element of dump time: one table at a time to
> one output file. We need a way to dump multiple tables concurrently,
> ending in multiple files/filesystems.

Agreed, but that is a problem I understand without having a solution for. I 
am all ears for any way to fix it. One thought I had (and please, be gentle 
in response) was some sort of async transaction capability. I know that 
libpq has the ability to send async queries. Is it possible to do this:

send async(copy table to foo)
send async(copy table to bar)
send async(copy table to baz)

Where all three copies are happening in the background?
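
For concreteness, here is a minimal sketch of what I have in mind (the table 
names, connection string, and output file naming are all made up, and error 
handling is loose): one connection per table, each starting COPY ... TO 
STDOUT via PQsendQuery(), with a round-robin loop draining them through 
non-blocking PQgetCopyData() calls. A real implementation would wait on 
PQsocket() with select()/poll() rather than busy-polling.

/*
 * Sketch only, hypothetical names throughout: give each table its own
 * connection, start "COPY ... TO STDOUT" on each with PQsendQuery(), and
 * drain all of them in a round-robin loop with non-blocking PQgetCopyData().
 */
#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

int
main(void)
{
	const char *tables[3] = {"foo", "bar", "baz"};	/* hypothetical tables */
	PGconn	   *conns[3];
	FILE	   *files[3];
	int			done[3] = {0, 0, 0};
	int			remaining = 3;

	for (int i = 0; i < 3; i++)
	{
		char		sql[128];
		char		fname[128];
		PGresult   *res;

		conns[i] = PQconnectdb("dbname=mydb");	/* hypothetical DSN */
		if (PQstatus(conns[i]) != CONNECTION_OK)
		{
			fprintf(stderr, "connection failed: %s", PQerrorMessage(conns[i]));
			return 1;
		}

		snprintf(fname, sizeof(fname), "%s.copy", tables[i]);
		files[i] = fopen(fname, "w");

		snprintf(sql, sizeof(sql), "COPY %s TO STDOUT", tables[i]);
		if (!PQsendQuery(conns[i], sql))	/* asynchronous submit */
		{
			fprintf(stderr, "send failed: %s", PQerrorMessage(conns[i]));
			return 1;
		}

		/* Wait for this connection to enter COPY OUT mode. */
		res = PQgetResult(conns[i]);
		if (PQresultStatus(res) != PGRES_COPY_OUT)
		{
			fprintf(stderr, "COPY failed: %s", PQerrorMessage(conns[i]));
			return 1;
		}
		PQclear(res);
	}

	/* Round-robin over the connections, writing whatever data is ready. */
	while (remaining > 0)
	{
		for (int i = 0; i < 3; i++)
		{
			char	   *buf;
			int			len;

			if (done[i])
				continue;

			PQconsumeInput(conns[i]);
			while ((len = PQgetCopyData(conns[i], &buf, 1)) > 0)
			{
				fwrite(buf, 1, len, files[i]);
				PQfreemem(buf);
			}
			if (len == -1)		/* this table's COPY has finished */
			{
				PGresult   *res;

				while ((res = PQgetResult(conns[i])) != NULL)
					PQclear(res);
				fclose(files[i]);
				PQfinish(conns[i]);
				done[i] = 1;
				remaining--;
			}
			/* len == 0: no complete row available yet, try the next table */
		}
	}
	return 0;
}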

Sincerely,

Joshua D. Drake


