Re: Parallel pg_dump for 9.1

From: Stefan Kaltenbrunner <stefan(at)kaltenbrunner(dot)cc>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Josh Berkus <josh(at)agliodbs(dot)com>, Joachim Wieland <joe(at)mcknight(dot)de>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Parallel pg_dump for 9.1
Date: 2010-03-30 06:39:09
Message-ID: 4BB19C8D.6090007@kaltenbrunner.cc
Lists: pgsql-hackers

Tom Lane wrote:
> Josh Berkus <josh(at)agliodbs(dot)com> writes:
>> On 3/29/10 7:46 AM, Joachim Wieland wrote:
>>> I actually assume that whenever people are interested
>>> in a very fast dump, it is because they are doing some maintenance
>>> task (like migrating to a different server) that involves pg_dump. In
>>> these cases, they would stop their system anyway.
>
>> Actually, I'd say that there's a broad set of cases of people who want
>> to do a parallel pg_dump while their system is active. Parallel pg_dump
>> on a stopped system will help some people (for migration, particularly)
>> but parallel pg_dump with snapshot cloning will help a lot more people.
>
> I doubt that. My thought about it is that parallel dump will suck
> enough resources from the source server, both disk and CPU, that you
> would never want to use it on a live production machine. Not even at
> 2am. And your proposed use case is hardly a "broad set" in any case.
> Thus, Joachim's approach seems perfectly sane from here. I certainly
> don't see that there's an argument for spending 10x more development
> effort to pick up such use cases.
>
> Another question that's worth asking is exactly what the use case would
> be for parallel pg_dump against a live server, whether the snapshots are
> synchronized or not. You will not be able to use that dump as a basis
> for PITR, so there is no practical way of incorporating any changes that
> occur after the dump begins. So what are you making it for? If it's a
> routine backup for disaster recovery, fine, but it's not apparent why
> you want max speed and to heck with live performance for that purpose.
> I think migration to a new server version (that's too incompatible for
> PITR or pg_migrate migration) is really the only likely use case.

I really doubt that - on fast systems pg_dump is completely CPU
bottlenecked, and the typical 1-2U hardware you get these days has
8-16 cores, so simply dedicating a few cores to dumping the database
during quieter times is very realistic.
Databases are growing larger and larger, and the single-threaded nature
of pg_dump makes it very hard to even stay within reasonable time
limits for doing the backup.
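
As a rough illustration of the kind of parallelism being discussed, here is a
minimal sketch (Python, purely hypothetical) that runs one pg_dump process per
table across a few worker threads. The database name, table list, and job count
are made up; this is not the proposed built-in feature, and without the
synchronized snapshots mentioned upthread the per-table dumps are taken at
slightly different points in time, so they are not mutually consistent.

    #!/usr/bin/env python3
    # Illustration only: dump several tables concurrently by running
    # one pg_dump process per table. DBNAME, TABLES and JOBS are
    # hypothetical; per-table dumps started at different times are not
    # a consistent snapshot of the database.
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    DBNAME = "mydb"                                               # hypothetical
    TABLES = ["orders", "order_lines", "customers", "audit_log"]  # hypothetical
    JOBS = 4                                                      # "a few cores"

    def dump_table(table: str) -> int:
        # -t limits the dump to one table; -Fc writes a compressed
        # custom-format archive to the file given with -f
        cmd = ["pg_dump", "-Fc", "-t", table, "-f", f"{table}.dump", DBNAME]
        return subprocess.run(cmd).returncode

    with ThreadPoolExecutor(max_workers=JOBS) as pool:
        results = list(pool.map(dump_table, TABLES))

    if any(rc != 0 for rc in results):
        raise SystemExit("one or more per-table dumps failed")

The inconsistency between the per-table dumps is exactly why snapshot cloning
comes up in this thread: a built-in parallel dump against a live server would
need all workers to see the same snapshot to be useful as a backup.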

Stefan
