Re: Would it be possible to have parallel archiving?

From: Stephen Frost <sfrost(at)snowman(dot)net>
To: hubert depesz lubaczewski <depesz(at)depesz(dot)com>
Cc: pgsql-hackers mailing list <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Would it be possible to have parallel archiving?
Date: 2018-08-28 12:32:14
Message-ID: 20180828123214.GF3326@tamriel.snowman.net
Lists: pgsql-hackers

Greetings,

* hubert depesz lubaczewski (depesz(at)depesz(dot)com) wrote:
> I'm in a situation where we quite often generate more WAL than we can
> archive. The thing is - archiving takes a long(ish) time, but it's
> a multi-step process and includes talking to remote servers over network.
>
> I tested that simply by running archiving in parallel I can easily get
> 2-3 times higher throughput.
>
> But - I'd prefer to keep postgresql knowing what is archived, and what
> not, so I can't do the parallelization on my own.
>
> So, the question is: is it technically possible to have parallel
> archiving, and would anyone be willing to work on it? (sorry, my
> C skills are basically none, so I can't realistically hack it myself)

Not entirely sure what the concern is around "postgresql knowing what is
archived", but pgbackrest already does exactly this kind of parallel
archiving for environments where the WAL volume is larger than a single
thread can handle, and we've been rewriting it in C specifically to make
it fast enough to keep PG up to date regarding what has already been
pushed.

Happy to discuss it further, as well as other related topics and how
backup software could be given better APIs to tell PG what's been
archived, etc.

Thanks!

Stephen
