Re: WIP/PoC for parallel backup

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Asif Rehman <asifr(dot)rehman(at)gmail(dot)com>
Cc: PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: WIP/PoC for parallel backup
Date: 2019-10-04 12:07:42
Message-ID: CA+Tgmobd+Lh8sO7V4wow3-9cfOf45MoyDh3pHP+3mp+8VgNh_w@mail.gmail.com
Lists: pgsql-hackers

On Fri, Oct 4, 2019 at 7:02 AM Asif Rehman <asifr(dot)rehman(at)gmail(dot)com> wrote:
> Based on my understanding, your main concern is that the files won't be distributed fairly, i.e. one worker might get a big file and take more time while the others finish early with smaller files? In this approach I have created a list of files in descending order based on their sizes, so all the big files come at the top. The maximum file size in PG is 1GB, so if we have four workers picking up files from the list one by one, the worst case is that one worker gets a 1GB file to process while the others get smaller files. However, with the files sorted by size in descending order and handed out to workers one by one, there is a very high likelihood of the work being distributed evenly. Does this address your concerns?
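
[Editorial note: a minimal standalone sketch of the scheme described in the quoted paragraph, assuming files sorted by size in descending order and each file going to the least-loaded worker, which approximates workers pulling the next file as they finish. This is illustrative C, not PostgreSQL source; all names here (backup_file, cmp_size_desc, the sample file list) are hypothetical.]

#include <stdio.h>
#include <stdlib.h>

typedef struct
{
    const char *path;
    long        size;       /* bytes; PG caps relation segments at 1GB */
} backup_file;

/* qsort comparator: largest file first */
static int
cmp_size_desc(const void *a, const void *b)
{
    long sa = ((const backup_file *) a)->size;
    long sb = ((const backup_file *) b)->size;

    return (sa < sb) - (sa > sb);
}

int
main(void)
{
    /* hypothetical file list for illustration */
    backup_file files[] = {
        {"base/16384/2619", 1073741824L},   /* worst case: a full 1GB segment */
        {"base/16384/2620", 524288000L},
        {"pg_wal/000000010000000000000001", 16777216L},
        {"postgresql.conf", 28000L},
    };
    int         nfiles = sizeof(files) / sizeof(files[0]);
    int         nworkers = 4;
    long        assigned[4] = {0};
    int         next;

    qsort(files, nfiles, sizeof(backup_file), cmp_size_desc);

    /*
     * Hand out files one by one, always to the worker with the least
     * total work so far; this models a free worker coming back to the
     * shared list for its next file.
     */
    for (next = 0; next < nfiles; next++)
    {
        int         w = 0,
                    i;

        for (i = 1; i < nworkers; i++)
            if (assigned[i] < assigned[w])
                w = i;
        assigned[w] += files[next].size;
        printf("worker %d <- %s (%ld bytes)\n",
               w, files[next].path, files[next].size);
    }
    return 0;
}
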

Somewhat, but I'm not sure it's good enough. There are lots of reasons
why two processes that are started at the same time with the same
amount of work might not finish at the same time.

I'm also not particularly excited about having the server do the
sorting based on file size. Seems like that ought to be the client's
job, if the client needs the sorting.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
