Re: WIP/PoC for parallel backup

From: Kashif Zeeshan <kashif(dot)zeeshan(at)enterprisedb(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Rajkumar Raghuwanshi <rajkumar(dot)raghuwanshi(at)enterprisedb(dot)com>, Ahsan Hadi <ahsan(dot)hadi(at)gmail(dot)com>, Asif Rehman <asifr(dot)rehman(at)gmail(dot)com>, Jeevan Chalke <jeevan(dot)chalke(at)enterprisedb(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: WIP/PoC for parallel backup
Date: 2020-04-02 11:54:49
Message-ID: CAKfXphqMjDO+CCiSSPweQnxfAW95kJDbZ4Nwa5uh46HWdrDURQ@mail.gmail.com
Lists: pgsql-hackers

On Thu, Apr 2, 2020 at 4:48 PM Robert Haas <robertmhaas(at)gmail(dot)com> wrote:

> On Thu, Apr 2, 2020 at 7:30 AM Kashif Zeeshan <
> kashif(dot)zeeshan(at)enterprisedb(dot)com> wrote:
>
>> The backup failed with the error "error: could not connect to server: could
>> not look up local user ID 1000: Too many open files" when max_wal_senders
>> was set to 2000.
>> The errors were generated for the workers starting from backup worker 1017.
>>
>
> It wasn't the fact that you set max_wal_senders to 2000. It was the fact
> that you specified 1990 parallel workers. By so doing, you overloaded the
> machine, which is why everything failed. That's to be expected.
>
Thanks a lot, Robert.

In this case the backup folder was not emptied even though the backup
failed; the cleanup should be done in that case as well.
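
To illustrate the kind of cleanup I mean, here is a rough standalone sketch
(not code from the patch; the directory path and the success flag are made
up for the example): the client could register an atexit() handler that
removes the partially written backup directory whenever the run does not
finish cleanly.

#define _XOPEN_SOURCE 500
#include <ftw.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>

/* hypothetical values, just for the example */
static const char *backup_dir = "/tmp/backup_test";
static bool backup_succeeded = false;

/* nftw() callback: remove() handles both files and (emptied) directories */
static int
remove_entry(const char *path, const struct stat *sb, int type, struct FTW *ftwbuf)
{
	return remove(path);
}

/* atexit() hook: wipe the partial backup directory unless we finished cleanly */
static void
cleanup_on_exit(void)
{
	if (!backup_succeeded)
	{
		fprintf(stderr, "backup failed, removing \"%s\"\n", backup_dir);
		/* FTW_DEPTH walks children before their parent directory */
		nftw(backup_dir, remove_entry, 64, FTW_DEPTH | FTW_PHYS);
	}
}

int
main(void)
{
	atexit(cleanup_on_exit);

	/* ... run the parallel backup here; any error path can simply exit() ... */

	backup_succeeded = true;	/* reached only when every worker finished */
	return 0;
}

If I remember correctly, non-parallel pg_basebackup already removes the
target directory on failure unless --no-clean is given, so presumably the
parallel workers would just need to funnel their failures through the same
path.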

> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>
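
P.S. Regarding the failures starting around backup worker 1017: that is
roughly what I would expect if the client machine's soft limit on open
files is the common Linux default of 1024, since each worker connection
consumes descriptors on top of those already open. That default is only a
guess about the environment; a quick standalone check (again just a sketch,
unrelated to the patch itself) would be:

#include <stdio.h>
#include <sys/resource.h>

int
main(void)
{
	struct rlimit rl;

	/* report the per-process limit on open file descriptors */
	if (getrlimit(RLIMIT_NOFILE, &rl) == 0)
		printf("open-file limit: soft=%llu hard=%llu\n",
			   (unsigned long long) rl.rlim_cur,
			   (unsigned long long) rl.rlim_max);

	return 0;
}

If that is the bottleneck, raising the soft limit (ulimit -n) before the
run should push the failure point out, although Robert's point about
overloading the machine with so many workers still stands.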

--
Regards
====================================
Kashif Zeeshan
Lead Quality Assurance Engineer / Manager

EnterpriseDB Corporation
The Enterprise Postgres Company
