Re: pg_dump

From: Fabio Pardi <f(dot)pardi(at)portavita(dot)eu>
To: pgsql-bugs(at)lists(dot)postgresql(dot)org
Subject: Re: pg_dump
Date: 2020-05-08 14:17:23
Message-ID: eb376899-c163-09ef-61a1-f1746fcfb6e5@portavita.eu
Lists: pgsql-bugs

On 08/05/2020 12:18, Volodymyr Blahoi wrote:
> When you create 5000 schemas, each with 100 tables containing 10 different data types, and execute `pg_dump -a --inserts -t schema1.table2 dbname`, it takes around 2 minutes. How can I make it faster?

This is not the right mailing list for this question; you might want to write to the "performance" list instead.

About your problem: one solution might be to make sure you are writing your dump to a separate set of disks from the one your database reads its data from.
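A minimal sketch of that suggestion, assuming a hypothetical mount point `/mnt/dumpdisk` that sits on a different physical disk than the database's data directory:

```shell
# Redirect the dump to a separate disk so the dump's writes do not
# compete with the database's reads for the same I/O bandwidth.
pg_dump -a --inserts -t schema1.table2 dbname > /mnt/dumpdisk/table2.sql
```

If the restore target allows it, dropping `--inserts` (so pg_dump emits its default `COPY` statements) also tends to be faster than row-by-row `INSERT` output.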

regards,

fabio pardi

In response to

  • pg_dump at 2020-05-08 10:18:36 from Volodymyr Blahoi
