Re: pg_dump

From: "David G(dot) Johnston" <david(dot)g(dot)johnston(at)gmail(dot)com>
To: Volodymyr Blahoi <vblagoi(at)gmail(dot)com>
Cc: "pgsql-bugs(at)lists(dot)postgresql(dot)org" <pgsql-bugs(at)lists(dot)postgresql(dot)org>
Subject: Re: pg_dump
Date: 2020-05-08 14:08:08
Message-ID: CAKFQuwbf8px_QgEG2LhCS_cpi47=j0ttS5Xkk3uLV4FbHHpHHg@mail.gmail.com
Lists: pgsql-bugs

On Friday, May 8, 2020, Volodymyr Blahoi <vblagoi(at)gmail(dot)com> wrote:

> When you create 5000 schemas, each containing 100 tables with 10 different
> data types, and execute pg_dump -a --inserts -t schema1.table2 dbname, it
> takes around 2 minutes. How can I make it faster?
>

This isn’t a bug...and in any case you didn’t specify the important detail,
which is how big that table is...but “get better disk drive hardware” is
probably one answer.

David J.
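
[Editor's note: a rough sketch of commands one might time for comparison,
using the schema1.table2 and dbname identifiers from the question above; the
output file names are illustrative only, and actual timings depend on the
data and hardware. The default COPY-format output is documented to be
considerably faster than per-row INSERTs, and --rows-per-insert (available
since PostgreSQL 12) is a middle ground when INSERT statements are required.]

    # Baseline from the question: one INSERT statement per row
    time pg_dump -a --inserts -t schema1.table2 dbname > table2_inserts.sql

    # Default COPY-format output, usually much faster than --inserts
    time pg_dump -a -t schema1.table2 dbname > table2_copy.sql

    # If INSERT statements are required, batching rows reduces overhead
    # (--rows-per-insert needs pg_dump from PostgreSQL 12 or later)
    time pg_dump -a --rows-per-insert=1000 -t schema1.table2 dbname > table2_batched.sql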

In response to

  • pg_dump at 2020-05-08 10:18:36 from Volodymyr Blahoi
