Re: Refactor pg_dump as a library?

From: David Steele <david(at)pgmasters(dot)net>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Andreas Karlsson <andreas(at)proxel(dot)se>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Jakob Egger <jakob(at)eggerapps(dot)at>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Refactor pg_dump as a library?
Date: 2016-04-14 17:40:21
Message-ID: 570FD605.5030501@pgmasters.net
Lists: pgsql-hackers

On 4/14/16 1:33 PM, Tom Lane wrote:
> David Steele <david(at)pgmasters(dot)net> writes:
>> On 4/14/16 7:16 AM, Andreas Karlsson wrote:
>>> I am personally not a fan of the pg_get_Xdef() functions due to their
>>> heavy reliance on the syscache which feels rather unsafe in combination
>>> with concurrent DDL.
>
>> As far as I know pg_dump share locks everything before it starts so
>> there shouldn't be issues with concurrent DDL. Try creating a new
>> inherited table with FKs, etc. during a pg_dump and you'll see lots of
>> fun lock waits.
>
> I think pg_dump is reasonably proof against DDL on tables. It is not
> at all proof against DDL on other sorts of objects, such as functions,
> because of the fact that the syscache will follow catalog updates that
> occur after pg_dump's transaction snapshot.

Hmm, OK. I'll need to go look at that.

I thought the backend serving pg_dump would fill its syscache when it
took all the locks and then not update it during the actual dump. If
that's not the case, then it's a bit scary, yes.

It seems to make a good case for physical backups over logical ones.

--
-David
david(at)pgmasters(dot)net
