From: Venkata B Nagothi <nag1010(at)gmail(dot)com>
To: Sameer Kumar <sameer(dot)kumar(at)ashnik(dot)com>
Cc: Patrick B <patrickbakerbr(at)gmail(dot)com>, pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: Running pg_dump from a slave server
Date: 2016-08-17 04:00:47
Message-ID: CAEyp7J-TJWqwQAj06y68KujuX9cYU2y=QrTJgZCWqYggbukycg@mail.gmail.com
Lists: pgsql-general
On Wed, Aug 17, 2016 at 1:31 PM, Sameer Kumar <sameer(dot)kumar(at)ashnik(dot)com>
wrote:
>
>
> On Wed, Aug 17, 2016 at 10:34 AM Patrick B <patrickbakerbr(at)gmail(dot)com>
> wrote:
>
>> Hi guys,
>>
>> I'm using PostgreSQL 9.2 and I got one master and one slave with
>> streaming replication.
>>
>> Currently I have a backup script that runs daily on the master; it
>> generates a dump file with 30GB of data.
>>
>> I changed the script to run on the slave instead of the master, and
>> I'm getting these errors now:
>>
>> pg_dump: Dumping the contents of table "invoices" failed: PQgetResult()
>>> failed.
>>> pg_dump: Error message from server: ERROR: canceling statement due to
>>> conflict with recovery
>>> DETAIL: User was holding a relation lock for too long.
>>
>>
> Looks like while your pg_dump sessions were trying to fetch the data,
> someone ran a DDL statement, REINDEX, or VACUUM FULL on the master database.
>
>>
>> Isn't that possible? I can't run pg_dump from a slave?
>>
>
> Well, you can do that, but it has some limitations. If you do this quite
> often, it would be better to have a dedicated standby for taking
> backups/pg_dumps. On that standby you can set max_standby_streaming_delay
> and max_standby_archive_delay to -1. But I would not recommend doing this
> if you use your standby for other read queries or for high availability.
>
> Another option would be to avoid queries that take an exclusive lock on
> the master database while pg_dump is running.
>
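To expand on the dedicated-standby suggestion above, here is a sketch of the relevant recovery settings and a quick way to confirm that recovery conflicts are what is cancelling the dump. The config-file path and database name are assumptions for illustration; a value of -1 tells the standby to wait for conflicting queries indefinitely rather than cancel them:

```shell
# On the standby, check how often queries have been cancelled per conflict
# type; confl_lock counts cancellations like the one pg_dump hit.
psql -d postgres -c "SELECT datname, confl_lock, confl_snapshot FROM pg_stat_database_conflicts;"

# On a dedicated backup standby only, disable conflict cancellation entirely
# (path to postgresql.conf is an assumption; adjust for your install).
cat >> /var/lib/postgresql/9.2/main/postgresql.conf <<'EOF'
max_standby_streaming_delay = -1
max_standby_archive_delay = -1
EOF
# Reload the standby so the new settings take effect.
psql -d postgres -c "SELECT pg_reload_conf();"
```

Note that with -1 the standby can fall arbitrarily far behind the master while a long pg_dump holds its locks, which is why this is only appropriate on a standby dedicated to backups.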
Another workaround could be to pause recovery, run pg_dump, and then resume
the recovery process; I am not sure whether this workaround has been
considered.
You can execute "pg_xlog_replay_pause()" before running pg_dump and then
execute "pg_xlog_replay_resume()" after the pg_dump process completes.
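A minimal sketch of that pause/dump/resume sequence, run against the standby (host and database names are placeholders; on PostgreSQL 10 and later the functions are renamed pg_wal_replay_pause()/pg_wal_replay_resume()):

```shell
# Pause WAL replay so pg_dump sees a frozen, conflict-free snapshot.
psql -h standby-host -d postgres -c "SELECT pg_xlog_replay_pause();"

# Take the dump from the standby; custom format (-Fc) allows parallel restore.
pg_dump -h standby-host -Fc -f /backups/mydb.dump mydb

# Resume replay so the standby catches up with the master again.
psql -h standby-host -d postgres -c "SELECT pg_xlog_replay_resume();"
```

While replay is paused the standby accumulates lag equal to the dump's duration, so this suits a backup-dedicated standby better than one also serving failover.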
Regards,
Venkata B N
Fujitsu Australia
Next Message: Venkata B Nagothi | 2016-08-17 04:07:29 | Re: schema advice for event stream with tagging and filtering
Previous Message: Patrick B | 2016-08-17 03:50:36 | Re: Running pg_dump from a slave server