Re: Error while copying a large file in pg_rewind

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Kuntal Ghosh <kuntalghosh(dot)2007(at)gmail(dot)com>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Error while copying a large file in pg_rewind
Date: 2017-07-03 13:53:35
Message-ID: 13255.1499090015@sss.pgh.pa.us
Lists: pgsql-hackers

Kuntal Ghosh <kuntalghosh(dot)2007(at)gmail(dot)com> writes:
> pg_rewind throws the following error when there is a file of large
> size available in the Slave server's data directory.

Hm. Before we add a bunch of code to deal with that, are we sure we
*want* it to copy such files? Seems like that's expending a lot of
data-transfer work for zero added value --- consider e.g. a server
with a bunch of old core files lying about in $PGDATA. Given that
it's already excluded all database-data-containing files, maybe we
should just set a cap on the plausible size of auxiliary files.

regards, tom lane
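
A minimal sketch of the cap idea suggested above, purely illustrative:
the helper name, the 1 GB limit, and the message wording are assumptions
for this sketch, not actual pg_rewind code.

/*
 * Hypothetical illustration of capping the size of non-database
 * ("auxiliary") files that pg_rewind is willing to copy.  The cap
 * value and function name are placeholders.
 */
#include <stdbool.h>
#include <stdio.h>
#include <sys/types.h>

#define MAX_AUX_FILE_SIZE	((off_t) 1024 * 1024 * 1024)	/* 1 GB, arbitrary */

static bool
should_copy_aux_file(const char *path, off_t size)
{
	if (size > MAX_AUX_FILE_SIZE)
	{
		/* Skip implausibly large non-database files, e.g. stray core dumps. */
		fprintf(stderr,
				"skipping \"%s\": size %lld exceeds auxiliary file cap\n",
				path, (long long) size);
		return false;
	}
	return true;
}

int
main(void)
{
	/* A 3 GB stray core file in $PGDATA would be skipped under this policy. */
	should_copy_aux_file("core.12345", (off_t) (3LL * 1024 * 1024 * 1024));
	return 0;
}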
