Re: Error while copying a large file in pg_rewind

From: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>
To: Kuntal Ghosh <kuntalghosh(dot)2007(at)gmail(dot)com>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Error while copying a large file in pg_rewind
Date: 2017-07-03 13:20:36
Message-ID: CAB7nPqQRndpvCFu95wYoSFNYZtohqL9b_rDvBvGYjmix5jLOhg@mail.gmail.com
Lists: pgsql-hackers

On Mon, Jul 3, 2017 at 8:22 PM, Kuntal Ghosh <kuntalghosh(dot)2007(at)gmail(dot)com> wrote:
> pg_rewind throws the following error when there is a large file in
> the slave server's data directory.

Oops.

> I guess we have to change the data type to bigint. Also, we need some
> implementation of ntohl() for 8-byte data types. I've attached a
> script to reproduce the error and a draft patch.

pg_basebackup/ has its own way of doing this with fe_recvint64(), as
do the large object routines in libpq. I would think that, at least
on HEAD, these could be gathered into a small set of shared routines.
It is annoying to have a third copy of the same thing. Granted, that
is not invasive, but src/common/ would be a nice place to put it.
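
For reference, the kind of 8-byte network-to-host helper being
discussed could look roughly like the following. This is only a
sketch: the function name and its placement are illustrative, not the
actual fe_recvint64() code.

/*
 * Sketch of an 8-byte network-to-host conversion (hypothetical name,
 * not the actual fe_recvint64() implementation).
 */
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

static uint64_t
recv_uint64(const char *buf)
{
    uint32_t    hi;
    uint32_t    lo;

    /* Read the high and low 4-byte halves, byte-swapping each with ntohl(). */
    memcpy(&hi, buf, sizeof(hi));
    memcpy(&lo, buf + sizeof(hi), sizeof(lo));

    return ((uint64_t) ntohl(hi) << 32) | ntohl(lo);
}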

- if (PQgetlength(res, 0, 1) != sizeof(int32))
+ if (PQgetlength(res, 0, 1) != sizeof(long long int))
This had better be int64.
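That is, the check would presumably end up as something like:

+ if (PQgetlength(res, 0, 1) != sizeof(int64))

with the value then decoded by whatever shared 64-bit receive helper
we settle on.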
--
Michael
