Re: refactoring basebackup.c

From: Jeevan Ladhe <jeevan(dot)ladhe(at)enterprisedb(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Mark Dilger <mark(dot)dilger(at)enterprisedb(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>, tushar <tushar(dot)ahuja(at)enterprisedb(dot)com>
Subject: Re: refactoring basebackup.c
Date: 2021-09-22 16:40:32
Message-ID: CAOgcT0Orx1jC=hfUPHz_MgtPh6uZjV0p8vzNX2sQo03f8rwmuw@mail.gmail.com
Lists: pgsql-hackers

On Tue, Sep 21, 2021 at 10:27 PM Robert Haas <robertmhaas(at)gmail(dot)com> wrote:

> On Tue, Sep 21, 2021 at 9:08 AM Jeevan Ladhe
> <jeevan(dot)ladhe(at)enterprisedb(dot)com> wrote:
> > Yes, you are right here, and I could verify this fact with an experiment.
> > When autoflush is 1, the file gets less compressed, i.e. the compressed
> > file is larger than the one generated when autoflush is set to 0.
> > But, as of now, I couldn't think of a solution, as we need to really
> > advance the bytes written to the output buffer so that we can write into
> > the output buffer.
>
> I don't understand why you think we need to do that. What happens if
> you just change prefs->autoFlush = 1 to set it to 0 instead? What I
> think will happen is that you'll call LZ4F_compressUpdate a bunch of
> times without outputting anything, and then suddenly one of the calls
> will produce a bunch of output all at once. But so what? I don't see
> that anything in bbsink_lz4_archive_contents() would get broken by
> that.
>
> It would be a problem if LZ4F_compressUpdate() didn't produce anything
> and also didn't buffer the data internally, and expected us to keep
> the input around. That we would have difficulty doing, because we
> wouldn't be calling LZ4F_compressUpdate() if we didn't need to free up
> some space in that sink's input buffer. But if it buffers the data
> internally, I don't know why we care.
>

If I set prefs->autoFlush to 0, then after a few iterations
LZ4F_compressUpdate() fails with ERROR_dstMaxSize_tooSmall.
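
For illustration, a small standalone sketch like the one below (the 32kB
chunk size is made up for the example and is not the sink's buffer size)
should hit the same error as soon as lz4 has buffered enough input
internally; the output is simply discarded, since only the return codes
matter here:

/*
 * Standalone sketch, not the patch code: feed fixed-size chunks to
 * LZ4F_compressUpdate() with autoFlush = 0, using a destination buffer
 * that is only as large as one input chunk.
 */
#include <stdio.h>
#include <string.h>
#include <lz4frame.h>

#define CHUNK_SIZE (32 * 1024)		/* made-up size, for illustration */

int
main(void)
{
	static char in[CHUNK_SIZE];
	static char out[CHUNK_SIZE];	/* deliberately only CHUNK_SIZE bytes */
	LZ4F_preferences_t prefs;
	LZ4F_cctx  *cctx;
	size_t		status;
	int			i;

	memset(&prefs, 0, sizeof(prefs));
	prefs.autoFlush = 0;

	if (LZ4F_isError(LZ4F_createCompressionContext(&cctx, LZ4F_VERSION)))
		return 1;

	/* the frame header easily fits: CHUNK_SIZE >> LZ4F_HEADER_SIZE_MAX */
	status = LZ4F_compressBegin(cctx, out, sizeof(out), &prefs);
	if (LZ4F_isError(status))
		return 1;

	for (i = 0; i < 8; i++)
	{
		status = LZ4F_compressUpdate(cctx, out, sizeof(out),
									 in, sizeof(in), NULL);
		if (LZ4F_isError(status))
		{
			/* ERROR_dstMaxSize_tooSmall, once the internally buffered
			 * input plus this chunk exceeds the worst-case bound */
			printf("iteration %d: %s\n", i, LZ4F_getErrorName(status));
			return 1;
		}
	}

	LZ4F_freeCompressionContext(cctx);
	return 0;
}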

After digging a bit into the source of LZ4F_compressUpdate() in the LZ4
repository, I see that it throws this error when the destination buffer
capacity, which in our case is mysink->base.bbs_next->bbs_buffer_length,
is less than the compress bound that it calculates internally (via
LZ4F_compressBound()) for buffered_bytes + the input buffer (CHUNK_SIZE
in this case). I am not sure how we can control this.
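
The one guarantee lz4frame.h does give, as I read it, is that a destination
of at least LZ4F_compressBound(srcSize, &prefs) bytes is always large enough
for a single LZ4F_compressUpdate() call, even in the worst case with data
still buffered internally. So, in the sketch above, sizing the output buffer
like this instead (again, just a sketch):

/*
 * Worst-case output for one CHUNK_SIZE input, including whatever lz4
 * may still be holding internally because autoFlush = 0.
 * (Needs <stdlib.h> for malloc.)
 */
size_t		out_len = LZ4F_compressBound(CHUNK_SIZE, &prefs);
char	   *out = malloc(out_len);
...
status = LZ4F_compressUpdate(cctx, out, out_len, in, CHUNK_SIZE, NULL);

makes the loop run to completion. But in our case that destination is the
next sink's buffer (bbs_buffer_length), and I am not sure whether dictating
its size like that is an option.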

Regards,
Jeevan Ladhe
