From: Paul Ramsey <pramsey(at)cleverelephant(dot)ca>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: Compressed TOAST Slicing
On Thu, Nov 1, 2018 at 4:02 PM Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Paul Ramsey <pramsey(at)cleverelephant(dot)ca> writes:
> > On Thu, Nov 1, 2018 at 2:29 PM Stephen Frost <sfrost(at)snowman(dot)net> wrote:
> >> and secondly, why we wouldn't consider
> >> handling a non-zero offset. A non-zero offset would, of course, still
> >> require decompressing from the start and then just throwing away what we
> >> skip over, but we're going to be doing that anyway, aren't we? Why not
> >> stop when we get to the end, at least, and save ourselves the trouble of
> >> decompressing the rest and then throwing it away.
> > I was worried about changing the pg_lz code too much because it scared
> > me, but debugging some stuff made me read it more closely, so I fear it
> > less now. Doing interior slices seems not unreasonable, so I will give
> > it a try.
> I think Stephen was just envisioning decompressing from offset 0 up to
> the end of what's needed, and then discarding any data before the start
> of what's needed; at least, that's what'd occurred to me.
Understood, that makes lots of sense and is a very small change, it turns
out. Allocating just what is needed also makes things faster yet, which is
nice, and no big surprise.
Some light testing seems to show no obvious regression in speed of
decompression for the usual "decompress it all" case.
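The "interior slice" approach being discussed can be sketched with a toy
decoder: decompress from the start of the compressed data only until
offset + length output bytes exist, then discard the prefix before the
offset. This is a minimal illustration using a made-up run-length format,
not the actual pglz code or its API; the function names
(`rle_decompress_upto`, `rle_slice`) are hypothetical.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy run-length decoder standing in for pglz: the input is a sequence
 * of (count, byte) pairs. Decoding stops as soon as `limit` output
 * bytes have been produced -- the early exit that makes slicing cheap.
 * Returns the number of bytes written to dst. */
static size_t
rle_decompress_upto(const unsigned char *src, size_t srclen,
                    unsigned char *dst, size_t limit)
{
    size_t out = 0;

    for (size_t i = 0; i + 1 < srclen && out < limit; i += 2)
    {
        size_t n = src[i];

        for (size_t j = 0; j < n && out < limit; j++)
            dst[out++] = src[i + 1];
    }
    return out;
}

/* Interior slice: decompress from offset 0 only up to offset + length,
 * then keep just the window [offset, offset + length). We still pay
 * for decoding the prefix, but skip everything past the slice's end.
 * Returns the number of bytes copied into out. */
static size_t
rle_slice(const unsigned char *src, size_t srclen,
          size_t offset, size_t length, unsigned char *out)
{
    unsigned char buf[256];     /* scratch buffer sized for the demo */
    size_t produced;
    size_t avail;
    size_t n;

    produced = rle_decompress_upto(src, srclen, buf, offset + length);
    if (produced <= offset)
        return 0;               /* slice begins past the actual data */
    avail = produced - offset;
    n = (avail < length) ? avail : length;
    memcpy(out, buf + offset, n);
    return n;
}
```

For example, the compressed input {3,'a',4,'b',2,'c'} expands to
"aaabbbbcc"; asking for offset 2, length 4 decodes only the first six
output bytes and returns "abbb", never touching the trailing runs.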