From: Claudio Freire <klaussfreire(at)gmail(dot)com>
To: Josh Berkus <josh(at)agliodbs(dot)com>
Cc: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Robert Haas <robertmhaas(at)gmail(dot)com>, "David E(dot) Wheeler" <david(at)justatheory(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>, Andrew Dunstan <andrew(at)dunslane(dot)net>, Jan Wieck <jan(at)wi3ck(dot)info>
Subject: Re: jsonb format is pessimal for toast compression
Date: 2014-09-15 17:23:54
Message-ID: CAGTBQpaYrm4S2hvF822CFyUExAg2bnG2AHTrTdn6Jwt1Ry9Qrg@mail.gmail.com
Lists: pgsql-hackers

On Mon, Sep 15, 2014 at 2:12 PM, Josh Berkus <josh(at)agliodbs(dot)com> wrote:
> If not, I think the corner case is so obscure as to be not worth
> optimizing for. I can't imagine that more than a tiny minority of our
> users are going to have thousands of keys per datum.

Worst case is cost that scales linearly with the number of keys, so how
expensive it gets depends on how many keys there are.

It would have an effect only on uncompressed jsonb, since compressed
jsonb already pays a linear cost for decompression.

I'd suggest testing performance with a large number of small keys in
uncompressed form. It's bound to show a noticeable regression there.

Now, "a large number" could be 200, 2000, or even 20k keys. I'd guess
several sizes should be tested to find the shape of the curve.
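
Something like the following could serve as a starting point for that
test (just a sketch; the table and key names are made up here, and
EXTERNAL storage is what keeps the datum toasted but uncompressed):

CREATE TABLE t_json (id serial PRIMARY KEY, doc jsonb);

-- keep the column out of line but uncompressed, so we measure jsonb
-- key lookup itself rather than pglz decompression
ALTER TABLE t_json ALTER COLUMN doc SET STORAGE EXTERNAL;

-- 1000 documents of 2000 small keys each; repeat with 200 and 20000
-- keys to get the shape of the curve
INSERT INTO t_json (doc)
SELECT (SELECT jsonb_object_agg('k' || i, i)
        FROM generate_series(1, 2000) AS i)
FROM generate_series(1, 1000);

-- time a lookup of a key near the end of the object (\timing in psql,
-- or wrap it in EXPLAIN ANALYZE)
SELECT doc -> 'k1999' FROM t_json;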
