From: Andres Freund <andres(at)anarazel(dot)de>
To: Michael Paquier <michael(at)paquier(dot)xyz>
Cc: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Andrey Borodin <x4mmm(at)yandex-team(dot)ru>, Konstantin Knizhnik <k(dot)knizhnik(at)postgrespro(dot)ru>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>, Vladimir Leskov <vladimirlesk(at)yandex-team(dot)ru>
Subject: Re: pglz performance
On 2019-08-05 16:04:46 +0900, Michael Paquier wrote:
> On Fri, Aug 02, 2019 at 07:52:39PM +0200, Tomas Vondra wrote:
> > On Fri, Aug 02, 2019 at 10:12:58AM -0700, Andres Freund wrote:
> >> Why would they be stuck continuing to *compress* with pglz? As we
> >> fully retoast on write anyway we can just gradually switch over to the
> >> better algorithm. Decompression speed is another story, of course.
> > Hmmm, I don't remember the details of those patches so I didn't realize
> > it allows incremental recompression. If that's possible, that would mean
> > existing systems can start using it. Which is good.
> It may become a problem on some platforms though (Windows?), so
> patches to improve either the compression or decompression of pglz
> are not that crazy, as pglz is still likely to be used; and for
> read-mostly workloads switching to a new algo may not be worth the
> extra cost, so it is not as if we are going to drop it completely
> either.
What's the platform dependency that you're thinking of? And how's
compression speed relevant to "read mostly"? Switching would just
happen whenever tuple fields are changed. And it'll have no
additional cost, because all it does is reduce the cost of a toast
write that'd otherwise have happened with pglz.
> Linking to system libraries would make our maintenance much easier,
> and when it comes to have a copy of something else in the tree we
> would be stuck with more maintenance around it. These tend to rot
I don't think it's really our experience that they "rot easily".
> After that comes the case where the compression algo is present in
> the binary on one server but not another, in which case we get an
> automatic ERROR on a mismatching algo, or a FATAL during
> decompression of FPWs at recovery when wal_compression is used.
Huh? That's a failure case that only exists if you don't include it in
the tree (with the option to use an out-of-tree lib)?