Re: Move pg_attribute.attcompression to earlier in struct for reduced size?

From: Andres Freund <andres(at)anarazel(dot)de>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Michael Paquier <michael(at)paquier(dot)xyz>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: Move pg_attribute.attcompression to earlier in struct for reduced size?
Date: 2021-05-27 01:54:15
Message-ID: 20210527015415.ctuj4yrwnjip5kve@alap3.anarazel.de
Lists: pgsql-hackers

Hi,

On 2021-05-26 20:35:46 -0400, Tom Lane wrote:
> Andres Freund <andres(at)anarazel(dot)de> writes:
> > The efficiency bit is probably going to be swamped by the addition of
> > the compression handling, given the amount of additional work we're now
> > doing in reform_and_rewrite_tuple().
>
> Only if the user has explicitly requested a change of compression, no?

Oh, it'll definitely be more expensive in that case - but that seems
fair game. What I was wondering about was whether VACUUM FULL would be
measurably slower, because we'll now call toast_get_compression_id() on
each varlena datum. It's pretty easy for VACUUM FULL to be CPU bound
already, and presumably this'll add a bit.

Greetings,

Andres Freund
