Re: pg_lzcompress strategy parameters

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Gregory Stark <stark(at)enterprisedb(dot)com>
Cc: "Jan Wieck" <JanWieck(at)Yahoo(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: pg_lzcompress strategy parameters
Date: 2007-08-05 22:30:32
Message-ID: 26793.1186353032@sss.pgh.pa.us
Lists: pgsql-hackers

Gregory Stark <stark(at)enterprisedb(dot)com> writes:
> (Incidentally, this means what I said earlier about uselessly trying to
> compress objects below 256 is even grosser than I realized. If you have a
> single large object which even after compressing will be over the toast target
> it will force *every* varlena to be considered for compression even though
> they mostly can't be compressed. Considering a varlena smaller than 256 for
> compression only costs a useless palloc, so it's not the end of the world but
> still. It does seem kind of strange that a tuple which otherwise wouldn't be
> toasted at all suddenly gets all its fields compressed if you add one more
> field which ends up being stored externally.)

Yeah. It seems like we should modify the first and third loops so that
if the largest attribute, after compression if any, is *by itself*
larger than the target threshold, we push it out to the toast table
immediately, rather than continuing to compress other fields that might
well not need to be touched.
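
Just to illustrate the rule, here's a toy standalone sketch (not a patch
against heap_tuptoaster.c -- the attribute sizes, the assumed 2:1
compression ratio, and the TARGET value are all made up for the example):

    #include <stdio.h>

    #define TARGET 2000             /* stand-in for TOAST_TUPLE_TARGET */

    int
    main(void)
    {
        int     sizes[] = {120, 90, 9000, 60};   /* inline attribute sizes */
        char    action[] = {' ', ' ', ' ', ' '}; /* 'c' = compressed, 'p' = pushed external */
        int     nattrs = 4;
        int     tuple_size = 120 + 90 + 9000 + 60;

        while (tuple_size > TARGET)
        {
            int     biggest = -1;
            int     compressed;

            /* find the largest attribute still inline and untouched */
            for (int i = 0; i < nattrs; i++)
                if (action[i] == ' ' &&
                    (biggest < 0 || sizes[i] > sizes[biggest]))
                    biggest = i;
            if (biggest < 0)
                break;              /* nothing left to shrink */

            /* pretend compression halves the attribute */
            compressed = sizes[biggest] / 2;

            if (compressed > TARGET)
            {
                /*
                 * Even compressed, this attribute alone exceeds the
                 * target, so push it out to the toast table right away
                 * instead of going on to compress the remaining (small)
                 * attributes.
                 */
                action[biggest] = 'p';
                tuple_size -= sizes[biggest];
            }
            else
            {
                /* ordinary case: keep it inline, compressed */
                action[biggest] = 'c';
                tuple_size -= sizes[biggest] - compressed;
                sizes[biggest] = compressed;
            }
        }

        for (int i = 0; i < nattrs; i++)
            printf("attr %d: %d bytes, action '%c'\n", i, sizes[i], action[i]);
        printf("inline tuple size: %d\n", tuple_size);
        return 0;
    }

With those made-up sizes, only the 9000-byte attribute gets pushed out
and the three small attributes are never touched, which is the behavior
we'd want here.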

regards, tom lane
