Re: TOAST usage setting

From: "Zeugswetter Andreas ADI SD" <ZeugswetterA(at)spardat(dot)at>
To: "Bruce Momjian" <bruce(at)momjian(dot)us>, "Gregory Stark" <stark(at)enterprisedb(dot)com>
Cc: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>, <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: TOAST usage setting
Date: 2007-05-31 13:28:48
Message-ID: E1539E0ED7043848906A8FF995BDA579021B2D02@m0143.s-mxs.net


> I tested EXTERN_TUPLES_PER_PAGE for values 4(default), 2, and 1:
>
> 4 15.596
> 2 15.197
> 1 14.6
>
> which is basically a 3% decrease from 4->2 and 2->1. The
> test script and result are here:
>
> http://momjian.us/expire/TOAST2/
>
> shared_buffers again was 32MB so all the data was in memory.

Thanks for the test. (The test is for one row that is 100 kB wide.)

That is good: it shows that we see a small advantage even in the
everything-cached case.
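
Just to spell out the arithmetic behind the "basically 3%": the per-step
improvement is roughly 2.6% for 4->2 and 3.9% for 2->1. A trivial check
over the quoted timings (plain arithmetic, nothing PostgreSQL-specific):

#include <stdio.h>

int main(void)
{
    /* elapsed times (seconds) from the quoted test, one per
     * EXTERN_TUPLES_PER_PAGE setting */
    double t4 = 15.596, t2 = 15.197, t1 = 14.6;

    printf("4 -> 2: %.1f%% faster\n", 100.0 * (t4 - t2) / t4);  /* ~2.6 */
    printf("2 -> 1: %.1f%% faster\n", 100.0 * (t2 - t1) / t2);  /* ~3.9 */
    return 0;
}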

What we don't have yet are numbers for whether EXTERN_TUPLES_PER_PAGE=1
substantially increases the toast table size in real-life scenarios,
what happens in the worst case (~48% wastage compared to the previous
12%), and whether one row per page works well with autovacuum.
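
For orientation, here is a rough sketch of how EXTERN_TUPLES_PER_PAGE
translates into an approximate chunk size (BLCKSZ = 8192 assumed, header
overheads guessed; the real TOAST_MAX_CHUNK_SIZE computation in
tuptoaster.h differs in the details):

#include <stdio.h>

#define BLCKSZ          8192
#define PAGE_OVERHEAD     24    /* assumed page header size */
#define CHUNK_OVERHEAD    40    /* assumed tuple header + chunk_id/chunk_seq
                                 * + varlena header */

static int
approx_chunk_size(int tuples_per_page)
{
    return (BLCKSZ - PAGE_OVERHEAD) / tuples_per_page - CHUNK_OVERHEAD;
}

int main(void)
{
    int n;

    for (n = 4; n >= 1; n /= 2)
        printf("EXTERN_TUPLES_PER_PAGE = %d -> chunk data ~ %d bytes\n",
               n, approx_chunk_size(n));
    return 0;
}

That is roughly 2000 / 4000 / 8100 bytes of data per chunk for 4 / 2 / 1
tuples per page, which is the size range the wastage question is about.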

The bad case (with EXTERN_TUPLES_PER_PAGE=1) is when most toast tuples
are larger than the TOAST_MAX_CHUNK_SIZE that two tuples per page would
allow, but enough smaller than a page that we care about the wastage.
Maybe we can special-case that range: determine (and lock) the free
space of any cheap-to-get-at non-empty page (e.g. the current insert
target page) and split the toast data there.
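
To put a number on that range, a rough sketch of the arithmetic (same
assumed BLCKSZ and overheads as the sketch above, so the ~50% it prints
is only in the ballpark of the ~48% figure, not an exact value):

#include <stdio.h>

#define BLCKSZ          8192
#define PAGE_OVERHEAD     24    /* assumed page header size */
#define CHUNK_OVERHEAD    40    /* assumed per-chunk tuple overhead */

int main(void)
{
    /* Smallest chunk in the bad range: one byte more than what would
     * still let two chunks share a page, so each such chunk gets a
     * page of its own. */
    int chunk_data = (BLCKSZ - PAGE_OVERHEAD) / 2 - CHUNK_OVERHEAD + 1;
    int used       = PAGE_OVERHEAD + CHUNK_OVERHEAD + chunk_data;

    printf("chunk of %d bytes -> 1 per page, ~%.0f%% of the page wasted\n",
           chunk_data, 100.0 * (BLCKSZ - used) / BLCKSZ);
    return 0;
}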

Andreas
