Re: Significantly larger toast tables on 8.4?

From: "Alex Hunsaker" <badalex(at)gmail(dot)com>
To: "Philip Warner" <pjw(at)rhyme(dot)com(dot)au>
Cc: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "Gregory Stark" <stark(at)enterprisedb(dot)com>, "Stephen R(dot) van den Berg" <srb(at)cuci(dot)nl>, "Robert Haas" <robertmhaas(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Significantly larger toast tables on 8.4?
Date: 2009-01-04 05:15:09
Message-ID: 34d269d40901032115v7af81d92vd0fcee7fcec129f8@mail.gmail.com
Lists: pgsql-hackers

On Sat, Jan 3, 2009 at 21:56, Philip Warner <pjw(at)rhyme(dot)com(dot)au> wrote:
> Alex Hunsaker wrote:
>> For the record I just imported a production database that sits at
>> about 20G right now with *zero* size increase (rounding to the
>> nearest gigabyte). That's with basically the exact same schema, just
>> different data.
>>
> Guessing you don't have many plain text rows > 1M.

Probably not.

>> I don't suppose you could export some random rows and see if you see
>> any size increase for your data? My gut says you won't see an
>> increase.
>>
>
> Will see what I can do.

Actually, assuming they don't have any multibyte chars, you should just
be able to run something like the query below on your existing database.

-- show anything we save a megabyte on
select die_id, pg_size_pretty(savings)
from (select die_id,
             length(debug) - pg_column_size(debug) as savings
      from fooa) as foo
where savings > 1024*1024
order by savings desc;
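
If the data does have multibyte chars, swapping length() for
octet_length() should sidestep that assumption, since octet_length()
counts bytes rather than characters. Untested, but something like
(same table and column names as above):

-- same idea, but octet_length() counts bytes, so multibyte
-- characters don't skew the comparison against pg_column_size()
select die_id, pg_size_pretty(savings)
from (select die_id,
             octet_length(debug) - pg_column_size(debug) as savings
      from fooa) as foo
where savings > 1024*1024
order by savings desc;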
