Re: Tuple storage overhead

From: Szymon Guz <mabewlun(at)gmail(dot)com>
To: PostgreSQL general <pgsql-general(at)postgresql(dot)org>
Subject: Re: Tuple storage overhead
Date: 2010-04-16 09:59:38
Message-ID: q2pe4edc9361004160259gb202f2d0wd226357ea8d19d50@mail.gmail.com
Lists: pgsql-general

2010/4/16 Peter Bex <Peter(dot)Bex(at)xs4all(dot)nl>

> Hi all,
>
> I have a table with three columns: one integer and two doubles.
> There are two indexes defined (one on the integer and one on one
> of the doubles). This table stores 700000 records, which take up
> 30 Mb according to pg_relation_size(), and the total relation size
> is 66 Mb.
>
> I expected the disk space usage to be halved by changing the doubles
> to floats, but it only dropped by 5 MB! (I tried various approaches,
> including dumping and restoring to make sure there was no uncollected
> garbage lying around)
>
> Someone on IRC told me the per-tuple storage overhead is pretty big,
> and asked me to create a similar table containing only integers:
>
> db=# create table testing ( x integer );
> db=# INSERT INTO testing (x) VALUES (generate_series(1, 700000));
> dacolt_development=# SELECT
> pg_size_pretty(pg_total_relation_size('testing'));
> pg_size_pretty
> ----------------
> 24 MB
> (1 row)
> db=# SELECT pg_size_pretty(pg_relation_size('testing'));
> pg_size_pretty
> ----------------
> 24 MB
> db=# CREATE INDEX testing_1 ON testing (x);
> db=# CREATE INDEX testing_2 ON testing (x);
> db=# SELECT pg_size_pretty(pg_relation_size('testing'));
> pg_size_pretty
> ----------------
> 24 MB
> (1 row)
> db=# SELECT pg_size_pretty(pg_total_relation_size('testing'));
> pg_size_pretty
> ----------------
> 54 MB
> (1 row)
>
> Is there a way to reduce the per-tuple storage overhead?
>
> The reason I'm asking is that I have tons of tables like this,
> and some data sets are much bigger than this. In a relatively
> simple testcase I'm importing data from text files which are
> 5.7 Gb in total, and this causes the db size to grow to 34Gb.
>
> This size is just one small sample of many such datasets that I
> need to import, so disk size is really an important factor.
>
>
Pages are not necessarily filled completely from the start, because completely
full pages can hurt the performance of later updates (index pages in
particular leave some free space by default). If you want the pages packed
more tightly, you can set a storage parameter such as fillfactor on the table
and its indexes. Be aware, though, that updates can then become much slower,
so changing these parameters usually isn't a good idea. If you will never
update those tables, it could work.
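
For illustration, fillfactor can be set at creation time, in the same style
as your test case (the exact numbers are just a sketch; note that the heap
default is already 100, while B-tree indexes default to 90, so the index
setting is where this is most likely to matter):

db=# CREATE TABLE testing (x integer) WITH (fillfactor = 100);
db=# CREATE INDEX testing_1 ON testing (x) WITH (fillfactor = 100);
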

http://www.postgresql.org/docs/8.4/interactive/sql-createtable.html
http://www.postgresql.org/docs/8.4/interactive/sql-createtable.html#SQL-CREATETABLE-STORAGE-PARAMETERS
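
Incidentally, the 24 MB figure for the integer-only table lines up almost
exactly with the documented per-tuple overhead: each heap tuple carries a
23-byte header (padded to the 8-byte MAXALIGN boundary) plus a 4-byte line
pointer in the page. A back-of-the-envelope sketch of that arithmetic (in
Python purely for illustration; the padding model is assumed, not measured):

```python
# Rough per-row size estimate for a PostgreSQL heap table.
# The 23-byte tuple header and 4-byte line pointer are documented values;
# the alignment padding below is a simplified model, not an exact one.

def maxalign(n, a=8):
    """Round n up to the next multiple of a (MAXALIGN padding)."""
    return (n + a - 1) // a * a

TUPLE_HEADER = 23   # HeapTupleHeaderData, padded before the user data
LINE_POINTER = 4    # ItemIdData entry in the page's line-pointer array

def per_row_bytes(data_bytes):
    return maxalign(TUPLE_HEADER) + LINE_POINTER + maxalign(data_bytes)

# One 4-byte integer column:
print(per_row_bytes(4))             # 36 bytes per row (model)
# Observed in the thread: 24 MB for 700000 rows
print(24 * 1024 * 1024 / 700000)    # ~35.95 bytes per row
```

So of the roughly 36 bytes per row, only 4 are the integer itself, which is
why shrinking the data columns (double to real) barely moves the total.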

regards
Szymon Guz
