On Tue, Apr 17, 2007 at 04:13:36PM -0500, David Hinkle wrote:
> I have a table where I store email, the bodies are mostly kept in a
> toast table. The toast table is 940 Meg in size. The whole database
> is about 1.2 Gig in size. When I back the database up using pg_dump in
> custom output mode, I pipe the output into gzip. My backups are only
> about 600 meg in size. From this, I assume the toast table
> isn't getting compressed.
How are you measuring the toast table and database sizes? Have you
taken indexes and uncompressible data and metadata into account?
The database compresses only certain data, whereas when you pipe a
dump into gzip you get compression on the entire dump.
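To compare like with like, you could measure the heap, TOAST data, and indexes separately. A minimal sketch, assuming PostgreSQL 8.1 or later and a hypothetical table name "messages":

```sql
-- pg_relation_size counts only the main heap; pg_total_relation_size
-- also includes the TOAST table and all indexes, so the difference
-- shows how much space is outside the base table.
SELECT pg_size_pretty(pg_relation_size('messages'))       AS heap_size,
       pg_size_pretty(pg_total_relation_size('messages')) AS total_size;
```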
Some of the space might be taken up by dead rows and unused item
pointers. How often do you vacuum? What does "VACUUM VERBOSE"
show for that table?
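For example, assuming the table is named "messages" (hypothetical):

```sql
-- The "dead row versions" and "unused item pointers" lines in the
-- VACUUM VERBOSE output indicate space occupied by reclaimable
-- tuples rather than live data.
VACUUM VERBOSE messages;
```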
> Is there any way I can tell for sure if the messages from this column
> are being stored compressed?
You could look at a hex/ascii dump of the base and toast tables --
you might see runs of legible text but it should be obvious where
the data is compressed. See the TOAST section in the documentation
for more information about how and when data is compressed.
Note that "The TOAST code is triggered only when a row value to be
stored in a table is wider than BLCKSZ/4 bytes (normally 2 kB)."
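One way to check whether individual values are stored compressed is to compare on-disk size with logical length. A sketch, assuming a hypothetical text column "body" in a table "messages" (pg_column_size is available in PostgreSQL 8.1 and later):

```sql
-- pg_column_size reports the bytes actually stored (after any
-- compression); octet_length reports the uncompressed length.
-- A stored size well below the uncompressed length suggests the
-- value was compressed by TOAST.
SELECT octet_length(body)   AS uncompressed_bytes,
       pg_column_size(body) AS stored_bytes
FROM messages
ORDER BY octet_length(body) DESC
LIMIT 10;
```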
And I'm no expert at compression algorithms but it's possible that
the "fairly simple and very fast member of the LZ family of compression
techniques" isn't as space-efficient as the algorithm that gzip
uses (LZ77 according to its manual page). Maybe one of the developers
can comment.
> I know I can set the compression settings using the "ALTER TABLE
> ALTER SET STORAGE" syntax, but is there a way I can see what this
> value is currently set to?
You could query pg_attribute.attstorage:
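A minimal sketch, again assuming a hypothetical table name "messages":

```sql
-- attstorage codes: p = PLAIN, e = EXTERNAL, m = MAIN,
-- x = EXTENDED (the default, allowing both compression and
-- out-of-line storage).
SELECT attname, attstorage
FROM pg_attribute
WHERE attrelid = 'messages'::regclass
  AND attnum > 0
  AND NOT attisdropped;
```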
pgsql-performance by date