Re: PostgreSQL 8.4 performance tuning questions

From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Scott Carey" <scott(at)richrelevance(dot)com>, "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: "Matthew Wakeling" <matthew(at)flymine(dot)org>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: PostgreSQL 8.4 performance tuning questions
Date: 2009-07-30 20:58:27
Message-ID: 4A71C3230200002500029162@gw.wicourts.gov
Lists: pgsql-performance

Scott Carey <scott(at)richrelevance(dot)com> wrote:

> Now, what needs to be known with the pg_dump is not just how fast
> compression can go (assuming it's gzip) but also what the duty cycle
> time of the compression is. If it is single threaded, there is all
> the network and disk time to cut out of this, as well as all the CPU
> time that pg_dump does without compression.

Well, I established a couple messages back on this thread that pg_dump
piped to psql to a database on the same machine writes the 70GB
database to disk in two hours, while pg_dump to a custom format file
at default compression on the same machine writes the 50GB file in six
hours. No network involved, less disk space written. I'll try it
tonight at -Z0.
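
For concreteness, the two runs being compared look roughly like this
(database names and paths are placeholders, not the exact commands I ran):

    # plain-format dump piped straight into a database on the same box (~2 hours)
    pg_dump bigdb | psql -d bigdb_copy

    # custom-format dump at the default compression level (~6 hours)
    pg_dump -Fc bigdb > bigdb.dump

    # tonight's test: custom format with compression disabled
    pg_dump -Fc -Z0 bigdb > bigdb-z0.dump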

One thing I've been wondering about is what, exactly, is compressed in
custom format. Is it like a .tar.gz file, where the compression is a
layer over the top, or are individual entries compressed? If the
latter, what's the overhead on setting up each compression stream? Is
there some minimum size before that kicks in? (I know, I should go
check the code myself. Maybe in a bit. Of course, if someone already
knows, it would be quicker....)
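
Somewhat related: a rough way to see how much of that six hours could be
pure compression cost would be to time single-threaded gzip by itself on
this hardware (assuming a plain-format dump file is available; the filename
here is made up):

    # raw gzip throughput at its default level (6), output discarded
    time gzip -c bigdb-plain.sql > /dev/null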

-Kevin
