
Re: Performance of pg_dump on PGSQL 8.0

From: "John Vincent" <pgsql-performance(at)lusis(dot)org>
To: "Scott Marlowe" <smarlowe(at)g2switchworks(dot)com>
Cc: "PGSQL Performance" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Performance of pg_dump on PGSQL 8.0
Date: 2006-06-14 17:04:31
Message-ID: c841561b0606141004w62d584ake46d598ccb1a4b22@mail.gmail.com
Lists: pgsql-performance
On 6/14/06, Scott Marlowe <smarlowe(at)g2switchworks(dot)com> wrote:
>
> On Wed, 2006-06-14 at 09:47, John E. Vincent wrote:
> > -- this is the third time I've tried sending this and I never saw it get
> > through to the list. Sorry if multiple copies show up.
> >
> > Hi all,
>
> BUNCHES SNIPPED
>
> > work_mem = 1048576 ( I know this is high but you should see some of our
> > sorts and aggregates)
>
> Ummm.  That's REALLY high.  You might want to consider lowering the
> global value here, and then crank it up on a case by case basis, like
> during nighttime report generation.  Just one or two queries could
> theoretically run your machine out of memory right now.  Just put a "set
> work_mem=1000000" in your script before the big query runs.
I know it's high, but that's what we need for some of our queries. Our ETL tool
(Informatica) and BI tool (Actuate) won't let us set work_mem as part of their
jobs, so the global setting is the only way to give them what they need. We
also have some really nasty queries that will be fixed in our new server.
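
(As a sketch of one workaround, assuming the tools log in as a dedicated role: PostgreSQL can attach the setting to that role so its sessions inherit it at connect time, without the tool ever issuing SET. The role name below is made up.)

```sql
-- Sketch: per-role work_mem so ETL/BI sessions inherit it at login.
-- "informatica_etl" is a hypothetical role name; on 8.0 work_mem is in KB,
-- so 1048576 is ~1 GB per sort/hash operation.
ALTER USER informatica_etl SET work_mem = 1048576;
-- Everyone else keeps the (lower) default from postgresql.conf.
```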

E.g. we have a table called loan_account_agg_fact with 200+ million rows that
contains every possible late status for a customer account (i.e. 1 day late,
2 days late, 3 days late), so it gets inserts for new customers and updates
for existing records as part of our warehouse load. Part of the new layout is
combining late ranges, so instead of a row per number of days late we have a
row per range of days (i.e. 1-15, 16-30....). Even with work_mem that large,
the load of that loan_account_agg_fact table creates over 3GB of temp tables!
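
(For illustration, the range consolidation might look something like the following; the column names are made up, not our real schema.)

```sql
-- Hypothetical rollup from per-day late statuses to late ranges.
SELECT account_id,
       CASE
         WHEN days_late BETWEEN 1  AND 15 THEN '1-15'
         WHEN days_late BETWEEN 16 AND 30 THEN '16-30'
         ELSE '31+'
       END AS late_range,
       sum(balance) AS balance
  FROM loan_account_agg_fact
 GROUP BY account_id, late_range;
```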


> That's exactly what we do.  We just do a normal backup, and have a
> script that gzips anything in the backup directory that doesn't end in
> .gz...  If you've got space to burn, as you say, then use it at least a
> few days to see how it affects backup speeds.
>
> Seeing as how you're CPU bound, most likely the problem is just the
> compressed backup.
>

I'm starting to think the same thing. I'll see how the COPY of the single
largest table that I'm running right now does, and make a judgement based on
that.

-- 
John E. Vincent

