From: "Ron Mayer" <ron(at)intervideo(dot)com>
To: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "Tatsuo Ishii" <t-ishii(at)sra(dot)co(dot)jp>
Cc: <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: optimizer cost calculation problem
Date: 2003-04-01 02:06:47
Message-ID: POEDIPIPKGJJLDNIEMBECEAFCIAA.ron@intervideo.com
Lists: pgsql-hackers
Tom wrote:
>
>I find it really really hard to believe that it's wise to run with
>sort_mem exceeding 2 gig ;-). Does that installation have so much
>RAM that it can afford to run multiple many-Gb sorts concurrently?
I don't do 2 gig... but I found 0.3 gig helped on a not-too-large system.
In the nightly load of a data-warehousing application I have
a number of places where I do:

    set sort_mem = 300000;   -- sort_mem is in KB, so this is roughly 0.3 gig
    create table new_whatevers as select distinct whatever from import_table;
    set sort_mem = 10000;

and

    set sort_mem = 100000;
    select count(*), b from a group by b;
    set sort_mem = 10000;

when the optimizer likes hash aggregates.
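
(A quick way to sanity-check whether a given sort_mem setting actually
buys the hash-aggregate plan is EXPLAIN. A minimal sketch, assuming a
hypothetical log table "hits" with a "url" column:)

    set sort_mem = 300000;   -- same large per-sort setting as above
    explain select count(*), url from hits group by url;
    -- if the hash table fits in sort_mem, the plan should show a
    -- HashAggregate node rather than a sort-based Group
    set sort_mem = 10000;    -- drop back down for the rest of the session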
It significantly reduces the nightly processing time of my log-file
analysis database (from 8 hours down to 6). If my modest system
benefited from 1/3 gig, it wouldn't surprise me if a large system
benefited from 2 gig.
If more info's useful, I'd be happy to provide some.
Ron