From: Moritz Onken <onken(at)houseofdesign(dot)de>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Slow query with a lot of data
Date: 2008-08-20 07:54:13
Message-ID: 6F57A01F-F83F-477F-B86B-617D0F843FB1@houseofdesign.de
Lists: pgsql-performance

On 19.08.2008, at 17:23, Moritz Onken wrote:
>
> On 19.08.2008, at 16:49, Scott Carey wrote:
>
>> What is your work_mem set to? The default?
>>
>> Try increasing it significantly if you have the RAM and seeing if
>> that affects the explain plan. You may even want to set it to a
>> number larger than the RAM you have just to see what happens. In
>> all honesty, it may be faster to overflow to OS swap space than
>> sort too many rows, but ONLY if it changes the plan to a
>> significantly more efficient one.
>>
>> Simply type
>>   SET work_mem = '500MB';
>> before running your explain. Set it to even more RAM if you have
>> the space for this experiment.
>>
>> In my experience the performance of aggregates on large tables is
>> significantly affected by work_mem, and the optimizer will choose
>> poorly without enough of it. It will rule out plans that may be
>> fast enough when overflowing to disk in favor of colossal sorts
>> (which likely also overflow to disk but take hours or days).
>
> Thanks for that advice, but the explain output is not different :-(
>
> moritz
>
> --
Hi,
I ran the query with work_mem set to 3000MB. The explain output
didn't change, but the query now runs much faster (about 10 times).
Swap isn't used. How can you explain that?
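
One way to see what the extra work_mem actually changed is EXPLAIN
ANALYZE: plain EXPLAIN only shows the plan (which stayed the same),
while EXPLAIN ANALYZE also reports the sort method and whether it
spilled to disk. A minimal sketch, assuming a hypothetical table
"events" (the table and column names are illustrative, not from the
thread's actual query):

```sql
-- With a small work_mem the sort typically spills to temp files:
SET work_mem = '1MB';
EXPLAIN ANALYZE
SELECT user_id, count(*) FROM events GROUP BY user_id ORDER BY user_id;
-- Look for:  Sort Method: external merge  Disk: ...kB

-- With a large work_mem the same plan can sort entirely in memory:
SET work_mem = '3000MB';
EXPLAIN ANALYZE
SELECT user_id, count(*) FROM events GROUP BY user_id ORDER BY user_id;
-- Look for:  Sort Method: quicksort  Memory: ...kB
```

That would be consistent with what you observed: the plan tree is
identical, but an in-memory quicksort replaces an on-disk external
merge, which easily accounts for an order-of-magnitude speedup
without any swap usage.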
moritz