From: Moritz Onken <onken(at)houseofdesign(dot)de>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Slow query with a lot of data
Date: 2008-08-19 15:23:42
Message-ID: 979A6EAC-24DC-4D92-A4B6-F8702221C5EC@houseofdesign.de
Lists: pgsql-performance
On 19.08.2008, at 16:49, Scott Carey wrote:
> What is your work_mem set to? The default?
>
> Try increasing it significantly if you have the RAM and seeing if
> that affects the explain plan. You may even want to set it to a
> number larger than the RAM you have just to see what happens. In
> all honesty, it may be faster to overflow to OS swap space than sort
> too many rows, but ONLY if it changes the plan to a significantly
> more efficient one.
>
> Simply type
> SET work_mem = '500MB';
> before running your explain. Set it to even more RAM if you have
> the space for this experiment.
>
> In my experience the performance of aggregates on large tables is
> significantly affected by work_mem, and the optimizer will choose
> poorly without enough of it. It will rule out plans that may be
> fast enough when overflowing to disk, in preference to colossal-sized
> sorts (which likely also overflow to disk but take hours or days).
Thanks for that advice, but the explain plan is not different :-(
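
For reference, this is the kind of session-level experiment your suggestion
describes (just a sketch; the table and column names below are placeholders,
not my actual query), and the plan it produces here is unchanged:

    -- raise the per-sort/hash memory for this session only
    SET work_mem = '500MB';

    -- re-run the plan with the larger setting; "events" and "user_id"
    -- stand in for the real table and column
    EXPLAIN SELECT user_id, count(*)
      FROM events
     GROUP BY user_id;

    -- put the setting back for the rest of the session
    RESET work_mem;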
moritz