Re: Slow query with a lot of data

From: Zoltan Boszormenyi <zb(at)cybertec(dot)at>
To: Moritz Onken <onken(at)houseofdesign(dot)de>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Slow query with a lot of data
Date: 2008-08-20 09:10:21
Message-ID: 48ABDF7D.3040102@cybertec.at
Lists: pgsql-performance

Moritz Onken wrote:
>
> Am 19.08.2008 um 17:23 schrieb Moritz Onken:
>
>>
>> Am 19.08.2008 um 16:49 schrieb Scott Carey:
>>
>>> What is your work_mem set to? The default?
>>>
>>> Try increasing it significantly if you have the RAM and seeing if
>>> that affects the explain plan. You may even want to set it to a
>>> number larger than the RAM you have just to see what happens. In
>>> all honesty, it may be faster to overflow to OS swap space than sort
>>> too many rows, but ONLY if it changes the plan to a significantly
>>> more efficient one.
>>>
>>> Simply type
>>>   SET work_mem = '500MB';
>>> before running your explain. Set it to even more RAM if you have
>>> the space for this experiment.
>>>
>>> In my experience the performance of aggregates on large tables is
>>> significantly affected by work_mem, and the optimizer will choose
>>> poorly without enough of it. It will rule out plans that might be
>>> fast enough even when overflowing to disk, in favor of colossal
>>> sorts (which likely also overflow to disk but take hours or days).
>>
>> Thanks for that advice, but the explain output is no different :-(
>>
>> moritz
>>
>> --
>
> Hi,
>
> I started the query with work_mem set to 3000MB. The explain output
> didn't change, but the query now runs much faster (about 10 times).
> Swap isn't being used. How can you explain that?
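
For reference, the experiment Scott describes looks like this in a psql
session (the query shown is a placeholder, not the one from this thread):

```sql
-- Session-local override; reverts when the connection closes.
SET work_mem = '3000MB';

-- Re-run the plan to see whether the sort/hash strategy changes,
-- and whether "external merge" sorts disappear from the output.
EXPLAIN ANALYZE SELECT ...;  -- your original query here
```

Note that even when the EXPLAIN plan shape is unchanged, a larger
work_mem can still speed up the same plan, e.g. by letting a sort or
hash stay in memory instead of spilling to temp files.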

$ cat /proc/sys/vm/overcommit_memory
0
$ less linux/Documentation/filesystems/proc.txt
...
overcommit_memory
-----------------

Controls overcommit of system memory, possibly allowing processes
to allocate (but not use) more memory than is actually available.

0 - Heuristic overcommit handling. Obvious overcommits of
address space are refused. Used for a typical system. It
ensures a seriously wild allocation fails while allowing
overcommit to reduce swap usage. root is allowed to
allocate slightly more memory in this mode. This is the
default.

1 - Always overcommit. Appropriate for some scientific
applications.

2 - Don't overcommit. The total address space commit
for the system is not permitted to exceed swap plus a
configurable percentage (default is 50) of physical RAM.
Depending on the percentage you use, in most situations
this means a process will not be killed while attempting
to use already-allocated memory but will receive errors
on memory allocation as appropriate.
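
Under mode 2, the commit limit works out to swap plus the configured
percentage of RAM. A minimal sketch of that arithmetic, using
hypothetical values (4 GiB swap, 8 GiB RAM, the default ratio of 50):

```shell
# Sketch: mode-2 commit limit = swap + (overcommit_ratio% of RAM).
# All figures below are hypothetical, in kB.
swap_kb=$((4 * 1024 * 1024))   # 4 GiB of swap
ram_kb=$((8 * 1024 * 1024))    # 8 GiB of physical RAM
ratio=50                       # /proc/sys/vm/overcommit_ratio default

commit_limit_kb=$((swap_kb + ram_kb * ratio / 100))
echo "CommitLimit: ${commit_limit_kb} kB"   # prints: CommitLimit: 8388608 kB
```

With these numbers the system refuses to commit more than 8 GiB of
address space in total, regardless of how much is actually touched.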
...

I guess you are running 64-bit, since on 32-bit a 3GB allocation would
already count as an "obvious overcommit" and be refused.
Or you're running 32-bit with overcommit_memory=1 on your system.

Best regards,
Zoltán Böszörményi

--
----------------------------------
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
http://www.postgresql.at/
