Re: Optimising a query

From: Gregory Stark <stark(at)enterprisedb(dot)com>
To: "Richard Huxton" <dev(at)archonet(dot)com>
Cc: "Paul Lambert" <paul(dot)lambert(at)reynolds(dot)com(dot)au>, <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Optimising a query
Date: 2007-12-19 09:47:24
Message-ID: 87wsrbhv83.fsf@oxford.xeocode.com
Lists: pgsql-performance

"Richard Huxton" <dev(at)archonet(dot)com> writes:

> Paul Lambert wrote:
>
>> " -> Sort (cost=30197.98..30714.85 rows=206748 width=16) (actual time=5949.691..7018.931 rows=206748 loops=1)"
>> " Sort Key: dealer_id, year_id, subledger_id, account_id"
>> " Sort Method: external merge Disk: 8880kB"

> Before that though, try issuing a "SET work_mem = '9MB'" before running your
> query. If that doesn't change the plan step up gradually. You should be able to
> get the sort stage to happen in RAM rather than on disk (see "Sort Method"
> above).

FWIW you'll probably need more than that; the 8880kB on-disk figure understates
what the same sort needs in RAM. Try something more like 20MB.

Also, note you can change this with SET for just this connection, or even just
this one query, and then reset it to the normal value afterwards (or use SET
LOCAL, which reverts automatically at the end of the transaction). You don't
have to change it in the config file and restart the whole server.
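A sketch of the per-query approach (the 20MB figure is just the value suggested above):

```sql
BEGIN;
-- SET LOCAL only lasts until COMMIT/ROLLBACK; after that the
-- session falls back to the configured work_mem.
SET LOCAL work_mem = '20MB';
-- run the expensive query here, e.g. under EXPLAIN ANALYZE to
-- check whether "Sort Method" now reports an in-memory sort
COMMIT;
```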

Also, try replacing the DISTINCT with GROUP BY. The code path for DISTINCT
unfortunately needs a bit of cleaning up and isn't exactly equivalent to GROUP
BY. In particular it doesn't support hash aggregates which, if your work_mem
is large enough, might work for you here.
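Something like the following, assuming the query is selecting DISTINCT over the
four sort-key columns shown in the plan (the table name here is a placeholder,
not from your query):

```sql
-- Instead of:
SELECT DISTINCT dealer_id, year_id, subledger_id, account_id
FROM some_table;   -- hypothetical table name

-- try the equivalent GROUP BY form, which can use a hash aggregate:
SELECT dealer_id, year_id, subledger_id, account_id
FROM some_table
GROUP BY dealer_id, year_id, subledger_id, account_id;
```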

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training!
