Re: Query performance problem

From: Richard Huxton <dev(at)archonet(dot)com>
To: weberp(at)paradise(dot)net(dot)nz
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Query performance problem
Date: 2005-03-17 10:03:54
Message-ID: 4239560A.50707@archonet.com
Lists: pgsql-general

Phil Daintree wrote:
>
> There are two tables used in the SQL we need to optimise .....
>
> CREATE TABLE chartdetails (
> CREATE TABLE gltrans (

> There is a chartdetails record for every period for every general ledger
> account, so if there are 5 years x 12 periods (months) and 200 general
> ledger accounts, this table will be 12,000 records.
>
> There is a gltrans record for every side of a journal entry. This can get to
> be quite a significant table - easily more than 200,000 per annum - depending
> on the size of the business obviously.
>
>
> Can anyone tell me why the following code chokes ... literally - this
> works almost invisibly under MySQL - PostgreSQL takes more than an hour
> even on a very small 30-record database.

There's nothing obviously wrong in the code below. You could probably
do it all with one (more complex) SQL query, or maybe two, but it
shouldn't be too bad as it stands.

The code doesn't seem to use gltrans at all though (unless I'm
overlooking it). Has part of it been left out?

Anyway, the first step will be to get some accurate figures. Can you
turn on statement logging in postgresql.conf, along with timestamps
(unless you're logging to syslogd, in which case you get them
automatically)? That way we can see exactly where the time is being
spent.
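
On 8.0 that means something like the following in postgresql.conf (older
releases spell it slightly differently - log_statement = true and
log_timestamp = true instead), followed by a "pg_ctl reload":

   log_statement = 'all'      # log every statement
   log_duration = true        # and how long each one took
   log_line_prefix = '%t '    # timestamp each log line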

--
Richard Huxton
Archonet Ltd
