Re: CPU usage goes to 100%, query seems to ran forever

From: "Andrus" <eetasoft(at)online(dot)ee>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: CPU usage goes to 100%, query seems to ran forever
Date: 2006-04-28 16:19:33
Message-ID: e2tfc4$27b8$
Lists: pgsql-performance

> Something seems to have truncated your EXPLAIN output, but anyway we
> can see where the problem is:

I copied it from pgAdmin at 640x480 screen resolution on XP.
Maybe a pgAdmin bug?

> The planner is expecting to get one row from "dok" passing the filter
> condition, and hence chooses a plan that is suitable for a small number
> of rows ... but in reality there are 7670 rows matching the filter
> condition, and that's what blows the runtime out of the water. (Most of
> the runtime is actually going into 7670 repeated scans of "rid", which
> wouldn't have happened with another plan type.)

I added index

CREATE INDEX rid_toode_idx ON firma2.rid USING btree (toode);

and the query started running fast!

> So you need to see about getting that estimate to be more accurate.
> First thing is to make sure that "dok" has been ANALYZEd --- just do it
> by hand.

As I wrote, I have autovacuum running. Isn't this sufficient?
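For reference, running it by hand as suggested is a one-liner (assuming the firma2.dok table from earlier in the thread):

```sql
-- Manually refresh the planner's statistics for the dok table
ANALYZE firma2.dok;
```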

> I can see at least three things you are doing that are
> unnecessarily destroying the planner's ability to estimate the number of
> matching rows:
> dok.laonr='1'::float8 and
> Since laonr apparently isn't float8, this forces a runtime type
> conversion as well as interfering with statistics use. (The planner
> will have ANALYZE stats about dok.laonr, but the connection to
> dok.laonr::float8 escapes it.) Just write the constant with quotes
> and no type coercion.

I re-wrote it as


This query is automatically generated by VFP, and the ODBC parameter substitution adds those type conversions.
VFP has only a float8 numeric type, which probably forces the ODBC driver to convert numbers to float8.
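For illustration, the difference between the generated predicate and the plain-quoted form the planner can estimate (column names as in the thread) is:

```sql
-- Generated by the ODBC layer: the float8 cast defeats column statistics
SELECT * FROM firma2.dok WHERE laonr = '1'::float8;

-- Suggested form: plain quoted constant, no coercion, so ANALYZE stats apply
SELECT * FROM firma2.dok WHERE laonr = '1';
```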

> POSITION( dok.doktyyp IN 'OSIDVGYKIF')!=0 AND
> This is completely unestimatable given the available statistics, and it
> doesn't look to me like it is all that great a semantic representation
> either. Perhaps the query that's really meant here is "dok.doktypp IN
> ('O','S','I', ...)"? If so, you should say what you mean, not play
> games with converting the query into some strange string operation.

'OSID ...' is a string parameter substituted into the SELECT template.
Changing this to IN ('O', 'S', ...) would require re-writing parts of the code, and I'm
not sure it would make the query faster.
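For what it's worth, the suggested rewrite of the quoted condition, expanding the characters of 'OSIDVGYKIF' from the original predicate, would look like this:

```sql
-- Instead of the unestimatable string operation:
--   POSITION(dok.doktyyp IN 'OSIDVGYKIF') != 0
-- an IN list the planner can estimate from doktyyp statistics
-- (the duplicate 'I' is dropped):
SELECT * FROM firma2.dok
WHERE doktyyp IN ('O', 'S', 'I', 'D', 'V', 'G', 'Y', 'K', 'F');
```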

> AND dok.kuupaev||dok.kellaaeg BETWEEN '2006-04-08' AND '2006-04-2723 59'

> This is another case where the planner is not going to have any ability
> to make a useful estimate, and it's because you are using a crummy
> representation of your data. You should merge those two columns into
> one timestamp column and just do a simple BETWEEN test.
> By and large, unnatural representations of data that you use in WHERE
> clauses are going to cost you big-time in SQL queries. It's worth
> taking time up front to design a clean table schema, and taking time
> to revise it when requirements change.

The date range test in the other part of the WHERE clause,

dok.kuupaev BETWEEN ....

is optimizable.

The AND dok.kuupaev||dok.kellaaeg condition adds a time range test on top of the
date range test. There are fewer than a few thousand documents per day.

Spending time re-engineering the database and the deployed application does not
seem reasonable in this case.
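If re-engineering ever does become an option, the single-timestamp representation suggested above would reduce the range test to something like this (a sketch; the "ts" column name is hypothetical, and the bounds correspond to the dates in the quoted predicate):

```sql
-- With one merged timestamp column instead of kuupaev (date) plus
-- kellaaeg (time), the range test is index- and statistics-friendly:
SELECT * FROM firma2.dok
WHERE ts BETWEEN '2006-04-08 00:00' AND '2006-04-27 23:59';
```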

