Re: Very slow query

From: Josh Berkus <josh(at)agliodbs(dot)com>
To: Ron Johnson <ron(dot)l(dot)johnson(at)cox(dot)net>, PgSQL Novice ML <pgsql-novice(at)postgresql(dot)org>
Subject: Re: Very slow query
Date: 2002-07-30 22:59:56
Message-ID: 200207301559.56037.josh@agliodbs.com
Lists: pgsql-novice

Ron,

> A couple of months ago, I asked a similar question, when I saw
> that a COUNT(*) was scanning, even though it had a perfect
> supporting index. This is the paraphrased answer:
> The aggregate functions module is a complicated black box that
> the developers are scared to look at and break. Besides, I
> hardly ever use COUNT(*), so you don't need it either.

That's the price we pay for having the ability to create custom aggregates.
Since it's possible for me in Postgres to make an aggregate called
comma_cat(varchar) that concatenates a varchar column into a comma-delimited
list -- for example -- the parser cannot optimize for what goes on inside the
aggregate.
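
For illustration, here's a minimal sketch of such an aggregate (the function
and column names are just made up for this example):

-- State function: appends each incoming value to a growing
-- comma-separated string held as the aggregate's state.
CREATE FUNCTION comma_cat_state(varchar, varchar) RETURNS varchar AS '
    SELECT CASE
        WHEN $2 IS NULL THEN $1             -- skip NULL inputs
        WHEN $1 = ''''  THEN $2             -- first value: no leading comma
        ELSE $1 || '','' || $2
    END;
' LANGUAGE 'sql';

CREATE AGGREGATE comma_cat (
    BASETYPE = varchar,         -- input type
    SFUNC    = comma_cat_state,
    STYPE    = varchar,         -- state type
    INITCOND = ''               -- start from an empty string
);

-- e.g.  SELECT comma_cat(lastname) FROM employees;

Since the planner has no idea what comma_cat_state does internally, it has to
treat the aggregate as an opaque function run over every row it's fed.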

The hackers list has discussed the possibility of writing parser optimization
just for the built-in aggregates for which such optimization is reasonable (COUNT,
MIN, MAX). However, nobody who cares enough about the issue has stepped up
to the plate to offer their code. And it would require *extensive* testing.
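
In the meantime, the standard workaround for MIN/MAX is to rewrite the query
by hand so it can walk an index (table and column names below are invented):

-- Forces a full scan, even with an index on phonenum:
SELECT max(phonenum) FROM calls;

-- Same answer when phonenum has no NULLs, but able to use an index:
SELECT phonenum FROM calls
 ORDER BY phonenum DESC LIMIT 1;

There is no comparable rewrite for COUNT(*), unfortunately.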

It's a priority thing. Sometimes I'm annoyed that MAX() is slow in Postgres.
More often I'm annoyed that MSSQL does not support custom aggregates.

And this doesn't affect my advice to Chad: indexing phonenum_substr *will*
speed up the query, since it speeds up the JOIN portion.
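
Something along these lines, assuming phonenum_substr is a plain column and
guessing at the table name:

CREATE INDEX calls_phonenum_substr_idx ON calls (phonenum_substr);
VACUUM ANALYZE calls;   -- refresh statistics so the planner considers it

If phonenum_substr is really an expression computed in the query itself, a
functional index on that expression is the thing to look at instead.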

--
-Josh Berkus
Aglio Database Solutions
San Francisco
