Re: Finding uniques across a big join

From: Martijn van Oosterhout <kleptog(at)svana(dot)org>
To: "John D(dot) Burger" <john(at)mitre(dot)org>
Cc: PostgreSQL general <pgsql-general(at)postgresql(dot)org>
Subject: Re: Finding uniques across a big join
Date: 2005-11-30 06:55:55
Message-ID: 20051130065555.GE23691@svana.org
Lists: pgsql-general

On Tue, Nov 29, 2005 at 09:58:49PM -0500, John D. Burger wrote:
> I could use some help with the following:
>
> I have a database of geographic entities with attributes spread across
> several tables. I need to determine which entities are unique with
> respect to some of those attributes. I'm using the following query:

<snip>

If you include gazPlaceID in the output of the uniqs subquery, that would
avoid the second lookup, right? Whether it's much faster is the
question. So something like:

select p1.gazPlaceID
from gazPlaces as p1
join gazNamings as n1 using (gazPlaceID)
join gazContainers as c1 using (gazPlaceID)
group by p1.gazPlaceID, p1.featureType, n1.placeNameID, c1.containerID
having count(*) = 1
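
If you also need the other place attributes, you could join this back to
gazPlaces on gazPlaceID instead of re-matching on the attribute
combination. A rough sketch (I'm guessing at what you want back, so
adjust the select list as needed):

select p.*
from gazPlaces as p
where p.gazPlaceID in (
    select p1.gazPlaceID
    from gazPlaces as p1
    join gazNamings as n1 using (gazPlaceID)
    join gazContainers as c1 using (gazPlaceID)
    group by p1.gazPlaceID, p1.featureType, n1.placeNameID, c1.containerID
    having count(*) = 1
);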

Secondly, what does the plan look like? Is it materialising or sorting
at any stage?
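
If you haven't already, something like this will show it (it's just the
query above with EXPLAIN ANALYZE in front; plain EXPLAIN gives the plan
without actually running the query):

EXPLAIN ANALYZE
select p1.gazPlaceID
from gazPlaces as p1
join gazNamings as n1 using (gazPlaceID)
join gazContainers as c1 using (gazPlaceID)
group by p1.gazPlaceID, p1.featureType, n1.placeNameID, c1.containerID
having count(*) = 1;

Materialize and Sort nodes in the output are what to look for.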

Finally, what version of postgres?
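You can check with:

SELECT version();

which also shows the build details.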

Have a nice day,
--
Martijn van Oosterhout <kleptog(at)svana(dot)org> http://svana.org/kleptog/
> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> tool for doing 5% of the work and then sitting around waiting for someone
> else to do the other 95% so you can sue them.
