Re: Rewriting DISTINCT and losing performance

From: Richard Huxton <dev(at)archonet(dot)com>
To: pgsql-performance(at)nullmx(dot)com
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Rewriting DISTINCT and losing performance
Date: 2007-05-21 17:34:11
Message-ID: 4651D813.5050100@archonet.com
Lists: pgsql-performance
Chuck D. wrote:
> On Monday 21 May 2007 03:14, Josh Berkus wrote:
>> Chuck,
>>
>> Can we see the plan?
>>
>> --Josh
>>
> 
> Sorry Josh,  I guess I could have just used EXPLAIN instead of EXPLAIN 
> ANALYZE.
> 
> # explain
> SELECT   country_id, country_name
> FROM     geo.country
> WHERE country_id IN
>  (select country_id FROM geo.city)
> ;
>                              QUERY PLAN
> --------------------------------------------------------------------
>  Nested Loop IN Join  (cost=0.00..1252.60 rows=155 width=15)
>    Join Filter: (country.country_id = city.country_id)
>    ->  Seq Scan on country  (cost=0.00..6.44 rows=244 width=15)
>    ->  Seq Scan on city  (cost=0.00..43409.12 rows=2122712 width=2)

The only thing I can think of is that the CLUSTERing on city.country_id 
makes the system think it'll be cheaper to seq-scan the whole table.
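If you want to test that theory, one quick (session-local) experiment is to discourage sequential scans and re-run the EXPLAIN, to see what plan the planner falls back to and at what estimated cost. This is only a diagnostic sketch against your `geo` schema, not something to leave set in production:

```sql
-- Session-local experiment: penalise seq-scans so the planner
-- shows us its next-best plan (presumably an index scan on city).
SET enable_seqscan = off;

EXPLAIN
SELECT   country_id, country_name
FROM     geo.country
WHERE    country_id IN
         (SELECT country_id FROM geo.city);

-- Put the setting back for the rest of the session.
RESET enable_seqscan;
```

If the index-scan plan has a much higher estimated cost but actually runs faster, that would suggest the cost settings (random_page_cost, effective_cache_size) need adjusting rather than the query itself.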

I take it you have got 2 million rows in "city"?
-- 
   Richard Huxton
   Archonet Ltd
