
Re: Optimizing a huge_table/tiny_table join

From: Mark Kirkwood <markir(at)paradise(dot)net(dot)nz>
To: kynn(at)panix(dot)com
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Optimizing a huge_table/tiny_table join
Date: 2006-05-25 23:27:09
Message-ID: 44763D4D.6090305@paradise.net.nz
Lists: pgsql-performance
Tom Lane wrote:
> <kynn(at)panix(dot)com> writes:
>>  Limit  (cost=19676.75..21327.99 rows=6000 width=84)
>>    ->  Hash Join  (cost=19676.75..1062244.81 rows=3788315 width=84)
>>          Hash Cond: (upper(("outer".id)::text) = upper(("inner".id)::text))
>>          ->  Seq Scan on huge_table h  (cost=0.00..51292.43 rows=2525543 width=46)
>>          ->  Hash  (cost=19676.00..19676.00 rows=300 width=38)
>>                ->  Seq Scan on tiny_table t  (cost=0.00..19676.00 rows=300 width=38)
> 
> Um, if huge_table is so much bigger than tiny_table, why are the cost
> estimates for seqscanning them only about 2.5x different?  There's
> something wacko about your statistics, methinks.
> 

This suggests that tiny_table is either very wide (i.e. has a lot of
columns compared to huge_table), or else contains thousands of dead tuples.
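
As a quick sanity check on those numbers (a back-of-envelope sketch,
assuming default cost settings of 1.0 per page plus cpu_tuple_cost =
0.01 per tuple): a seqscan cost of 19676.00 for 300 rows implies about
19676 - 300*0.01 ≈ 19673 pages, whereas huge_table's 51292.43 for
2525543 rows implies about 51292 - 25255 ≈ 26037 pages. 300 rows
occupying nearly as many pages as 2.5 million is a pretty strong hint
of bloat. Something like this would confirm it:

   SELECT relname, relpages, reltuples
   FROM pg_class
   WHERE relname IN ('huge_table', 'tiny_table');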

Do you want to post the descriptions for these tables?
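
In psql, for example:

   \d huge_table
   \d tiny_table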

If you are running 8.1.x, then the output of 'ANALYZE VERBOSE 
tiny_table' is of interest too.

If you are running a pre-8.1 release, then let's see 'VACUUM VERBOSE 
tiny_table'.

Note that after either of these, your plans may change (as ANALYZE 
will recompute the stats for tiny_table, and VACUUM may truncate 
now-empty pages of dead tuples at the end of the table)!
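
Then it would be worth re-running EXPLAIN to see whether the estimates
move. Judging from the plan above, the query was something like this
(a reconstruction, so adjust names and columns to match the real one):

   EXPLAIN
   SELECT *
   FROM huge_table h
   JOIN tiny_table t ON upper(h.id::text) = upper(t.id::text)
   LIMIT 6000;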


