Re: Optimizing a huge_table/tiny_table join

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: kynn(at)panix(dot)com
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Optimizing a huge_table/tiny_table join
Date: 2006-05-25 01:41:59
Message-ID: 7082.1148521319@sss.pgh.pa.us
Lists: pgsql-performance

<kynn(at)panix(dot)com> writes:
> Limit  (cost=19676.75..21327.99 rows=6000 width=84)
>   ->  Hash Join  (cost=19676.75..1062244.81 rows=3788315 width=84)
>         Hash Cond: (upper(("outer".id)::text) = upper(("inner".id)::text))
>         ->  Seq Scan on huge_table h  (cost=0.00..51292.43 rows=2525543 width=46)
>         ->  Hash  (cost=19676.00..19676.00 rows=300 width=38)
>               ->  Seq Scan on tiny_table t  (cost=0.00..19676.00 rows=300 width=38)

Um, if huge_table is so much bigger than tiny_table, why are the cost
estimates for seqscanning them only about 2.5x different? There's
something wacko about your statistics, methinks.

regards, tom lane
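[A sketch of how one might follow up on this diagnosis, not part of the original exchange. With the default cost settings of that era (1.0 per page, 0.01 per tuple), a seqscan estimate of 19676 for a 300-row table implies roughly 19,600 heap pages, which suggests heavy dead-tuple bloat rather than merely stale statistics. Table and column names below are taken from the plan above; the exact query shape is guessed.]

-- What the planner currently believes about the two tables:
SELECT relname, relpages, reltuples
FROM pg_class
WHERE relname IN ('huge_table', 'tiny_table');

-- Reclaim the bloated space in tiny_table and refresh statistics:
VACUUM FULL ANALYZE tiny_table;
ANALYZE huge_table;

-- Re-check the plan (join condition from the EXPLAIN output above):
EXPLAIN
SELECT *
FROM huge_table h
JOIN tiny_table t ON upper(h.id::text) = upper(t.id::text);

After the vacuum, the Hash node's input cost should drop to a few pages, and the seqscan cost ratio between the two tables should roughly track their actual size ratio.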
