
Re: improving my query plan

From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Kevin Kempter" <kevink(at)consistentstate(dot)com>, <pgsql-performance(at)postgresql(dot)org>
Subject: Re: improving my query plan
Date: 2009-08-24 14:45:42
Message-ID: 4A9261460200002500029FE2@gw.wicourts.gov
Lists: pgsql-performance
Kevin Kempter <kevink(at)consistentstate(dot)com> wrote: 
 
> I have a simple query against two very large tables ( > 800 million
> rows in the url_hits_category_jt table and 9.2 million in the
> url_hits_klk1 table )
 
> I get a very high overall query cost:
 
>  Hash Join  (cost=296959.90..126526916.55 rows=441764338 width=8)
 
Well, the cost is an abstraction whose unit, if you haven't configured
it otherwise, is the estimated cost of reading one page sequentially
(seq_page_cost = 1.0).  This plan is taking advantage of memory to join
these two large tables and return 441 million result rows for roughly
the cost of reading 126 million pages sequentially.  That doesn't sound
like an unreasonable estimate to me.
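To make that arithmetic concrete, here is a small sketch (mine, not from the original thread) that translates the Hash Join node's numbers into rough equivalents, assuming PostgreSQL's defaults of seq_page_cost = 1.0 and an 8 kB block size:

```python
# Sketch: interpret planner cost numbers from the Hash Join node,
# assuming PostgreSQL defaults (seq_page_cost = 1.0, 8 kB pages).
SEQ_PAGE_COST = 1.0   # default cost of one sequential page fetch
PAGE_SIZE_KB = 8      # default PostgreSQL block size

startup_cost = 296_959.90     # cost before the first row is returned
total_cost = 126_526_916.55   # cost to return all rows
rows = 441_764_338            # estimated result rows

# Cost units expressed as equivalent sequential page reads.
equiv_pages = total_cost / SEQ_PAGE_COST
# The same figure as a volume of sequential I/O (illustrative only;
# cost units also include CPU terms, not just I/O).
equiv_gb = equiv_pages * PAGE_SIZE_KB / (1024 * 1024)
# Incremental cost per result row once the hash table is built.
cost_per_row = (total_cost - startup_cost) / rows

print(f"~{equiv_pages / 1e6:.0f}M sequential-page-read units")
print(f"(~{equiv_gb:.0f} GB of sequential I/O at 8 kB/page)")
print(f"~{cost_per_row:.3f} cost units per result row")
```

So a large absolute cost mostly reflects the sheer size of the result, not a bad plan: per row, the estimated cost is well under one page read.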
 
Did you think there should be a faster plan for this query, or is the
large number for the estimated cost worrying you?
 
-Kevin

