From: Rodrigo De León <rdeleonp(at)gmail(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Optimising SELECT on a table with one million rows
Date: 2007-07-30 17:19:14
Message-ID: 1185815954.742564.119430@d55g2000hsg.googlegroups.com
Lists: pgsql-general
On Jul 30, 12:01 pm, cultural_sublimat(dot)(dot)(dot)(at)yahoo(dot)com (Cultural Sublimation) wrote:
> Hash Join  (cost=28.50..21889.09 rows=988 width=14) (actual time=3.674..1144.779 rows=1000 loops=1)
>   Hash Cond: ((comments.comment_author)::integer = (users.user_id)::integer)
>   ->  Seq Scan on comments  (cost=0.00..21847.00 rows=988 width=8) (actual time=0.185..1136.067 rows=1000 loops=1)
>         Filter: ((comment_story)::integer = 100)
>   ->  Hash  (cost=16.00..16.00 rows=1000 width=14) (actual time=3.425..3.425 rows=1000 loops=1)
>         ->  Seq Scan on users  (cost=0.00..16.00 rows=1000 width=14) (actual time=0.068..1.845 rows=1000 loops=1)
> Total runtime: 1146.424 ms
Create an index on the comments.comment_story column; the plan above spends nearly all of its time in the sequential scan on comments filtering on that column.
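A minimal sketch of that suggestion (the table and column names come from the plan above; the index name and the follow-up ANALYZE are my own choices, not from the original post):

```sql
-- Index the filter column so the planner can use an index scan
-- instead of scanning all one million rows of comments:
CREATE INDEX comments_comment_story_idx ON comments (comment_story);

-- Refresh planner statistics so the new index is costed accurately:
ANALYZE comments;
```

After this, re-running EXPLAIN ANALYZE on the same query should show an index scan on comments in place of the sequential scan.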
Next message: Nis Jørgensen | 2007-07-30 17:25:22 | Re: Optimising SELECT on a table with one million rows
Previous message: Richard Huxton | 2007-07-30 17:16:54 | Re: Optimising SELECT on a table with one million rows