Re: Why LIMIT after scanning the table?

From: Stephan Szabo <sszabo(at)megazone23(dot)bigpanda(dot)com>
To: "Jim C(dot) Nasby" <jim(at)nasby(dot)net>
Cc: <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Why LIMIT after scanning the table?
Date: 2003-04-30 14:19:41
Message-ID: 20030430071605.X81999-100000@megazone23.bigpanda.com
Lists: pgsql-performance

On Wed, 30 Apr 2003, Jim C. Nasby wrote:

> I'm doing something where I just need to know if we have more than 100
> rows in a table. Not wanting to scan the whole table, I thought I'd get
> cute...
>
> explain select count(*)
> FROM (SELECT * FROM email_rank WHERE project_id = :ProjectID LIMIT 100) AS t1;
> QUERY PLAN
> -------------------------------------------------------------------------------------
>  Aggregate  (cost=111.32..111.32 rows=1 width=48)
>    ->  Subquery Scan t1  (cost=0.00..111.07 rows=100 width=48)
>          ->  Limit  (cost=0.00..111.07 rows=100 width=48)
>                ->  Seq Scan on email_rank  (cost=0.00..76017.40 rows=68439 width=48)
>                      Filter: (project_id = 24)
>
> The idea is that the inner-most query would only read the first 100 rows
> it finds, then stop. Instead, if explain is to be believed (and speed
> testing seems to indicate it's accurate), we'll read the entire table,
> *then* pick the first 100 rows. Why is that?

I'd suggest looking at EXPLAIN ANALYZE rather than plain EXPLAIN. In most
cases I've seen, the actual row counts show it only grabs limit+1 rows
(I think CVS tip will grab only the limit). EXPLAIN shows the full estimated
row count for the sequence scan, but notice that the Limit node's cost is
lower than the sequence scan's: EXPLAIN prices each node as if it ran to
completion, while the Limit actually stops pulling rows from the scan once
it has enough.
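
For illustration, a sketch of what the EXPLAIN ANALYZE output for the same
query could look like (timings are omitted and the actual-row figures are
hypothetical, shown only to make the limit+1 behaviour visible; this is not
output from a real run against email_rank):

  explain analyze select count(*)
  FROM (SELECT * FROM email_rank WHERE project_id = :ProjectID LIMIT 100) AS t1;

   Aggregate  (cost=111.32..111.32 rows=1 width=48) (actual time=... rows=1 loops=1)
     ->  Subquery Scan t1  (cost=0.00..111.07 rows=100 width=48) (actual time=... rows=100 loops=1)
           ->  Limit  (cost=0.00..111.07 rows=100 width=48) (actual time=... rows=100 loops=1)
                 ->  Seq Scan on email_rank  (cost=0.00..76017.40 rows=68439 width=48) (actual time=... rows=101 loops=1)
                       Filter: (project_id = 24)
   Total runtime: ... msec

The estimated rows=68439 on the Seq Scan line stays the same as in plain
EXPLAIN, but the actual rows on that node come out around 101 (limit+1)
rather than the whole table, which is the point above.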
