Re: large tables and simple "= constant" queries using indexes

From: PFC <lists(at)peufeu(dot)com>
To: "John Beaver" <john(dot)e(dot)beaver(at)gmail(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: large tables and simple "= constant" queries using indexes
Date: 2008-04-09 22:31:00
Message-ID: op.t9c7hyd7cigqcu@apollo13.peufeu.com
Lists: pgsql-performance


> Hi, I've started my first project with Postgres (after several years of
> using Mysql), and I'm having an odd performance problem that I was
> hoping someone might be able to explain the cause of.
>
> ----My query----
> - select count(*) from gene_prediction_view where gene_ref = 523
> - takes 26 seconds to execute, and returns 2400 (out of a total of
> 15 million records in the table)
> ---My problem---
> Using a single-column index to count 2400 records which are exactly
> one constant value doesn't sound like something that would take 26
> seconds. What's the slowdown? Any silver bullets that might fix this?

* Please post an EXPLAIN ANALYZE of your query, which will let us choose
between these two options:
- If Postgres uses a bad plan (like a seq scan), you need to raise the
statistics target for this column.
- If you get the correct plan (index scan or bitmap index scan), then it
is likely that Postgres does one disk seek per row that has to be counted.
26 seconds for 2400 rows would be consistent with a 10 ms seek time. The
unmistakable sign is that re-running the query results in a very fast
runtime (I'd say a couple of ms for counting 2400 rows if no disk IO is
involved).
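To illustrate the two steps (the table and column names come from the
original query; the statistics target of 1000 is only an example value,
and this assumes gene_prediction_view is a plain table despite its name):

```sql
-- Step 1: show the actual plan and timing for the slow query
EXPLAIN ANALYZE
SELECT count(*) FROM gene_prediction_view WHERE gene_ref = 523;

-- Step 2: if the plan shows a seq scan, raise the per-column statistics
-- target (1000 is an example; 8.x defaults are much lower) and re-analyze
-- so the planner sees a better estimate for gene_ref
ALTER TABLE gene_prediction_view
  ALTER COLUMN gene_ref SET STATISTICS 1000;
ANALYZE gene_prediction_view;
```

Then re-run the EXPLAIN ANALYZE and compare the plans.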
