Re: Performance query about large tables, lots of concurrent access

From: Gregory Stark <stark(at)enterprisedb(dot)com>
To: "Karl Wright" <kwright(at)metacarta(dot)com>
Cc: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>, <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Performance query about large tables, lots of concurrent access
Date: 2007-06-19 15:46:20
Message-ID: 873b0o7xrn.fsf@oxford.xeocode.com
Lists: pgsql-performance


"Karl Wright" <kwright(at)metacarta(dot)com> writes:

>> In this case it looks like the planner is afraid that that's exactly
>> what will happen --- a cost of 14177 suggests that several thousand row
>> fetches are expected to happen, and yet it's only predicting 5 rows out
>> after the filter. It's using this plan anyway because it has no better
>> alternative, but you should think about whether a different index
>> definition would help.

Another index won't help if the cost is high not because the index is
unselective but because the table contains a lot of dead tuples.
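One way to check whether dead tuples are the culprit is a sketch along these
lines (the table name `doc_table` is a placeholder; the `n_live_tup` and
`n_dead_tup` columns of `pg_stat_user_tables` are available in newer
PostgreSQL releases, while `VACUUM VERBOSE` reports dead row versions in
older ones as well):

```sql
-- Dead-tuple counts per table, from the statistics collector
-- (columns assumed present; check your server version).
SELECT relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;

-- If dead tuples dominate, reclaim the space, refresh the
-- planner's statistics, and re-run EXPLAIN on the query:
VACUUM VERBOSE ANALYZE doc_table;
```

If the dead-tuple count stays high after vacuuming, a long-running
transaction may be preventing their removal.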

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
