Re: Performance query about large tables, lots of concurrent access

From: Gregory Stark <stark(at)enterprisedb(dot)com>
To: "Karl Wright" <kwright(at)metacarta(dot)com>
Cc: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>, <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Performance query about large tables, lots of concurrent access
Date: 2007-06-19 16:12:05
Message-ID: 87y7if7wkq.fsf@oxford.xeocode.com
Lists: pgsql-performance

"Gregory Stark" <stark(at)enterprisedb(dot)com> writes:

> "Karl Wright" <kwright(at)metacarta(dot)com> writes:
>
>>> In this case it looks like the planner is afraid that that's exactly
>>> what will happen --- a cost of 14177 suggests that several thousand row
>>> fetches are expected to happen, and yet it's only predicting 5 rows out
>>> after the filter. It's using this plan anyway because it has no better
>>> alternative, but you should think about whether a different index
>>> definition would help.
>
> Another index won't help if the reason the cost is so high isn't because the
> index isn't very selective but because there are lots of dead tuples.

Sorry, I didn't mean to say that was definitely the case, only that a
bloated table with lots of dead index pointers could produce similar
symptoms, since the query still has to follow all those index pointers.
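
For what it's worth, here is one way to check which situation you're in.
This is only a sketch: "mytable" is a hypothetical name, and n_dead_tup
in pg_stat_user_tables is only available on versions that expose it.

    -- VACUUM VERBOSE reports how many dead row versions it found in the
    -- table and how many index entries it removed, which helps
    -- distinguish bloat from a genuinely unselective index.
    VACUUM VERBOSE mytable;

    -- Where the statistics collector exposes a dead-tuple count, you
    -- can also rank tables by it directly:
    SELECT relname, n_live_tup, n_dead_tup
      FROM pg_stat_user_tables
     ORDER BY n_dead_tup DESC;

If VACUUM reports it removed a large number of dead rows and the query
speeds up afterwards, bloat was the problem rather than the index
definition.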

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
