On Sat, Feb 28, 2009 at 11:20 AM, Alexander Staubo <alex(at)bengler(dot)no> wrote:
> On Fri, Feb 27, 2009 at 11:54 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>> The problem here is that the planner estimates the cost of a Limit
>> plan node by adding up (1) the startup cost of the underlying plan
>> node, in this case 0 for the nested loop, and (2) a percentage of the run
>> cost, based on the ratio of the number of rows expected to be returned
>> to the total number of rows. In this case, the nested loop is
>> expected to return 6944 rows, so it figures it won't have to get very
>> far to find the 4 you requested.
>> I will think about this some more but nothing is occurring to me off
>> the top of my head.
> Thanks for explaining. Is there any way to rewrite the query in a way
> that will avoid the nested loop join -- other than actually disabling
> nested loop joins? If I do the latter, the resulting query uses a hash
> join and completes in 80-100 ms, which is still pretty bad,
> especially for a query that returns nothing, but a huge improvement
> over the unthinkable 4-5 seconds for the current query.
Can you post the schema for the two tables in question? Feel free to
omit any columns that aren't included in the query, but make sure to
include any unique indices, etc.
What do you have default_statistics_target set to? If it's less than
100, you should probably raise it to 100 and re-analyze (the default
value for 8.4 will be 100, but for 8.3 and prior it is 10).
What is the approximate total number of rows in each of these two
tables? Of the rows in section_items, how many have subject_type =