Re: Help Me Understand Why I'm Getting a Bad Query Plan

From: Bryan Murphy <bmurphy1976(at)gmail(dot)com>
To: Josh Berkus <josh(at)agliodbs(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Help Me Understand Why I'm Getting a Bad Query Plan
Date: 2009-03-25 03:43:37
Message-ID: 7fd310d10903242043x28d24adenb729c0a77a21ce9@mail.gmail.com
Lists: pgsql-performance

On Tue, Mar 24, 2009 at 9:30 PM, Josh Berkus <josh(at)agliodbs(dot)com> wrote:
> For some reason, your first post didn't make it to the list, which is why
> nobody responded.

Weird... I've been having problems with gmail and google reader all week.

>>> I've got a query on our production system that isn't choosing a good
>>> plan.  I can't see why it's choosing to do a sequential scan on the
>>> ItemExperienceLog table.  That table is about 800mb and has about 2.5
>>> million records.  This example query only returns 4 records.  I've
>>> tried upping the statistics for ItemExperienceLog.VistorId and
>>> ItemExperienceLog.ItemId to 1000 (from our default of 100) with no
>>> success.
>
> Yes, that is kind of inexplicable.  For some reason, it's assigning a very
> high cost to the nestloops, which is why it wants to avoid them with a seq
> scan.  Can you try lowering cpu_index_cost to 0.001 and see how that affects
> the plan?
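
[For anyone following the thread: the statistics change described in the quoted message would have looked roughly like this. A sketch only; table and column names are taken verbatim from the message above.]

```sql
-- Raise the per-column statistics target for the two join columns,
-- then re-sample so the planner actually sees the new histogram.
ALTER TABLE ItemExperienceLog ALTER COLUMN VistorId SET STATISTICS 1000;
ALTER TABLE ItemExperienceLog ALTER COLUMN ItemId   SET STATISTICS 1000;
ANALYZE ItemExperienceLog;
```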

I'm assuming you meant cpu_index_tuple_cost. I changed that to 0.001
as you suggested, forced Postgres to reload its configuration, and I'm
still getting the same execution plan.
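
[For reference, a change like that can be applied in either of two ways. A sketch, assuming a superuser session:]

```sql
-- Option 1: edit postgresql.conf (cpu_index_tuple_cost = 0.001),
-- then reload without restarting the server:
SELECT pg_reload_conf();

-- Option 2: test the effect in the current session only,
-- without touching the server-wide configuration:
SET cpu_index_tuple_cost = 0.001;
-- ...then re-run EXPLAIN ANALYZE on the problem query.
```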

Looking through our configuration one more time, I see that at some
point I set random_page_cost to 2.0, but I don't see any other changes
to query planner settings from their default values.
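
[A quick way to audit this, rather than reading the config file by eye, is to ask pg_settings which values were not left at their defaults. A sketch:]

```sql
-- List settings whose value came from somewhere other than the
-- built-in default (e.g. postgresql.conf, SET, command line).
SELECT name, setting, source
  FROM pg_settings
 WHERE source <> 'default'
 ORDER BY name;
```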

Bryan
