Re: Adding additional index causes 20,000x slowdown for certain select queries - postgres 9.0.3

From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Timothy Garnett" <tgarnett(at)panjiva(dot)com>, <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Adding additional index causes 20,000x slowdown for certain select queries - postgres 9.0.3
Date: 2011-03-16 16:40:50
Message-ID: 4D80A1C2020000250003B99D@gw.wicourts.gov
Lists: pgsql-performance

Timothy Garnett <tgarnett(at)panjiva(dot)com> wrote:

>>> -> Index Scan Backward using
>>> index_customs_records_on_month_and_bl_number on customs_records
>>> (cost=0.00..78426750.74 rows=48623 width=908) (actual
>>> time=171344.182..3858893.588 rows=100 loops=1)

We've seen a lot of these lately -- Index Scan Backward performing
far worse than the alternatives. One part of it is that disk sectors
are laid out for optimal performance on forward scans; another is
that I don't think we've properly accounted for the higher cost of
moving backward through our btree indexes. To quote from the README
for the btree AM:

| A backwards scan has one additional bit of complexity: after
| following the left-link we must account for the possibility that
| the left sibling page got split before we could read it. So, we
| have to move right until we find a page whose right-link matches
| the page we came from. (Actually, it's even harder than that; see
| deletion discussion below.)
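The extra work the README describes can be sketched in a toy model (hypothetical page/link names; the real logic lives in PostgreSQL's nbtree code, this is only an illustration of the sibling walk):

```python
# Toy model of the backward-scan step quoted above. Pages are dicts
# with "left" and "right" sibling links, keyed by a page id.

def backward_step(pages, current_id):
    """Move one page to the left, allowing for a concurrent split.

    After following the left-link, the left sibling may have been
    split before we read it, so we keep moving right until we find
    the page whose right-link points back at the page we came from.
    """
    candidate = pages[current_id]["left"]
    while pages[candidate]["right"] != current_id:
        candidate = pages[candidate]["right"]
    return candidate

# Example: page "A" sat to the left of "B", then split into "A" and
# "A2", so B's left-link is stale; the walk recovers the true neighbor.
pages = {
    "B":  {"left": "A",  "right": None},
    "A":  {"left": None, "right": "A2"},
    "A2": {"left": "A",  "right": "B"},
}
print(backward_step(pages, "B"))  # "A2", not the stale "A"
```

On a forward scan none of this right-ward probing is needed, which is part of why the backward direction is intrinsically more expensive.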

I'm wondering whether the planner should have some multiplier or
other adjustment to attempt to approximate the known higher cost of
backward scans.
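The shape of such an adjustment might look like the following sketch. The multiplier value and function name are made up for illustration; this is not PostgreSQL planner code, just the idea of scaling a scan's cost estimate by direction:

```python
# Hypothetical illustration of a direction-aware cost adjustment.
# The penalty factor is invented; choosing a real value would require
# benchmarking forward vs. backward index scans.
BACKWARD_SCAN_PENALTY = 2.0

def index_scan_cost(pages_fetched, page_cost, backward=False):
    """Estimate I/O cost for an index scan, penalizing backward scans."""
    cost = pages_fetched * page_cost
    if backward:
        cost *= BACKWARD_SCAN_PENALTY
    return cost

print(index_scan_cost(100, 1.0))                 # 100.0
print(index_scan_cost(100, 1.0, backward=True))  # 200.0
```

With a higher estimated cost, the planner would more often prefer an alternative plan (a forward scan of a descending index, or a sort) over a large backward scan.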

-Kevin
