Re: Planner issue on sorting joining of two tables with limit

From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Alexander Korotkov" <aekorotkov(at)gmail(dot)com>
Cc: <pgsql-performance(at)postgresql(dot)org>, "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Subject: Re: Planner issue on sorting joining of two tables with limit
Date: 2010-05-07 15:27:02
Message-ID: 4BE3EAF60200002500031399@gw.wicourts.gov
Lists: pgsql-performance

Alexander Korotkov <aekorotkov(at)gmail(dot)com> wrote:
> Alexander Korotkov <aekorotkov(at)gmail(dot)com> wrote:

>>> Well, no, because that plan wouldn't produce the specified
>>> ordering; or at least it would be a lucky coincidence if it did.
>>> It's only sorting on t1.value.
>>>
>> I just don't see why it is a coincidence. I think that such a
>> plan will always produce a result ordered by the two columns,
>> because such a nested index scan always produces this result.

Assuming a nested index scan, or any particular plan, is unwise.
New data or just the "luck of the draw" on your next ANALYZE could
result in a totally different plan which wouldn't produce the same
ordering unless specified.

> I found my mistake. My supposition works only if the value column
> in table t1 is unique. But if I replace the index with a unique
> one, the plan stays the same.

Yeah, maybe, for the moment. When you have ten times the quantity
of data, a completely different plan may be chosen. If you want a
particular order, ask for it. The planner will even take the
requested ordering into account when choosing a plan, so the cutoff
for switching to an in-memory hash table or a bitmap index scan
might shift a bit based on the calculated cost of sorting data.
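
You can watch that happen with EXPLAIN (same sketch schema as in
the query above):

    EXPLAIN ANALYZE
    SELECT t1.value, t2.value
      FROM t1
      JOIN t2 ON t2.t1_id = t1.id
     ORDER BY t1.value, t2.value
     LIMIT 10;

Run it now, and again after the tables have grown tenfold; the join
method may well change, but the ORDER BY keeps the output stable.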

-Kevin
