Re: Sort performance cliff with small work_mem

From: Peter Geoghegan <pg(at)bowt(dot)ie>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Heikki Linnakangas <hlinnaka(at)iki(dot)fi>, Robert Haas <robertmhaas(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Sort performance cliff with small work_mem
Date: 2018-05-02 18:12:15
Message-ID: CAH2-WzmtAuVzTkWEv-_W4+E063S-q-iErW_tR02qnDpo4qfwKw@mail.gmail.com
Lists: pgsql-hackers

On Wed, May 2, 2018 at 11:06 AM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> -1 from me. What about the case where only some tuples are massive?
>
> Well, what about it? If there are just a few wide tuples, then the peak
> memory consumption will depend on how many of those happen to be in memory
> at the same time ... but we have zero control over that in the merge
> phase, so why sweat about it here? I think Heikki's got a good idea about
> setting a lower bound on the number of tuples we'll hold in memory during
> run creation.

We don't have control over it, but I'm not excited about going out of
our way to always use more memory in dumptuples() just because doing so
is no worse than the worst case for merging. I am supportive of the
idea of making sure that the amount of memory left over for tuples is
reasonably in line with memtupsize at the point that the sort starts,
though.
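
To make the lower-bound idea concrete, here's a rough sketch of what I
understand Heikki to be proposing (MIN_RUN_TUPLES and
should_dump_tuples are illustrative names only, not actual tuplesort.c
symbols, and the floor value is just a placeholder):

#include <stdbool.h>

#define MIN_RUN_TUPLES 1024     /* assumed floor; actual value TBD */

/*
 * During run creation, refuse to dump the in-memory tuples to tape
 * until at least MIN_RUN_TUPLES have accumulated, even if that means
 * transiently exceeding work_mem when individual tuples are wide.
 */
static bool
should_dump_tuples(int memtupcount, long availMem)
{
    if (availMem >= 0)
        return false;           /* still within budget; keep accumulating */

    /*
     * Over budget: normally we'd start dumping, but hold off while we
     * have fewer than the floor, so a tiny work_mem plus a few wide
     * tuples can't degenerate into runs of a handful of tuples each.
     */
    return memtupcount >= MIN_RUN_TUPLES;
}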

--
Peter Geoghegan
