Re: Sort performance cliff with small work_mem

From: Peter Geoghegan <pg(at)bowt(dot)ie>
To: Heikki Linnakangas <hlinnaka(at)iki(dot)fi>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Robert Haas <robertmhaas(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Sort performance cliff with small work_mem
Date: 2018-05-02 17:48:37
Message-ID: CAH2-Wzm8yJamoxq6rby+h68jzqX+4hzX-L6cvMhu=sQycSVuMg@mail.gmail.com
Lists: pgsql-hackers

On Wed, May 2, 2018 at 10:43 AM, Heikki Linnakangas <hlinnaka(at)iki(dot)fi> wrote:
> Independently of this, perhaps we should put in a special case in
> dumptuples(), so that it would never create a run with fewer than
> maxTapes tuples. The rationale is that you'll need to hold that many
> tuples in memory during the merge phase anyway, so it seems silly to
> bail out before that while building the initial runs. You're going to
> exceed work_mem by roughly the same amount anyway, just in a different
> phase. That's not the case in this example, but it might happen when
> sorting extremely wide tuples.
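
(To make the proposal concrete, here is a minimal self-contained sketch
of the guard being described. The struct and helper are illustrative
stand-ins, loosely modeled on tuplesort.c's Tuplesortstate fields, not
the actual code:)

    #include <stdbool.h>

    /* Simplified stand-in for the relevant Tuplesortstate fields. */
    typedef struct SortState
    {
        int     memtupcount;    /* tuples buffered for the current run */
        int     maxTapes;       /* number of tapes available for merging */
        long    availMem;       /* work_mem bytes remaining */
    } SortState;

    /*
     * Proposed guard: never end a run before it holds at least maxTapes
     * tuples, since the merge phase must hold one tuple per tape in
     * memory anyway; otherwise apply the usual out-of-memory cutoff.
     */
    static bool
    should_end_run(const SortState *state)
    {
        if (state->memtupcount < state->maxTapes)
            return false;
        return state->availMem <= 0;
    }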

-1 from me. What about the case where only some tuples are massive?
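
(Hypothetical numbers to illustrate the concern: if maxTapes is in the
hundreds and a few input tuples are megabytes wide, forcing every run to
buffer at least maxTapes tuples could pin several of those wide tuples
in memory at once, overshooting work_mem by far more than the merge
phase's one-tuple-per-tape requirement.)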

--
Peter Geoghegan
