Re: [HACKERS] Parallel tuplesort (for parallel B-Tree index creation)

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Peter Geoghegan <pg(at)bowt(dot)ie>
Cc: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Rushabh Lathia <rushabh(dot)lathia(at)gmail(dot)com>, Heikki Linnakangas <hlinnaka(at)iki(dot)fi>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>, Corey Huinker <corey(dot)huinker(at)gmail(dot)com>
Subject: Re: [HACKERS] Parallel tuplesort (for parallel B-Tree index creation)
Date: 2018-01-11 21:44:52
Message-ID: CA+TgmoYjtFyd+hbGdYK1kszPorv8LExgwgRMP=rxCLJ2qXg6dw@mail.gmail.com
Lists: pgsql-hackers

On Thu, Jan 11, 2018 at 3:25 PM, Peter Geoghegan <pg(at)bowt(dot)ie> wrote:
> On Thu, Jan 11, 2018 at 12:06 PM, Peter Geoghegan <pg(at)bowt(dot)ie> wrote:
>> It might make sense to have the "minimum memory per participant" value
>> come from a GUC, rather than be hard coded (it's currently hard-coded
>> to 32MB).
>
>> What do you think of that idea?
>
> A third option here is to specifically recognize that
> compute_parallel_worker() returned a value based on the table storage
> param max_workers, and for that reason alone no "insufficient memory
> per participant" decrementing/vetoing should take place. That is, when
> the max_workers param is set, perhaps it should be completely
> impossible for CREATE INDEX to ignore it for any reason other than an
> inability to launch parallel workers (though that could be due to the
> max_parallel_workers GUC's setting).
>
> You could argue that we should do this anyway, I suppose.

Yes, I think this sounds like a good idea.
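
The decrementing/vetoing behavior under discussion might be sketched roughly as follows. This is a hypothetical illustration, not PostgreSQL source: the function name, parameters, and the explicit `from_storage_param` flag are all invented for clarity, and the fixed per-participant floor stands in for the proposed GUC (currently hard-coded to 32MB).

```c
#include <assert.h>

/*
 * Hypothetical sketch (not actual PostgreSQL code): clamp a requested
 * parallel worker count so that each participant (leader + workers)
 * gets at least a minimum share of maintenance_work_mem -- unless the
 * count came from the table storage parameter, in which case it is
 * honored as-is, per the proposal above.
 */
static int
clamp_index_workers(int requested_workers,
                    long maintenance_work_mem_kb,
                    long min_mem_per_participant_kb,
                    int from_storage_param)
{
    /* Storage param wins: no "insufficient memory" veto. */
    if (from_storage_param)
        return requested_workers;

    /* Decrement until each participant gets at least the minimum. */
    while (requested_workers > 0 &&
           maintenance_work_mem_kb / (requested_workers + 1) <
               min_mem_per_participant_kb)
        requested_workers--;

    return requested_workers;
}
```

For example, with maintenance_work_mem = 64MB and a 32MB-per-participant floor, a request for 4 workers would be cut down to 1 (leader plus one worker, 32MB each), while the same request coming from the storage parameter would go through untouched.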

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
