Re: Parallel threads in query

From: Andres Freund <andres(at)anarazel(dot)de>
To: Paul Ramsey <pramsey(at)cleverelephant(dot)ca>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Darafei Praliaskouski <me(at)komzpa(dot)net>, pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: Parallel threads in query
Date: 2018-11-01 17:15:27
Message-ID: 20181101171527.eyfammzxb3qtqxc3@alap3.anarazel.de
Lists: pgsql-hackers

On 2018-11-01 10:10:33 -0700, Paul Ramsey wrote:
> On Wed, Oct 31, 2018 at 2:11 PM Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>
> > Darafei "Komяpa" Praliaskouski <me(at)komzpa(dot)net> writes:
> > > Question is, what's the best policy to allocate cores so we can play nice
> > > with rest of postgres?
> >
> > There is not, because we do not use or support multiple threads inside
> > a Postgres backend, and have no intention of doing so any time soon.
>
> As a practical matter though, if we're multi-threading a heavy PostGIS
> function, presumably simply grabbing *every* core is not a recommended or
> friendly practice. My finger-in-the-wind guess would be that the value
> of max_parallel_workers_per_gather would be the most reasonable value to
> use to limit the number of cores a parallel PostGIS function should use.
> Does that make sense?

I'm not sure that's a good approximation. Postgres' own infrastructure
prevents every query from simultaneously using
max_parallel_workers_per_gather worker processes, because the global
max_worker_processes limit caps the total across all backends. I think
you'd want something very roughly approximating a global limit as well -
otherwise you'll either need to set the per-process limit far too low,
or overwhelm the machine with context switches.

Greetings,

Andres Freund
