Re: Effect of changing the value for PARALLEL_TUPLE_QUEUE_SIZE

From: Andres Freund <andres(at)anarazel(dot)de>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Ashutosh Bapat <ashutosh(dot)bapat(at)enterprisedb(dot)com>, Rafia Sabih <rafia(dot)sabih(at)enterprisedb(dot)com>, PostgreSQL Developers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Effect of changing the value for PARALLEL_TUPLE_QUEUE_SIZE
Date: 2017-05-30 16:26:17
Message-ID: 20170530162617.ex5lxepgwp3bezpd@alap3.anarazel.de
Lists: pgsql-hackers

On 2017-05-30 07:27:12 -0400, Robert Haas wrote:
> The other is that I figured 64k was small enough that nobody would
> care about the memory utilization. I'm not sure we can assume the
> same thing if we make this bigger. It's probably fine to use a 6.4M
> tuple queue for each worker if work_mem is set to something big, but
> maybe not if work_mem is set to the default of 4MB.

Probably not. It might also end up being detrimental performance-wise,
because we start touching more memory. I guess it'd make sense to set
it in the planner, based on a) the size of work_mem and b) the number
of expected tuples.
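
Something like the below is roughly what I mean - just a sketch; the
function name, the per-tuple overhead and the 1/4 cap are made up,
only work_mem and the current 64kB PARALLEL_TUPLE_QUEUE_SIZE are real:

    /*
     * Hypothetical planner-side sizing of the per-worker tuple queue,
     * instead of the fixed 64kB PARALLEL_TUPLE_QUEUE_SIZE.
     */
    static Size
    choose_tuple_queue_size(double est_rows, int est_width)
    {
        /* work_mem is in kB; don't let one queue eat more than 1/4 of it */
        Size        cap = (Size) work_mem * 1024L / 4;
        /* room for the expected output plus some per-tuple overhead */
        Size        wanted = (Size) (est_rows * (est_width + 24));

        wanted = Min(wanted, cap);

        /* but never go below the current 64kB */
        return Max(wanted, (Size) 65536);
    }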

I do wonder whether the larger size fixes some scheduling issue
(i.e. while one backend is scheduled out, the other side of the queue
can continue), or whether it's largely triggered by fixable contention
inside the queue. I'd guess it's a bit of both. It should be
measurable in some cases, by comparing the amount of time spent
blocking on reads from the queue (or continuing because the queue is
empty), the time spent on writes to the queue (which should always
result in blocking), and the time spent waiting for the spinlock.
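
E.g. for the write side, wrapping the send into the queue with
instr_time accounting should give a first picture. Sketch only:
shm_mq_send() and the INSTR_TIME_* macros exist, but mqh/nbytes/data
stand in for the real arguments, and time_blocked_sending is a
hypothetical counter that'd still have to be reported somewhere:

    instr_time  start;
    instr_time  end;

    INSTR_TIME_SET_CURRENT(start);

    /* blocking send of one tuple into the queue (result check elided) */
    shm_mq_send(mqh, nbytes, data, false);

    INSTR_TIME_SET_CURRENT(end);

    /* time_blocked_sending += (end - start) */
    INSTR_TIME_ACCUM_DIFF(time_blocked_sending, end, start);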

- Andres
