Re: strange parallel query behavior after OOM crashes

From: Neha Khatri <nehakhatri5(at)gmail(dot)com>
To: Kuntal Ghosh <kuntalghosh(dot)2007(at)gmail(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: strange parallel query behavior after OOM crashes
Date: 2017-04-04 06:46:18
Message-ID: CAFO0U+-E8yzchwVnvn5BeRDPgX2z9vZUxQ8dxx9c0XFGBC7N1Q@mail.gmail.com
Lists: pgsql-hackers

Looking further in this context, the number of active parallel workers is:
    parallel_register_count - parallel_terminate_count

Can the number of active workers ever be greater than max_parallel_workers? I
think not. Then why should there be a greater-than check in the following
condition:

    if (parallel && (BackgroundWorkerData->parallel_register_count -
                     BackgroundWorkerData->parallel_terminate_count) >=
        max_parallel_workers)

I feel there should be an Assert for the case
    (BackgroundWorkerData->parallel_register_count -
     BackgroundWorkerData->parallel_terminate_count) > max_parallel_workers

And the check could be
    if (parallel && (active_parallel_workers == max_parallel_workers))
then do not register a new parallel worker and return.

There should be no tolerance for the case where active_parallel_workers >
max_parallel_workers; after all, that is the purpose of max_parallel_workers.
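
To make the intent concrete, here is a minimal sketch of the shape I have in
mind. It is illustrative only: it assumes the existing BackgroundWorkerData
counters, active_parallel_workers is just a local name I am using for the
difference of the two counters, and the surrounding lock handling in
RegisterDynamicBackgroundWorker is elided.

    uint32      active_parallel_workers;

    active_parallel_workers =
        BackgroundWorkerData->parallel_register_count -
        BackgroundWorkerData->parallel_terminate_count;

    /* The counters should never allow more active workers than the limit. */
    Assert(active_parallel_workers <= max_parallel_workers);

    if (parallel && active_parallel_workers == max_parallel_workers)
    {
        /* Limit already reached; refuse to register another worker. */
        return false;
    }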

Or is it that multiple backends may try to register parallel workers at the
same time, and that is why the greater-than check is present?

Thoughts?

Regards,
Neha
