Re: crashes due to setting max_parallel_workers=0

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Rushabh Lathia <rushabh(dot)lathia(at)gmail(dot)com>
Cc: Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, David Rowley <david(dot)rowley(at)2ndquadrant(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: crashes due to setting max_parallel_workers=0
Date: 2017-03-27 15:57:35
Message-ID: CA+TgmoavXPCAHhg0TsuLrH4_meeZxp2Rw7jTmBBnw9D2pfHi2g@mail.gmail.com
Lists: pgsql-hackers

On Sat, Mar 25, 2017 at 12:18 PM, Rushabh Lathia
<rushabh(dot)lathia(at)gmail(dot)com> wrote:
> About the original issue reported by Tomas, I did more debugging and
> found that the problem was that gather_merge_clear_slots() was not
> returning a cleared slot when nreaders is zero (i.e. nworkers_launched = 0).
> Because of that, the scan continued even after all the tuples were
> exhausted, and ended up with a server crash in gather_merge_getnext().
> In the patch I also added an Assert into gather_merge_getnext() that the
> index should be less than nreaders + 1 (the leader).

Well, you and David Rowley seem to disagree on what the fix is here.
His patches posted upthread do A, and yours do B, and from a quick
look those things are not just different ways of spelling the same
underlying fix, but actually directly conflicting ideas about what the
fix should be. Any chance you can review his patches, and maybe he
can review yours, and we could try to agree on a consensus position?
:-)

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
