Re: BUG in 10.1 - dsa_area could not attach to a segment that has been freed

From: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Alexander Voytsekhovskyy <young(dot)inbox(at)gmail(dot)com>, PostgreSQL Bugs <pgsql-bugs(at)postgresql(dot)org>
Subject: Re: BUG in 10.1 - dsa_area could not attach to a segment that has been freed
Date: 2017-11-29 01:17:55
Message-ID: CAEepm=0VZbBmRD33uiTS5mM4c1tf7T_aH3tOb3isbXW+b9OrTQ@mail.gmail.com
Lists: pgsql-bugs

On Wed, Nov 29, 2017 at 1:33 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> Why not? Can't it just be that the workers are slow getting started?

In the normal non-error control flow, don't we expect
ExecShutdownGather() to run ExecParallelFinish() before
ExecParallelCleanup(), meaning that the leader waits for workers to
finish completely before it detaches itself? Doesn't that need to be
the case to avoid random "unable to map dynamic shared memory segment" and
"dsa_area could not attach to a segment that has been freed" errors,
and for the parallel instrumentation shown in EXPLAIN to be reliable?
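
For reference, the shutdown ordering I have in mind looks roughly like
the sketch below (paraphrased from nodeGather.c / execParallel.c, not a
verbatim copy -- the comments are mine):

    /* Sketch of the expected Gather shutdown ordering. */
    void
    ExecShutdownGather(GatherState *node)
    {
        /*
         * First wait for the workers to finish and collect their
         * instrumentation; this goes through ExecParallelFinish().
         */
        ExecShutdownGatherWorkers(node);

        /*
         * Only then destroy the parallel context, which detaches from
         * the DSM segment and the DSA area.
         */
        if (node->pei != NULL)
        {
            ExecParallelCleanup(node->pei);
            node->pei = NULL;
        }
    }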

Could it be that the leader thought that a worker didn't start up, but
in fact it did?

--
Thomas Munro
http://www.enterprisedb.com
