From: Julien Rouhaud <rjuju123(at)gmail(dot)com>
To: Michael Paquier <michael(at)paquier(dot)xyz>
Cc: Sergei Kornilov <sk(at)zsrv(dot)org>, Guillaume Lelarge <guillaume(at)lelarge(dot)info>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Expose lock group leader pid in pg_stat_activity
Date: 2020-01-17 15:48:53
Message-ID: CAOBaU_aRQY4TBBhySC0gFN=ZJBxCYv-aPs3Kjq+DWDniq8xRcQ@mail.gmail.com
Lists: pgsql-hackers

On Thu, Jan 16, 2020 at 8:28 AM Michael Paquier <michael(at)paquier(dot)xyz> wrote:
>
> On Fri, Dec 27, 2019 at 10:15:33AM +0100, Julien Rouhaud wrote:
> > I think that not using "parallel" to name this field will help to
> > avoid confusion if the lock group infrastructure is eventually used
> > for something else, but that's only true if indeed we explain what a
> > lock group is.
>
> As you already pointed out, src/backend/storage/lmgr/README includes a
> full description of this stuff under the section "Group Locking". So
> I agree that the patch ought to document your new field in a much
> better way. Not mentioning the term "group locking" at all is even
> better, so as not to confuse the reader, since that term is not
> present in the docs.
>
> > The leader_pid is NULL for processes not involved in a parallel query.
> > When a process wants to cooperate with parallel workers, it becomes a
> > lock group leader, which means that this field will be set to its
> > own pid. When a parallel worker starts up, this field will be set
> > to the leader's pid.
>
> The first sentence is good to have. Now instead of "lock group
> leader", I think that we had better use "parallel group leader" as in
> other parts of the docs (see wait events for example).

Ok, I'll change it that way.
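
For reference, assuming the column keeps the leader_pid name, the
behaviour described above boils down to this kind of query (only a
sketch, column list abbreviated):

    -- leader: leader_pid = pid; worker: leader_pid = leader's pid;
    -- backends not involved in a parallel query: leader_pid IS NULL
    SELECT pid, leader_pid, backend_type
      FROM pg_stat_activity
     WHERE leader_pid IS NOT NULL;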

> Then we just
> need to say that if leader_pid has the same value as
> pg_stat_activity.pid, then we have a group leader. If not, then it is
> a parallel worker process initially spawned by the leader whose PID is
> leader_pid (when executing Gather, Gather Merge, soon-to-be parallel
> vacuum or for a parallel btree build, but this does not need a mention
> in the docs). There could be an argument as well to have leader_pid
> set to NULL for a leader, but that would be inconsistent with what
> the PGPROC entry reports for the backend.

It would also slightly complicate getting the full set of backends
involved in a parallel query, whereas excluding the leader is entirely
trivial.
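
For instance, assuming the field is exposed as leader_pid and taking
12345 as a hypothetical leader pid, both queries stay simple with the
current approach:

    -- every backend cooperating in that parallel query, leader included
    SELECT pid, leader_pid, query
      FROM pg_stat_activity
     WHERE leader_pid = 12345;

    -- workers only, leader excluded
    SELECT pid, leader_pid, query
      FROM pg_stat_activity
     WHERE leader_pid = 12345 AND pid <> leader_pid;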

> While looking at the code, I think that we could refactor things a bit
> for raw_wait_event, wait_event_type and wait_event, which have some
> duplicated code for backends and auxiliary processes. What about
> filling in the wait event data after fetching the PGPROC entry, and
> also filling in leader_pid for auxiliary processes? This does not
> matter now, and perhaps it never will, but that would make the
> code much more consistent.

Yeah, I didn't think that auxiliary processes would be involved any
time soon, but I can include this refactoring.
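
Something like this, roughly (only a sketch based on the existing
pg_stat_get_activity() code; the column index is a placeholder and the
final patch may differ):

    /*
     * Fetch the PGPROC entry once, for regular backends and auxiliary
     * processes alike, then fill in the wait event data and leader_pid
     * from it in a single place.
     */
    proc = BackendPidGetProc(beentry->st_procpid);
    if (proc == NULL && beentry->st_backendType != B_BACKEND)
        proc = AuxiliaryPidGetProc(beentry->st_procpid);

    if (proc != NULL)
    {
        wait_event_type = pgstat_get_wait_event_type(proc->wait_event_info);
        wait_event = pgstat_get_wait_event(proc->wait_event_info);

        if (proc->lockGroupLeader != NULL)
        {
            /* 29 is only a placeholder for the new column's index */
            values[29] = Int32GetDatum(proc->lockGroupLeader->pid);
            nulls[29] = false;
        }
    }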
