
Re: patch for parallel pg_dump

From: Joachim Wieland <joe(at)mcknight(dot)de>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Alvaro Herrera <alvherre(at)commandprompt(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>, Andrew Dunstan <andrew(at)dunslane(dot)net>
Subject: Re: patch for parallel pg_dump
Date: 2012-03-29 01:54:53
Message-ID:
Lists: pgsql-hackers
On Wed, Mar 28, 2012 at 1:46 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> I'm wondering if we really need this much complexity around shutting
> down workers.  I'm not sure I understand why we need both a "hard" and
> a "soft" method of shutting them down.  At least on non-Windows
> systems, it seems like it would be entirely sufficient to just send a
> SIGTERM when you want them to die.  They don't even need to catch it;
> they can just die.

At least on my Linux test system, even if all pg_dump processes are
gone, the server happily continues sending data. When I strace an
individual backend process, I see a lot of Broken pipe writes, but
that doesn't stop it from just writing out the whole table to a closed
file descriptor. This is a 9.0-latest server.

--- SIGPIPE (Broken pipe) @ 0 (0) ---
read(13, "\220\370\0\0\240\240r\266\3\0\4\0\264\1\320\1\0 \4\0\0\0\0\270\237\220\0h\237\230\0"..., 8192) = 8192
read(13, "\220\370\0\0\350\300r\266\3\0\4\0\264\1\320\1\0 \4\0\0\0\0\260\237\230\0h\237\220\0"..., 8192) = 8192
sendto(7, "d\0\0\0Acpp\t15.00000\t1245240000\taut"..., 8192, 0, NULL, 0) = -1 EPIPE (Broken pipe)
--- SIGPIPE (Broken pipe) @ 0 (0) ---
read(13, "\220\370\0\0000\341r\266\3\0\5\0\260\1\340\1\0 \4\0\0\0\0\270\237\220\0p\237\220\0"..., 8192) = 8192
sendto(7, "d\0\0\0Dcpp\t15.00000\t1245672000\taut"..., 8192, 0, NULL, 0) = -1 EPIPE (Broken pipe)
--- SIGPIPE (Broken pipe) @ 0 (0) ---
read(13,  <unfinished ...>

I guess that
would take care of that in the future, but without it (i.e. anything <=
9.2) it's quite annoying if you want to Ctrl-C a pg_dump and then have
to manually hunt down and kill all the backend processes.

I tested the above with immediately returning from
DisconnectDatabase() in pg_backup_db.c on Linux. The important thing
is that it calls PQcancel() on its connection before terminating.

On Windows, several pages indicate that you can only cleanly terminate
a thread from within the thread, e.g. the last paragraph on

The patch is basically doing what this page is recommending.

I'll try your proposal about terminating in the child when the
connection is closed, that sounds reasonable and I don't see an
immediate problem with that.

> The existing coding of on_exit_nicely is intentional, and copied from
> similar logic for on_shmem_exit in the backend. Is there really a
> compelling reason to reverse the firing order of exit hooks?

No, reversing the order was not intended. I rewrote it as a for loop
because the current implementation modifies a global variable, and so
on Windows only one thread would run the exit hooks.

I'll add all your other suggestions to the next version of my patch. Thanks!


