Re: [DESIGN] ParallelAppend

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Thom Brown <thom(at)linux(dot)com>
Cc: Kouhei Kaigai <kaigai(at)ak(dot)jp(dot)nec(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>
Subject: Re: [DESIGN] ParallelAppend
Date: 2015-11-17 20:08:23
Message-ID: CA+TgmoaATP3p8MP+CHJ2kzp6OTUnhX3hNCV1sEyiRF-B+R1vZg@mail.gmail.com
Lists: pgsql-hackers

On Tue, Nov 17, 2015 at 4:26 AM, Thom Brown <thom(at)linux(dot)com> wrote:
> Okay, I've tried this patch.

Thanks!

> Yes, it's working!

Woohoo.

> However, the first parallel seq scan shows it getting 170314 rows.
> Another run shows it getting 194165 rows. The final result is
> correct, but as you can see from the rows on the Append node (59094295
> rows), it doesn't match the number of rows on the Gather node
> (30000000).

Is this the same issue reported in
http://www.postgresql.org/message-id/CAFj8pRBF-i=qDg9b5nZrXYfChzBEZWmthxYPhidQvwoMOjHtzg@mail.gmail.com
and not yet fixed? I am inclined to think it probably is.

> And also, for some reason, I can no longer get this using more than 2
> workers, even with max_worker_processes = 16 and max_parallel_degree =
> 12. I don't know if that's anything to do with this patch though.

The number of workers is limited based on the size of the largest
table involved in the Append. That probably needs considerable
improvement, of course, but this patch is still a step forward over
not-this-patch.
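
The size-based cap described above can be modeled roughly as follows. This is a hypothetical Python sketch, not the patch's actual code: the log-3 scaling and the min_scan_pages / max_workers parameters are assumptions, loosely mirroring the heuristic PostgreSQL later shipped as compute_parallel_worker(), where one worker is added each time the table size triples past a minimum threshold.

```python
def workers_for_table(pages, min_scan_pages=1024, max_workers=12):
    """Rough model of a size-based worker cap: tables below the
    minimum get no workers; otherwise add one worker each time the
    table size triples past min_scan_pages, up to max_workers.
    All names and constants here are illustrative assumptions."""
    if pages < min_scan_pages:
        return 0
    workers = 1
    size = pages / 3
    while size >= min_scan_pages:
        workers += 1
        size /= 3
    return min(workers, max_workers)
```

Under this model, an Append whose largest member table is small yields few workers no matter how large max_parallel_degree is, which would be consistent with the behavior Thom observed.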

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
