From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Thomas Kellerer <spam_eater(at)gmx(dot)net>
Cc: pgsql-performance(at)lists(dot)postgresql(dot)org
Subject: Re: Pg10 : Client Configuration for Parallelism ?
Date: 2019-04-19 13:43:18
Message-ID: 28400.1555681398@sss.pgh.pa.us
Lists: pgsql-performance
Thomas Kellerer <spam_eater(at)gmx(dot)net> writes:
> laurent(dot)dechambe(at)orange(dot)com wrote on 17.04.2019 at 16:33:
>> On JDBC it seems this is equivalent to writing:
>> statement.setMaxRows(0); // parallelism authorized, which is the default.
>>
>> Thus in my basic JDBC program, if I add:
>> statement.setMaxRows(100); // No parallelism allowed (at least in Pg10)

> This isn't limited to Statement.setMaxRows().
> If you use "LIMIT x" in your SQL query, the same thing happens.
No, not true: queries with LIMIT x are perfectly parallelizable.
The trouble with the protocol-level limit (setMaxRows) is that it
requires being able to suspend the query and resume fetching rows
later. We don't allow that for parallel query because it would
involve tying up vastly more resources, i.e. a bunch of worker
processes, not just some extra memory in the client's own backend.
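The distinction can be sketched in JDBC terms. This is a hedged illustration, not part of the original message: the connection URL, credentials, and the table name `big_table` are placeholders, and the comments restate the behavior described above (a `LIMIT` in the SQL remains parallelizable, while the driver-level `setMaxRows` row limit forces a suspendable portal and thus a non-parallel plan). It requires a live PostgreSQL server to run.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ParallelismSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details -- adjust for your environment.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/testdb", "user", "secret")) {

            // Case 1: LIMIT inside the SQL. The server knows the query runs
            // to completion in one execution, so it can still choose a
            // parallel plan (Gather + worker processes).
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery(
                         "EXPLAIN SELECT * FROM big_table LIMIT 100")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }

            // Case 2: protocol-level limit via setMaxRows. The driver caps
            // the number of rows fetched per execution, which implies the
            // query may be suspended and resumed later -- so the server
            // avoids parallel workers for it.
            try (Statement st = conn.createStatement()) {
                st.setMaxRows(100);
                try (ResultSet rs = st.executeQuery(
                        "SELECT * FROM big_table")) {
                    while (rs.next()) {
                        // at most 100 rows, executed without parallelism
                    }
                }
            }
        }
    }
}
```

Comparing the `EXPLAIN` output of the two cases on a sufficiently large table should show a Gather node (parallel workers) only in the first.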
regards, tom lane