Re: Parallel Seq Scan

From: Jim Nasby <Jim(dot)Nasby(at)BlueTreble(dot)com>
To: Stephen Frost <sfrost(at)snowman(dot)net>, Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Fabrízio Mello <fabriziomello(at)gmail(dot)com>, Thom Brown <thom(at)linux(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Parallel Seq Scan
Date: 2015-01-08 19:32:15
Message-ID: 54AEDB3F.1000806@BlueTreble.com
Lists: pgsql-hackers

On 1/5/15, 9:21 AM, Stephen Frost wrote:
> * Robert Haas (robertmhaas(at)gmail(dot)com) wrote:
>> I think it's right to view this in the same way we view work_mem. We
>> plan on the assumption that an amount of memory equal to work_mem will
>> be available at execution time, without actually reserving it.
>
> Agreed- this seems like a good approach for how to address this. We
> should still be able to end up with plans which use less than the max
> possible parallel workers though, as I pointed out somewhere up-thread.
> This is also similar to work_mem- we certainly have plans which don't
> expect to use all of work_mem and others that expect to use all of it
> (per node, of course).

I agree, but we should try to warn the user if they set parallel_seqscan_degree close to max_worker_processes, or at least give some indication of what's going on. Otherwise this is the kind of thing people will end up beating their heads against, wondering why parallelism isn't kicking in.

Perhaps we could have EXPLAIN throw a warning if a plan is likely to get fewer than parallel_seqscan_degree workers.
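
To make the concern concrete, here's a rough sketch of the kind of check EXPLAIN could do before warning. This is plain Python rather than actual PostgreSQL code, and the function names and the "workers already in use" input are just assumptions for illustration:

# Hypothetical illustration, not PostgreSQL internals: estimate whether a
# parallel seq scan will get fewer workers than parallel_seqscan_degree asks for.

def workers_likely_available(max_worker_processes, workers_in_use,
                             parallel_seqscan_degree):
    """Number of background workers a scan can realistically get."""
    free_slots = max(max_worker_processes - workers_in_use, 0)
    return min(parallel_seqscan_degree, free_slots)

def maybe_warn(max_worker_processes, workers_in_use, parallel_seqscan_degree):
    got = workers_likely_available(max_worker_processes, workers_in_use,
                                   parallel_seqscan_degree)
    if got < parallel_seqscan_degree:
        print("WARNING: plan requests %d workers but only %d are likely "
              "to be available" % (parallel_seqscan_degree, got))

# Example: degree set right at the limit while other workers are busy.
maybe_warn(max_worker_processes=8, workers_in_use=6, parallel_seqscan_degree=8)
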
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com
