Re: Parallel Seq Scan

From: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Bert <biertie(at)gmail(dot)com>, Jeff Janes <jeff(dot)janes(at)gmail(dot)com>, Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Parallel Seq Scan
Date: 2015-11-20 04:59:15
Message-ID: CAA4eK1JG4m-47VC87iGQKKedzkGQ0EyZwC+HhxchqSrUj2YajA@mail.gmail.com
Lists: pgsql-hackers

On Thu, Nov 19, 2015 at 9:29 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>
> On Wed, Nov 18, 2015 at 10:41 PM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> > I think what's going on here is that when any of the sessions doesn't
> > get any workers, we shut down the Gather node, which internally destroys
> > the dynamic shared memory segment as well. However, the same segment is
> > needed, as per the current design, for the master backend to do the
> > scan as well. So I think the fix would be to just do a shutdown of the
> > workers, which actually won't do anything in this scenario.
>
> It seems silly to call ExecGatherShutdownWorkers() here when that's
> going to be a no-op. I think we should just remove that line and the
> if statement before it altogether and replace it with a comment
> explaining why we can't nuke the DSM at this stage.
>

Isn't it better to destroy the memory for the readers array, as that gets
allocated even if there are no workers available for execution?

The attached patch fixes the issue by destroying just the readers array.

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

Attachment Content-Type Size
fix_early_dsm_destroy_v2.patch application/octet-stream 894 bytes
