Re: Delay locking partitions during query execution

From: Amit Langote <Langote_Amit_f8(at)lab(dot)ntt(dot)co(dot)jp>
To: David Rowley <david(dot)rowley(at)2ndquadrant(dot)com>
Cc: PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Delay locking partitions during query execution
Date: 2019-01-29 06:42:43
Message-ID: fe648f94-9cff-4282-dd12-a1a5bf64cd3b@lab.ntt.co.jp
Lists: pgsql-hackers

On 2019/01/28 20:27, David Rowley wrote:
> On Mon, 28 Jan 2019 at 20:45, Amit Langote
> <Langote_Amit_f8(at)lab(dot)ntt(dot)co(dot)jp> wrote:
>> It seems to me that plancache.c doesn't really need to perform
>> AcquireExecutorLocks()/LockRelationOid() to learn that a partition's
>> reloptions property has changed to discard a generic plan and build a new
>> one. AFAICT, PlanCacheRelCallback() takes care of resetting a cached plan
>> by observing that an invalidation message that it received either from
>> the same session or from another session belongs to one of the relations
>> in PlannedStmt.relationOids. That list must already contain all
>> partitions' OIDs.
>
> Really? So when you tried my case you properly got a plan with a
> non-parallel Seq Scan on listp1?
>
> I imagine you didn't with yours since we check for relcache
> invalidations at the start of a transaction. I performed both my
> EXECUTEs in the same transaction.

Yeah, I performed each EXECUTE in a new transaction, so not the same case
as yours. Sorry about the noise.

However, I tried the example as you described it, and even on master
(without the patch) the plan *doesn't* change in response to the concurrent
update of reloptions.

session 1:
begin;
prepare q1 as select count(*) from listp;
explain (costs off, analyze, timing off, summary off) execute q1;
                                    QUERY PLAN
─────────────────────────────────────────────────────────────────────────────────
 Finalize Aggregate (actual rows=1 loops=1)
   ->  Gather (actual rows=3 loops=1)
         Workers Planned: 2
         Workers Launched: 2
         ->  Partial Aggregate (actual rows=1 loops=3)
               ->  Parallel Append (actual rows=66667 loops=3)
                     ->  Parallel Seq Scan on listp1 (actual rows=33333 loops=3)
                     ->  Parallel Seq Scan on listp2 (actual rows=50000 loops=2)
(8 rows)

session 2:
alter table listp1 set (parallel_workers=0);

session 1:
explain (costs off, analyze, timing off, summary off) execute q1;
                                    QUERY PLAN
─────────────────────────────────────────────────────────────────────────────────
 Finalize Aggregate (actual rows=1 loops=1)
   ->  Gather (actual rows=3 loops=1)
         Workers Planned: 2
         Workers Launched: 2
         ->  Partial Aggregate (actual rows=1 loops=3)
               ->  Parallel Append (actual rows=66667 loops=3)
                     ->  Parallel Seq Scan on listp1 (actual rows=33333 loops=3)
                     ->  Parallel Seq Scan on listp2 (actual rows=50000 loops=2)
(8 rows)

I then built master with -DCATCACHE_FORCE_RELEASE, and the plan does
change, but only because of syscache misses here and there that result in
opening the erring system catalog, which in turn does
AcceptInvalidationMessages().

In a normal build, invalidation messages don't seem to be processed even
when AcquireExecutorLocks() is called, which is perhaps not normal. It
seems that the following condition in LockRelationOid is never true when
called from AcquireExecutorLocks:

    if (res != LOCKACQUIRE_ALREADY_CLEAR)
    {
        AcceptInvalidationMessages();
        MarkLockClear(locallock);
    }

Bug, maybe?
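
For reference, here is roughly where that check gets exercised: a
simplified sketch of the AcquireExecutorLocks() loop in plancache.c,
written from memory rather than copied from the source (lock-mode
selection and utility-statement handling are omitted, and the helper name
is made up):

    /* Simplified sketch of the AcquireExecutorLocks() loop */
    static void
    AcquireExecutorLocks_sketch(PlannedStmt *plannedstmt)
    {
        ListCell   *lc;

        foreach(lc, plannedstmt->rtable)
        {
            RangeTblEntry *rte = (RangeTblEntry *) lfirst(lc);

            if (rte->rtekind != RTE_RELATION)
                continue;

            /*
             * LockRelationOid() is where AcceptInvalidationMessages()
             * should run, unless the lock was already taken and marked
             * clear earlier in the transaction, i.e. the acquisition
             * reports LOCKACQUIRE_ALREADY_CLEAR, which appears to be
             * what always happens on this path.
             */
            LockRelationOid(rte->relid, AccessShareLock);  /* lock mode simplified */
        }
    }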

Thanks,
Amit
