Re: Delay locking partitions during query execution

From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: David Rowley <david(dot)rowley(at)2ndquadrant(dot)com>
Cc: PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Delay locking partitions during query execution
Date: 2019-01-03 22:48:34
Message-ID: e4ad1a7c-2d71-7c2e-fb68-2e8a917d83e3@2ndquadrant.com
Lists: pgsql-hackers

On 1/3/19 10:50 PM, David Rowley wrote:
> On Fri, 4 Jan 2019 at 02:40, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com> wrote:
>> I'm a bit confused, because I can't reproduce any such speedup. I've
>> used the attached script that varies the number of partitions (which
>> worked quite nicely in the INSERT thread), but I'm getting results like
>> this:
>>
>> partitions      0    100   1000  10000
>> ---------------------------------------
>> master         49   1214    186     11
>> patched        53   1225    187     11
>>
>> So I don't see any significant speedup, for some reason :-(
>>
>> Before I start digging into this, is there something that needs to be
>> done to enable it?
>
> Thanks for looking at this.
>
> One thing I seem to quite often forget to mention is that I was running with:
>
> plan_cache_mode = force_generic_plan
> max_parallel_workers_per_gather = 0;
>
> Without changing plan_cache_mode, the planner would likely never
> favour a generic plan, since a generic plan will not appear very
> efficient due to the lack of costing for run-time partition pruning.
>
> Also, then with a generic plan, the planner will likely want to build
> a parallel plan since it sees up to 10k partitions that need to be
> scanned. max_parallel_workers_per_gather = 0 puts it right.
>
> (Ideally, the planner would cost run-time pruning, but it's not quite
> so simple for RANGE partitions with non-equality operators. Likely
> we'll want to fix that one day, but that's not for here)
>
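For reference, the settings quoted above can be applied in a psql session along these lines (a sketch; the table and query are the ones used later in this message, and the prepared-statement form is one way to exercise the generic plan path):

```sql
SET plan_cache_mode = force_generic_plan;   -- always pick the generic plan
SET max_parallel_workers_per_gather = 0;    -- suppress parallel plans

PREPARE q (int) AS SELECT * FROM hashp WHERE a = $1;
EXECUTE q (13442);  -- generic plan, so partition pruning happens at run time
```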

Nope, that doesn't seem to make any difference :-( In all cases the
resulting plan (with 10k partitions) looks like this:

test=# explain analyze select * from hashp where a = 13442;
                              QUERY PLAN
-----------------------------------------------------------------------
 Append  (cost=0.00..41.94 rows=13 width=4)
         (actual time=0.018..0.018 rows=0 loops=1)
   ->  Seq Scan on hashp6784  (cost=0.00..41.88 rows=13 width=4)
       (actual time=0.017..0.018 rows=0 loops=1)
         Filter: (a = 13442)
 Planning Time: 75.870 ms
 Execution Time: 0.471 ms
(5 rows)

and it doesn't change (neither the timings nor the plan shape) no matter
how I set any of the GUCs.
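For context, a hash-partitioned table like the one in the plan above could be created along these lines (a sketch only; the actual benchmark script is attached upthread and may differ, and the partition count of 10000 is taken from the test description):

```sql
CREATE TABLE hashp (a int) PARTITION BY HASH (a);

-- 10000 partitions named hashp0 .. hashp9999, matching the
-- hashpNNNN naming seen in the EXPLAIN output above
DO $$
BEGIN
  FOR i IN 0..9999 LOOP
    EXECUTE format(
      'CREATE TABLE hashp%s PARTITION OF hashp
         FOR VALUES WITH (MODULUS 10000, REMAINDER %s)', i, i);
  END LOOP;
END $$;
```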

Furthermore, I've repeatedly run into this issue:

test=# \d hashp
ERROR: unrecognized token: "false"
LINE 2: ...catalog.array_to_string(array(select rolname from pg_catalog...
^
I have no idea why it breaks like this, and it's somewhat random (i.e.
not readily reproducible). But I've only ever seen it with this patch
applied.

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
