Re: [HACKERS] Cached plans and statement generalization

From: Konstantin Knizhnik <k(dot)knizhnik(at)postgrespro(dot)ru>
To: "Yamaji, Ryo" <yamaji(dot)ryo(at)jp(dot)fujitsu(dot)com>
Cc: PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] Cached plans and statement generalization
Date: 2018-08-07 12:36:16
Message-ID: 3a054e51-448e-a66d-074e-099a3f3f62b2@postgrespro.ru
Lists: pgsql-hackers

On 07.08.2018 13:02, Yamaji, Ryo wrote:
>
> I want to confirm one point.
> If I review the autoprepare patch, are you ready to register it at a
> commitfest in the near future? I fear that the autoprepare patch may never be
> registered at a commitfest (for example, because you are too busy) and so
> never be applied to PostgreSQL. If you are not ready to register the patch,
> I would like to register it at the commitfest instead of you.

I have registered the patch for the next commitfest.
For some reason it doesn't find the latest autoprepare-10.patch and
still refers to autoprepare-6.patch as the latest attachment.

>
>
>> I agree it may be more useful to limit the amount of memory used by prepared
>> queries, rather than the number of prepared statements.
>> But that is just more difficult to calculate and maintain (I am not sure
>> that just looking at CacheMemoryContext is enough for it).
>> Also, if the working set of queries (frequently repeated queries) doesn't
>> fit in memory, then autoprepare will be almost useless (because with
>> high probability a prepared query will be thrown away from the cache
>> before it can be reused). So limiting queries from the "application side"
>> seems to be more practical.
> I see. But I fear that the autoprepare process may use an unpredictable
> amount of memory when autoprepare_limit is specified as a number of prepared
> statements. I think there are scenarios where the autoprepare process uses a
> lot of memory (e.g. it needs to prepare many long queries), so that other
> processes (other PostgreSQL backends, or processes outside PostgreSQL)
> cannot get memory. I hope a memory limit can be specified in the future.

Right now each prepared statement has two memory contexts: one for the raw
parse tree used as the hash table key and another for the cached plan itself.
Maybe it will be possible to combine them. To calculate the memory consumed
by cached plans, it would be necessary to gather memory usage statistics for
all these contexts (which requires traversal of all of each context's chunks)
and sum them. That is much more complex and expensive than the current check:
(++autoprepare_cached_plans > autoprepare_limit), although I do not think it
would have a measurable impact on performance...
Maybe there is some faster way to estimate the memory consumed by prepared
statements.

So, the current autoprepare_limit limits the number of autoprepared
statements and prevents the memory overflow that would be caused by
executing a large number of different statements.
The question is whether we need a more precise mechanism which takes into
account the difference between small and large queries. A simple query can
certainly require 10-100 times less memory than a complex one. But memory
contexts themselves (even with a small block size) somehow minimize the
difference in memory footprint between queries, because of chunked
allocation.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
