RE: [HACKERS] Cached plans and statement generalization

From: "Yamaji, Ryo" <yamaji(dot)ryo(at)jp(dot)fujitsu(dot)com>
To: Konstantin Knizhnik <k(dot)knizhnik(at)postgrespro(dot)ru>
Cc: PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>
Subject: RE: [HACKERS] Cached plans and statement generalization
Date: 2018-08-22 04:54:18
Message-ID: 9A6E5062D5D4DB458C80C2B2920BD71D5C6B5C@g01jpexmbkw23
Lists: pgsql-hackers

On Tuesday, August 7, 2018 at 0:36 AM, Konstantin Knizhnik wrote:

> I have registered the patch for the next commitfest.
> For some reason it doesn't find the latest autoprepare-10.patch and still
> refers to autoprepare-6.patch as the latest attachment.

I am sorry for the long delay in my response.
I have confirmed it and signed up as a reviewer for the patch.
I am going to review it.

> Right now each prepared statement has two memory contexts: one for the raw
> parse tree used as the hash table key and another for the cached plan itself.
> Maybe it will be possible to combine them. To calculate the memory consumed
> by cached plans, it would be necessary to calculate memory usage statistics
> for all these contexts (which requires traversal of all of a context's chunks)
> and sum them. That is much more complex and expensive than the current check:
> (++autoprepare_cached_plans > autoprepare_limit), although I do not think
> that it will have a measurable impact on performance...
> Maybe there should be some faster way to estimate the memory consumed by
> prepared statements.
>
> So, the current autoprepare_limit allows limiting the number of autoprepared
> statements and prevents memory overflow caused by the execution of a large
> number of different statements.
> The question is whether we need a more precise mechanism that takes into
> account the difference between small and large queries. A simple query can
> certainly require 10-100 times less memory than a complex query. But memory
> contexts themselves (even with a small block size) somehow minimize the
> difference in memory footprint between different queries, because of chunked
> allocation.

Thank you for explaining the implementation and its problems.
When actually designing a system, in many cases we estimate the amount of memory
that will be used and provision memory accordingly. Therefore, as long as we can
estimate memory usage under either pattern (a limit on the amount of memory used
by prepared queries, or a limit on the number of prepared statements), I think
there is no problem with the current implementation.
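For what it is worth, here is a minimal standalone sketch of the trade-off being
discussed. It is not the actual PostgreSQL code: FakeContext, context_total and
the two check functions are made-up illustrations of how a count-based limit is a
single increment and comparison, while a byte-based limit has to walk both
contexts of every cached statement and sum their allocations.

/*
 * Illustrative sketch only (not the real PostgreSQL data structures):
 * compares the O(1) count-based limit with a memory-based limit that
 * must traverse every context of every cached statement.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

typedef struct FakeContext          /* hypothetical stand-in for a memory context */
{
    size_t              allocated;  /* bytes currently allocated in this context */
    struct FakeContext *firstchild;
    struct FakeContext *nextchild;
} FakeContext;

/* Recursively sum the memory of a context and all of its children. */
static size_t
context_total(const FakeContext *cxt)
{
    size_t      total = cxt->allocated;
    const FakeContext *child;

    for (child = cxt->firstchild; child != NULL; child = child->nextchild)
        total += context_total(child);
    return total;
}

/* Count-based check: one increment and one comparison per new entry. */
static bool
exceeds_count_limit(int *cached_plans, int limit)
{
    return ++(*cached_plans) > limit;
}

/*
 * Memory-based check: walks the parse-tree context and the cached-plan
 * context of every autoprepared statement and sums them, which is far
 * more expensive than the count-based check above.
 */
static bool
exceeds_memory_limit(FakeContext **parse_cxts, FakeContext **plan_cxts,
                     int nstatements, size_t limit_bytes)
{
    size_t      total = 0;
    int         i;

    for (i = 0; i < nstatements; i++)
        total += context_total(parse_cxts[i]) + context_total(plan_cxts[i]);
    return total > limit_bytes;
}

int
main(void)
{
    FakeContext parse = {1024, NULL, NULL};
    FakeContext plan = {8192, NULL, NULL};
    FakeContext *parse_cxts[] = {&parse};
    FakeContext *plan_cxts[] = {&plan};
    int         cached_plans = 0;

    printf("count limit exceeded: %d\n", exceeds_count_limit(&cached_plans, 100));
    printf("memory limit exceeded: %d\n",
           exceeds_memory_limit(parse_cxts, plan_cxts, 1, 4096));
    return 0;
}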

Apart from the patch, if an application could output an estimate of the memory
usage for a query as it is entered, autoprepare could be used more comfortably.
But that sounds difficult...
