From: jian he <jian(dot)universality(at)gmail(dot)com>
To: Ashutosh Bapat <ashutosh(dot)bapat(dot)oss(at)gmail(dot)com>
Cc: Andrey Lepikhov <a(dot)lepikhov(at)postgrespro(dot)ru>, David Rowley <dgrowleyml(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Report planning memory in EXPLAIN ANALYZE
Date: 2023-08-22 07:46:44
Message-ID: CACJufxHjKChgpO=64dWp=PcZ8WeREjNOUDjzxtvJizG6Oe=a0w@mail.gmail.com
Lists: pgsql-hackers
On Mon, Aug 14, 2023 at 3:13 PM Ashutosh Bapat
<ashutosh(dot)bapat(dot)oss(at)gmail(dot)com> wrote:
>
> On Mon, Aug 14, 2023 at 8:22 AM Andrey Lepikhov
> <a(dot)lepikhov(at)postgrespro(dot)ru> wrote:
> >
> > Really, the current approach with the final value of consumed memory
> > smooths out peaks of memory consumption. I recall examples like massive
> > million-element arrays, or reparameterization with many partitions, where
> > the optimizer consumes much additional memory during planning.
> > Ideally, to dive into planner issues, we should have something like the
> > progress reporting in vacuum, reporting memory consumption at each
> > subquery and join level. But that looks like too much for typical queries.
>
> The planner usually finishes within a second. When partitioning is
> involved it might take a few dozen seconds, but that's still within a
> minute, and we are working to reduce it further, to a couple hundred
> milliseconds at most. Tracking memory usage during such a short window may
> not be worth it. The tracking itself might make planning
> inefficient, and we might still miss spikes in memory allocation
> if they are very short-lived. If the planner runs for more than a few
> minutes, maybe we could add some tracking.
>
> --
> Best Wishes,
> Ashutosh Bapat
>
>
Hi. I tested it.
I'm not sure if the following is the desired behavior: first run EXPLAIN,
then run EXPLAIN (SUMMARY ON).
The second time, it reports "Planning Memory: 0 bytes".
regression=# PREPARE q4 AS SELECT 1 AS a;
explain EXECUTE q4;
QUERY PLAN
------------------------------------------
Result (cost=0.00..0.01 rows=1 width=4)
(1 row)
regression=# explain(summary on) EXECUTE q4;
QUERY PLAN
------------------------------------------
Result (cost=0.00..0.01 rows=1 width=4)
Planning Time: 0.009 ms
Planning Memory: 0 bytes
(3 rows)
---------------------------------------------
Previously, if you wanted the stats of a given memory context and its
children, you could only use MemoryContextStatsDetail,
but its output goes only to stderr or LOG_SERVER_ONLY.
Now that MemoryContextMemUsed is exposed, I can do something like:

mem_consumed = MemoryContextMemUsed(CurrentMemoryContext);
/* do stuff */
mem_consumed = MemoryContextMemUsed(CurrentMemoryContext) - mem_consumed;

This will give me the NET memory consumed by the stuff done in between. Is
my understanding correct?