Backend memory dump analysis

From: Vladimir Sitnikov <sitnikov(dot)vladimir(at)gmail(dot)com>
To: PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Backend memory dump analysis
Date: 2018-03-23 16:18:52
Message-ID: CAB=Je-FdtmFZ9y9REHD7VsSrnCkiBhsA4mdsLKSPauwXtQBeNA@mail.gmail.com
Lists: pgsql-hackers

Hi,

I am investigating an out-of-memory case on PostgreSQL 9.6.5, and it
looks like MemoryContextStatsDetail + gdb are the only friends there.
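
For context, the way I trigger the dump is roughly the following gdb
session (12345 is a placeholder for the backend pid; the output goes to
the backend's stderr, i.e. typically the server log):

    $ gdb -p 12345
    (gdb) call MemoryContextStatsDetail(TopMemoryContext, 100)
    (gdb) detach
    (gdb) quit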

MemoryContextStatsDetail does print some info, however it is rarely
possible to associate the used memory with business cases.
For instance:

    CachedPlanSource: 146224 total in 8 blocks; 59768 free (3 chunks); 86456 used
    CachedPlanQuery: 130048 total in 7 blocks; 29952 free (2 chunks); 100096 used

It does look like some 182 KiB have been spent on some SQL, however there is
no clear way to tell which SQL is to blame.

Another case:

    PL/pgSQL function context: 57344 total in 3 blocks; 17200 free (2 chunks); 40144 used

It is not clear what is inside, which "cached plans" are referenced
by that PL/pgSQL context (if any), etc.
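
To illustrate how far the plain-text stats can be pushed, a throwaway
script along these lines (the names are mine, not from any PostgreSQL
tool) can aggregate the per-context numbers, but it still cannot link a
context back to the SQL behind it:

```python
import re

# Matches lines such as:
#   CachedPlanSource: 146224 total in 8 blocks; 59768 free (3 chunks); 86456 used
STATS_RE = re.compile(
    r"(?P<name>[^:]+): (?P<total>\d+) total in (?P<blocks>\d+) blocks?; "
    r"(?P<free>\d+) free \((?P<chunks>\d+) chunks?\); (?P<used>\d+) used"
)

def parse_stats(text):
    """Parse MemoryContextStats-style output into a list of dicts."""
    contexts = []
    for line in text.splitlines():
        m = STATS_RE.search(line)
        if m:
            d = m.groupdict()
            contexts.append({"name": d["name"].strip(),
                             **{k: int(d[k]) for k in
                                ("total", "blocks", "free", "chunks", "used")}})
    return contexts

sample = """\
CachedPlanSource: 146224 total in 8 blocks; 59768 free (3 chunks); 86456 used
CachedPlanQuery: 130048 total in 7 blocks; 29952 free (2 chunks); 100096 used
"""
stats = parse_stats(sample)
print(sum(c["used"] for c in stats))  # 186552 bytes used, i.e. ~182 KiB
```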

It would be great if there was an ability to dump the memory in a
machine-readable format (e.g. Java's HPROF).

Eclipse Memory Analyzer (https://www.eclipse.org/mat/) can visualize Java
memory dumps quite well, and I think the HPROF format is trivial to generate
(the generation is easy; the hard part is parsing the memory contents).
That is, we could get an analysis UI for free if PostgreSQL produced the dump.
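
As a sketch of how little the container format needs (this is based on my
reading of the HPROF binary format description shipped with the JDK, and
the record layout would of course need to be verified against what MAT
actually accepts):

```python
import struct
import time

def hprof_header(id_size=8):
    # Null-terminated format tag, u4 identifier size, u8 timestamp in millis
    ts = int(time.time() * 1000)
    return (b"JAVA PROFILE 1.0.2\0"
            + struct.pack(">I", id_size)
            + struct.pack(">Q", ts))

def hprof_record(tag, body):
    # u1 tag, u4 microseconds since header timestamp, u4 body length, body
    return struct.pack(">BII", tag, 0, len(body)) + body

# Tag 0x01 is a UTF-8 string record: an ID followed by the raw bytes.
# A context name could be emitted this way, for example:
name = b"CachedPlanSource"
dump = hprof_header() + hprof_record(0x01, struct.pack(">Q", 1) + name)
print(len(dump))
```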

Is this something that would be welcome, or not?
Is it something worth including in core?

Vladimir
