Re: Memory Accounting

From: Michael Paquier <michael(at)paquier(dot)xyz>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: Melanie Plageman <melanieplageman(at)gmail(dot)com>, Jeff Davis <pgsql(at)j-davis(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Memory Accounting
Date: 2019-09-24 05:21:40
Message-ID: 20190924052140.GA1982@paquier.xyz
Lists: pgsql-hackers

On Wed, Jul 24, 2019 at 11:52:28PM +0200, Tomas Vondra wrote:
> I think Heikki was asking about places with a lot of sub-contexts,
> which is a completely different issue. It used to be the case that
> some aggregates created a separate context for each group - like
> array_agg. That would make Jeff's approach to accounting rather
> inefficient, because checking how much memory is used would be very
> expensive (having to loop over a large number of contexts).
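
The cost Tomas describes can be sketched as follows. This is not PostgreSQL's actual MemoryContext code (the real struct lives in memnodes.h and has different fields); it is a simplified stand-in showing why a per-check recursive walk is O(n) in the number of sub-contexts, so one context per aggregate group makes every accounting query expensive:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for a memory context node; field names are
 * illustrative, not the real memnodes.h layout. */
typedef struct ContextSketch
{
	size_t		mem_allocated;	/* bytes owned by this context alone */
	struct ContextSketch *firstchild;	/* head of child list */
	struct ContextSketch *nextchild;	/* next sibling */
} ContextSketch;

/*
 * Naive accounting: visit every descendant on each query.  With one
 * context per group (as array_agg once created), each check walks the
 * whole tree, which is the inefficiency being discussed.
 */
static size_t
context_mem_used(const ContextSketch *ctx)
{
	size_t		total = ctx->mem_allocated;
	const ContextSketch *child;

	for (child = ctx->firstchild; child != NULL; child = child->nextchild)
		total += context_mem_used(child);
	return total;
}
```

Jeff's patch avoids this walk for the common case by tracking allocations eagerly in the context itself, which is why the many-sub-contexts scenario is the interesting worst case to benchmark.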

The patch has been marked as ready for committer for a week or so, but
it seems to me that this comment has not been addressed, no? Are we
sure that we want this method if it proves inefficient when there are
many sub-contexts, and shouldn't we at least test such a scenario with
a purpose-built worst-case function?
--
Michael
