Re: Make MemoryContextMemAllocated() more precise

From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Jeff Davis <pgsql(at)j-davis(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>, Tomas Vondra <tomas(dot)vondra(at)postgresql(dot)org>
Subject: Re: Make MemoryContextMemAllocated() more precise
Date: 2020-03-19 18:11:31
Message-ID: 20200319181131.vw7kufl22u24tplw@development
Lists: pgsql-hackers

On Thu, Mar 19, 2020 at 11:44:05AM -0400, Robert Haas wrote:
>On Mon, Mar 16, 2020 at 2:45 PM Jeff Davis <pgsql(at)j-davis(dot)com> wrote:
>> Attached is a patch that makes mem_allocated a method (rather than a
>> field) of MemoryContext, and allows each memory context type to track
>> the memory its own way. They all do the same thing as before
>> (increment/decrement a field), but AllocSet also subtracts out the free
>> space in the current block. For Slab and Generation, we could do
>> something similar, but it's not as much of a problem because there's no
>> doubling of the allocation size.
>>
>> Although I think this still matches the word "allocation" in spirit,
>> it's not technically correct, so feel free to suggest a new name for
>> MemoryContextMemAllocated().
>
>Procedurally, I think that it is highly inappropriate to submit a
>patch two weeks after the start of the final CommitFest and then
>commit it just over 48 hours later without a single endorsement of the
>change from anyone.
>

True.

>Substantively, I think that whether or not this is improvement depends
>considerably on how your OS handles overcommit. I do not have enough
>knowledge to know whether it will be better in general, but would
>welcome opinions from others.
>

I'm not sure overcommit is a major factor here, and if it is, then it's
probably the block-size doubling strategy that's causing the problems.

AFAICS the 2x allocation is the worst case, because it only happens
right after allocating a new block (of twice the size), when the
"utilization" drops from 100% to 50%. But in practice the utilization
will be somewhere in between, averaging around 75%. And we're not
doubling the block size indefinitely - there's an upper limit on block
size, so over time each new block causes a smaller and smaller relative
drop in utilization. So as a context grows, the discrepancy disappears.
And I'd argue the smaller the context, the less of an issue the
overcommit behavior is.

My understanding is that this is really just an accounting issue, where
merely allocating the next block would push the accounted total over the
limit, which I suppose might be a real issue with low work_mem values.

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
