Re: Reducing the chunk header sizes on all memory context types

From: David Rowley <dgrowleyml(at)gmail(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Yura Sokolov <y(dot)sokolov(at)postgrespro(dot)ru>, PostgreSQL Developers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Reducing the chunk header sizes on all memory context types
Date: 2022-07-13 05:20:50
Message-ID: CAApHDvo+R56uR7Hd9d7f6+EKEeXs8azsshrAgL9HOsnj7K4-YA@mail.gmail.com
Lists: pgsql-hackers

On Wed, 13 Jul 2022 at 05:44, Andres Freund <andres(at)anarazel(dot)de> wrote:
> On 2022-07-12 20:22:57 +0300, Yura Sokolov wrote:
> > I don't get, why "large chunk" needs additional fields for size and
> > offset.
> > Large allocation sizes are certainly rounded to page size.
> > And allocations which doesn't fit 1GB we could easily round to 1MB.
> > Then we could simply store `size>>20`.
> > It will limit MaxAllocHugeSize to `(1<<(30+20))-1` - 1PB. Doubtfully we
> > will deal with such huge allocations in the near future.
>
> What would we gain by doing something like this? The storage density loss of
> storing an exact size is smaller than what you propose here.

I do agree that the 16-byte additional header overhead for
allocations >= 1GB is not really worth troubling too much over.
However, if there were some way to make it so we always had an 8-byte
header, it would simplify some of the code in places such as
AllocSetFree(). For example, (ALLOC_BLOCKHDRSZ + hdrsize +
chunksize) could be simplified at compile time if hdrsize were a known
constant.

I did consider that in all cases where the allocation is above
allocChunkLimit, the chunk is put on a dedicated block, and in fact
the blockoffset is always the same for those. I wondered if we
could use the full 60 bits for the chunksize in those cases. The
reason I didn't pursue that is:

#define MaxAllocHugeSize (SIZE_MAX / 2)

That's 63-bits, so 60 isn't enough.

Yeah, we likely could reduce that without upsetting anyone. It feels
like it'll be a while before not being able to allocate a chunk of
memory larger than 1024 petabytes will be an issue, although I do hope
to grow old enough to one day come back here and laugh at that.

David
