Re: Changing types of block and chunk sizes in memory contexts

From: Peter Eisentraut <peter(at)eisentraut(dot)org>
To: Melih Mutlu <m(dot)melihmutlu(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Changing types of block and chunk sizes in memory contexts
Date: 2023-06-28 08:13:38
Message-ID: 84db0a53-7719-8114-5db8-3dac749af985@eisentraut.org
Lists: pgsql-hackers

> In memory contexts, block and chunk sizes are limited by certain upper
> bounds, for example MEMORYCHUNK_MAX_BLOCKOFFSET and
> MEMORYCHUNK_MAX_VALUE. Both values are only 1 less than 1GB.
> This means memory contexts have blocks/chunks with sizes of less than
> 1GB, and such sizes can be stored in 32 bits. Currently the "Size"
> type, which is 64-bit, is used, but a 32-bit integer should be enough
> to store any value less than 1GB.

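For reference, both of the constants mentioned above work out to
0x3FFFFFFF (2^30 - 1), i.e. 1 less than 1GB, so such values do fit in
32 bits. A minimal standalone sketch (the #defines below are
illustrative copies, not the real definitions):

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative copies of the bounds described above (1 less than 1GB). */
    #define MEMORYCHUNK_MAX_BLOCKOFFSET 0x3FFFFFFFUL    /* 2^30 - 1 */
    #define MEMORYCHUNK_MAX_VALUE       0x3FFFFFFFUL    /* 2^30 - 1 */

    int
    main(void)
    {
        /* Both bounds fit in an unsigned 32-bit integer. */
        printf("%d %d\n",
               MEMORYCHUNK_MAX_BLOCKOFFSET <= UINT32_MAX,
               MEMORYCHUNK_MAX_VALUE <= UINT32_MAX);    /* prints "1 1" */
        return 0;
    }
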
size_t (= Size) is the correct type in C to store the size of an object
in memory. This is partially a self-documentation issue: If I see
size_t in a function signature, I know what is intended; if I see
uint32, I have to wonder what the intent was.
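
A minimal sketch of what I mean (hypothetical declarations, not actual
PostgreSQL functions):

    #include <stddef.h>     /* size_t */
    #include <stdint.h>     /* uint32_t */

    /* Hypothetical declarations, for illustration only. */
    extern void *alloc_block(size_t blksize);       /* size_t: obviously the size of an object */
    extern void *alloc_block_alt(uint32_t blksize); /* uint32: a size? a count? an id? unclear */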

You could make an argument that using shorter types would save space in
some internal structs, but then you'd have to show in more detail where
and why that would be beneficial. (But again, self-documentation: if
one were to do that, I would argue for introducing a custom type like
pg_short_size_t.)
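
To make the space argument concrete, here is a sketch assuming one did
introduce such a type (the struct names and the typedef are
hypothetical, not existing PostgreSQL code):

    #include <stddef.h>
    #include <stdint.h>

    typedef uint32_t pg_short_size_t;   /* hypothetical: a size known to be < 1GB */

    /* With 64-bit size_t fields this header is typically 16 bytes ... */
    typedef struct BlockHeaderWide
    {
        size_t          blksize;        /* 8 bytes on 64-bit platforms */
        size_t          freesize;       /* 8 bytes */
    } BlockHeaderWide;

    /* ... while narrowed fields bring it down to 8 bytes, and the typedef
     * name still documents the intent. */
    typedef struct BlockHeaderNarrow
    {
        pg_short_size_t blksize;        /* 4 bytes */
        pg_short_size_t freesize;       /* 4 bytes */
    } BlockHeaderNarrow;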

Absent any strong performance argument, I don't see the benefit of this
change. People might well want to experiment with MEMORYCHUNK_...
settings larger than 1GB.
