Re: copy.c allocation constant

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Bruce Momjian <bruce(at)momjian(dot)us>
Cc: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: copy.c allocation constant
Date: 2018-01-25 00:48:22
Message-ID: 17438.1516841302@sss.pgh.pa.us
Lists: pgsql-hackers

Bruce Momjian <bruce(at)momjian(dot)us> writes:
> On Thu, Jan 25, 2018 at 09:30:54AM +1300, Thomas Munro wrote:
>> On Thu, Jan 25, 2018 at 7:19 AM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>>> My guess is that a fairly common pattern for larger chunks will be to
>>> round the size up to a multiple of 4kB, the usual memory page size.
>>
>> See also this discussion:
>> https://www.postgresql.org/message-id/flat/CAEepm%3D1bRyd%2B_W9eW-QmP1RGP03ti48zgd%3DK11Q6o4edQLgkcg%40mail.gmail.com#CAEepm=1bRyd+_W9eW-QmP1RGP03ti48zgd=K11Q6o4edQLgkcg@mail.gmail.com
>> TL;DR glibc doesn't actually round up like that below 128kB, but many
>> others including FreeBSD, macOS etc round up to various page sizes or
>> size classes including 8kB (!), 512 bytes. I find this a bit
>> frustrating because it means that the most popular libc implementation
>> doesn't have the problem so this kind of thing probably isn't a high
>> priority, but probably on most other Unices (and I have no clue for
>> Windows) including my current favourite we waste a bunch of memory.

> The BSD memory allocator used to allocate in powers of two, and keep the
> header in a separate location. They did this so they could combine two
> free, identically-sized memory blocks into a single one that was double
> the size. I have no idea how it works now.

It seems like there's fairly good reason to suppose that most versions
of malloc are efficient for power-of-2 request sizes. Our big problem
is the overhead that we add ourselves. That, however, we could compensate
for by adjusting request sizes in cases where the request is flexible.
I'd definitely support some sort of memory context API to allow such
adjustment, as we speculated about in the thread Thomas points to above.

regards, tom lane
