Re: Memory Usage and OpenBSD

From: Martijn van Oosterhout <kleptog(at)svana(dot)org>
To: Anton Maksimenkov <anton200(at)gmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
Subject: Re: Memory Usage and OpenBSD
Date: 2010-02-10 07:42:10
Message-ID: 20100210074209.GB18442@svana.org
Lists: pgsql-general

On Tue, Feb 09, 2010 at 08:19:51PM +0500, Anton Maksimenkov wrote:
> It means that on OpenBSD i386 we have about 2.2 GB of virtual address
> space for malloc() and shm*, so postgres will use that space.
>
> But mmap() uses random addresses, so when you get a big chunk of
> memory for shared buffers (say, 2 GB) you may get it somewhere in the
> middle of that virtual space (2.2 GB).

This is essentially the reason why it's not a good idea to use a really
large shared_buffers setting on 32-bit systems: there simply isn't
enough address space to support it.
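
You can see the effect outside of postgres with a minimal sketch like
the one below (illustrative only, not PostgreSQL source; the 2 GB
segment and 512 MB malloc are assumed sizes chosen to match the numbers
above, and kernel limits such as SHMMAX may stop it earlier):

    /* Attach one huge SysV shared memory segment, then see how much
     * contiguous address space is left over for malloc(). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    int main(void)
    {
        size_t seg_size = (size_t) 2 * 1024 * 1024 * 1024;  /* "2G shared buffers" */

        int shmid = shmget(IPC_PRIVATE, seg_size, IPC_CREAT | 0600);
        if (shmid == -1) {
            perror("shmget");               /* often SHMMAX stops us here */
            return 1;
        }

        void *seg = shmat(shmid, NULL, 0);  /* kernel picks the address */
        if (seg == (void *) -1) {
            perror("shmat");                /* no contiguous 2 GB hole */
        } else {
            printf("segment mapped at %p\n", seg);
            /* The heap, stack and future mmap()s must now fit into
             * whatever is left of the ~2.2 GB user address space. */
            void *p = malloc((size_t) 512 * 1024 * 1024);
            printf("malloc(512MB) %s\n", p ? "succeeded" : "failed");
            free(p);
            shmdt(seg);
        }
        shmctl(shmid, IPC_RMID, NULL);
        return 0;
    }

If the segment lands in the middle of the address space, even the
512 MB malloc() can fail although plenty of memory is nominally free.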

> Can anybody briefly explain to me how a single postgres process
> allocates the memory it needs?
> I mean, what is the biggest malloc() it may want? How many such
> chunks? What is the average allocation size?

There's no real maximum, as it depends on the exact usage. In general,
though, postgres tries to stay below the limits set by work_mem and
maintenance_work_mem. Most individual allocations are quite small, but
postgresql has an internal allocator, which means that the system only
sees relatively large allocations. The majority will be on the order of
tens of kilobytes, I suspect.
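
As a rough illustration of that internal-allocator idea (a toy sketch,
not PostgreSQL's actual aset.c memory-context code; the 32 kB block
size and 8-byte rounding are my assumptions):

    /* Small palloc-style requests are carved out of larger blocks,
     * so the OS allocator only ever sees the big chunks. */
    #include <stdio.h>
    #include <stdlib.h>

    #define BLOCK_SIZE (32 * 1024)          /* "tens of kilobytes" */

    typedef struct Block {
        struct Block *next;
        size_t        used;
        char          data[BLOCK_SIZE];
    } Block;

    typedef struct { Block *head; } Arena;

    static void *arena_alloc(Arena *a, size_t size)
    {
        size = (size + 7) & ~(size_t) 7;    /* 8-byte alignment */
        if (size > BLOCK_SIZE)
            return NULL;                    /* oversized requests omitted here */
        if (a->head == NULL || a->head->used + size > BLOCK_SIZE) {
            Block *b = malloc(sizeof(Block));   /* the only call the OS sees */
            if (b == NULL)
                return NULL;
            b->next = a->head;
            b->used = 0;
            a->head = b;
        }
        void *p = a->head->data + a->head->used;
        a->head->used += size;
        return p;
    }

    static void arena_free_all(Arena *a)    /* like resetting a context */
    {
        while (a->head) {
            Block *next = a->head->next;
            free(a->head);
            a->head = next;
        }
    }

    int main(void)
    {
        Arena a = { NULL };
        for (int i = 0; i < 1000; i++)      /* a thousand small requests... */
            arena_alloc(&a, 64);
        arena_free_all(&a);                 /* ...but only two 32 kB malloc()s */
        return 0;
    }

So from the kernel's point of view the process makes a modest number of
block-sized requests, not thousands of tiny ones.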

Have a nice day,
--
Martijn van Oosterhout <kleptog(at)svana(dot)org> http://svana.org/kleptog/
> Please line up in a tree and maintain the heap invariant while
> boarding. Thank you for flying nlogn airlines.
