Re: Memory Usage and OpenBSD

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Martijn van Oosterhout <kleptog(at)svana(dot)org>
Cc: Anton Maksimenkov <anton200(at)gmail(dot)com>, pgsql-general(at)postgresql(dot)org, Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
Subject: Re: Memory Usage and OpenBSD
Date: 2010-02-10 15:24:56
Message-ID: 16336.1265815496@sss.pgh.pa.us
Lists: pgsql-general

Martijn van Oosterhout <kleptog(at)svana(dot)org> writes:
> On Tue, Feb 09, 2010 at 08:19:51PM +0500, Anton Maksimenkov wrote:
>> Can anybody briefly explain how a single postgres process allocates
>> memory for its needs?

> There's no real maximum, as it depends on the exact usage. However, in
> general postgres tries to keep below the values in work_mem and
> maintenance_work_mem. Most of the allocations are quite small, but
> postgresql has an internal allocator, which means that the system only
> sees relatively large allocations. The majority will be on the order of
> tens of kilobytes, I suspect.

IIRC, the complaint that started this thread was about a VACUUM command
failing. Plain VACUUM will in fact start out by trying to acquire a
single chunk of size maintenance_work_mem. (On a small table it might
not be so greedy, but on a large table it will do that.) So you
probably shouldn't ever try to set that value as large as 1GB if you're
working in a 32-bit address space. You could maybe do it if you've kept
shared_buffers small, but that seems like the wrong performance tradeoff
in most cases ...
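
[Editor's note: for illustration, a minimal sketch of the tradeoff described
above; the table name and the 256MB figure are hypothetical. On a 32-bit
build the process address space is at most a few GB, and shared_buffers plus
VACUUM's single maintenance_work_mem-sized chunk must both fit inside it, so
a session-level override well below 1GB is the safer choice:

    -- hypothetical session-level override for a 32-bit server;
    -- keeps plain VACUUM's one large allocation comfortably below
    -- the address-space ceiling even with shared_buffers mapped in
    SET maintenance_work_mem = '256MB';
    VACUUM big_table;
]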

regards, tom lane
