Re: Reading data in bulk - help?

From: William Yu <wyu(at)talisys(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: Reading data in bulk - help?
Date: 2003-09-10 22:08:48
Message-ID: bjo7db$1v53$1@news.hub.org
Lists: pgsql-performance

> 1) Memory - clumsily adjusted shared_buffer - tried three values: 64,
> 128, 256 with no discernible change in performance. Also adjusted,
> clumsily, effective_cache_size to 1000, 2000, 4000 - with no discernible
> change in performance. I looked at the Admin manual and googled around
> for how to set these values and I confess I'm clueless here. I have no
> idea how many kernel disk page buffers are used nor do I understand what
> the "shared memory buffers" are used for (although the postgresql.conf
> file hints that it's for communication between multiple connections).
> Any advice or pointers to articles/docs is appreciated.

The standard procedure is to give roughly 1/4 of your memory to shared_buffers.
Since the setting is counted in 8KB buffer pages, the easiest way to calculate
it is RAM in MB / 32 * 1000. E.g. if you have 256MB of memory, your
shared_buffers should be 256 / 32 * 1000 = 8000 (about 64MB).
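As a rough sketch (the numbers here are just that rule of thumb applied to a
hypothetical 256MB box, not measured values), the postgresql.conf line would
look something like:

    shared_buffers = 8000    # 8000 x 8KB pages = ~64MB, roughly 1/4 of 256MB RAM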

The memory you have left over should be "marked" as OS cache via the
effective_cache_size setting. I usually just multiply the shared_buffers value
by 3 on systems with a lot of memory. With less memory, the OS/Postgres/etc.
take up a larger percentage of it, so a multiplier of 2 or 2.5 would be more
accurate.
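Continuing the same hypothetical 256MB example with a 2.5x multiplier (again,
illustrative numbers only, and the setting is also in 8KB pages):

    effective_cache_size = 20000    # 2.5 x 8000 = 20000 pages, ~160MB expected in the OS cache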
