Re: Large database help

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: xbdelacour(at)yahoo(dot)com
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: Large database help
Date: 2001-04-22 22:08:43
Message-ID: 6598.987977323@sss.pgh.pa.us
Lists: pgsql-admin

xbdelacour(at)yahoo(dot)com writes:
> Hi everyone, I'm more or less new to PostgreSQL and am trying to setup a
> rather large database for a data analysis application. Data is collected
> and dropped into a single table, which will become ~20GB. Analysis happens
> on a Windows client (over a network) that queries the data in chunks across
> parallel connections. I'm running the DB on a dual 1 GHz P3 w/ 512 MB of
> memory under Red Hat 6 (6.0, I think).

> I am setting 'echo 402653184 >/proc/sys/kernel/shmmax', which is being
> reflected in top. I also specify '-B 48000' when starting postmaster.

Hm. 384M shared memory request on a 512M machine. I'll bet that the
kernel is deciding you don't need all that stuff in RAM, and is swapping
out chunks of the shared memory region to make room for processes and
its own disk buffering activity. Try a more reasonable -B setting, say
a quarter of your physical RAM at most; there is no benefit from a -B
value large enough to risk getting swapped out. Moreover, any physical RAM that
does happen to be free will be exploited by the kernel for disk
buffering at its level, so you aren't really saving any I/O by
increasing Postgres' internal buffering.
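The sizing arithmetic above can be sketched in shell. This is just an
illustration of the suggestion, not anything Postgres-specific: it assumes
the default 8 kB buffer size per -B unit and takes the 512 MB RAM figure
from the original message.

```shell
#!/bin/sh
# Each -B unit is one shared buffer, 8192 bytes by default.
ram_mb=512                       # physical RAM reported in the original message
buf_bytes=$((48000 * 8192))      # what -B 48000 actually requests
echo "current -B 48000 uses $((buf_bytes / 1024 / 1024)) MB of shared memory"

# A quarter of physical RAM, as suggested:
target_mb=$((ram_mb / 4))
target_buffers=$((target_mb * 1024 * 1024 / 8192))
echo "suggested -B $target_buffers (~${target_mb} MB)"
```

At 8 kB per buffer, -B 48000 asks for roughly 375 MB, which together with
the 384 MB shmmax setting leaves very little of the 512 MB machine for
backends and the kernel's own disk cache; a quarter of RAM works out to
-B 16384.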

BTW, what Postgres version are you using?

regards, tom lane
