Large database help

From: xbdelacour(at)yahoo(dot)com
To: pgsql-admin(at)postgresql(dot)org
Subject: Large database help
Date: 2001-04-22 21:12:20
Message-ID: 5.0.2.1.0.20010422165107.02b46ec0@209.61.155.192
Lists: pgsql-admin

Hi everyone, I'm more or less new to PostgreSQL and am trying to set up a
rather large database for a data analysis application. Data is collected
and dropped into a single table, which will grow to roughly 20 GB. Analysis
happens on a Windows client (over a network) that queries the data in
chunks across parallel connections. I'm running the DB on a dual 1 GHz P3
with 512 MB of memory under Redhat 6 (.0 I think). A single index exists
that gives the best case for lookups, and the table is clustered on this
index.
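
For reference, the clustering step described above would look something like
this (table and index names here are hypothetical, and the syntax shown is
the old pre-8.0 form of CLUSTER):

```shell
# Hypothetical names; CLUSTER physically reorders the table to match the
# index, so range scans on the indexed columns read contiguous disk pages.
# Note that CLUSTER of this era takes an exclusive lock and the ordering
# is not maintained for rows inserted afterwards.
psql -d analysis -c 'CLUSTER lookup_idx ON samples;'
```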

My problem is this: during the query process the hard drive is being hit
excessively, while the CPUs are idling at 50% (numbers from the Linux
command top), and this is slowing things down dramatically since the
process is waiting on the hard disk. How do I get the database to be
completely resident in memory so that selects don't cause any disk
activity? And how do I pin down exactly why the hard disk is being
accessed?
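
One way to pin it down is to watch block I/O while the parallel selects are
running. A minimal sketch with standard Linux tools (iostat assumes the
sysstat package is installed):

```shell
# vmstat's 'bi' column shows blocks read in from disk per interval.
# Sustained nonzero 'bi' during a pure SELECT workload means reads are
# missing both PostgreSQL's shared buffers and the OS page cache.
vmstat 1 5

# Per-device breakdown, if sysstat is installed:
iostat -d 1 5
```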

I am setting 'echo 402653184 >/proc/sys/kernel/shmmax', which is reflected
in top. I also specify '-B 48000' when starting postmaster. My test DB is
only 86 MB, so in theory the disk has no business being active once the
data is read into memory, unless I perform a write operation. What am I
missing?

I appreciate any help anyone could give me.

-Xavier


Responses

Tom Lane 2001-04-22 22:08:43 Re: Large database help