
Large database help

From: xbdelacour(at)yahoo(dot)com
To: pgsql-admin(at)postgresql(dot)org
Subject: Large database help
Date: 2001-04-22 21:12:20
Message-ID:
Lists: pgsql-admin
Hi everyone, I'm more or less new to PostgreSQL and am trying to set up a 
rather large database for a data analysis application. Data is collected 
and dropped into a single table, which will become ~20GB. Analysis happens 
on a Windows client (over a network) that queries the data in chunks across 
parallel connections. I'm running the DB on a dual gig p3 w/ 512 memory 
under Redhat 6 (.0 I think). A single index exists that gives the best case 
for lookups, and the table is clustered against this index.

My problem is this: during the query process the hard drive is being hit 
excessively, while the CPUs are idling at 50% (numbers from the Linux command 
top), and this is bringing down the speed pretty dramatically, since the 
process is waiting on the hard disk. How do I get the database to be 
completely resident in memory, so that selects don't cause disk activity? 
How do I pin down exactly why the hard disk is being accessed?
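
(For reference, the disk activity described above can be watched while a query 
runs; a sketch assuming the standard procps/sysstat tools are installed, and 
using a hypothetical table and column name inside psql:)

```shell
# Watch block I/O while the query runs; a steady nonzero "bi" column
# means reads are actually going to disk rather than the cache.
vmstat 1

# Per-device read statistics, if the sysstat package is installed:
iostat 2

# Inside psql, confirm the planner is using the clustered index
# (table and column names here are hypothetical):
#   EXPLAIN SELECT * FROM samples WHERE sample_id BETWEEN 1000 AND 2000;
```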

I am setting 'echo 402653184 >/proc/sys/kernel/shmmax', which is reflected in 
top. I also specify '-B 48000' when starting postmaster. My test DB is only 
86MB, so in theory the disk has no business being active once the data is 
read into memory, unless I perform a write operation. What am I missing?
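
(The numbers above can be sanity-checked; a sketch of the arithmetic, 
assuming PostgreSQL's default 8 kB buffer page size:)

```shell
# -B 48000 buffers at the default 8192-byte page size:
echo $((48000 * 8192))        # 393216000 bytes (~375 MB) of shared buffers

# shmmax value set above, for comparison (384 MB):
echo $((384 * 1024 * 1024))   # 402653184 bytes
```

So the requested shared buffers fit under the shmmax limit with a little 
headroom.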

I appreciate any help anyone could give me.



pgsql-admin by date

Next: From: Tom Lane, Date: 2001-04-22 22:08:43
Subject: Re: Large database help
Previous: From: Peter Eisentraut, Date: 2001-04-22 20:53:42
Subject: RE: Re: Install Problems
