From: "Mag Gam" <magawake(at)gmail(dot)com>
To: "PostgreSQL List - Novice" <pgsql-novice(at)postgresql(dot)org>
Subject: tuning question
Date: 2008-12-12 01:09:32
Message-ID: 1cbd6f830812111709u67198f90s5e1542ae3c7accc7@mail.gmail.com
Lists: pgsql-novice
Hello All,
Running 8.3.4. My situation is a little unusual: I am running on a
single core with 2GB of memory on Red Hat Linux 5.2. My entire pgsql
installation is about 8GB (compressed) from pg_dump, spread across 6
databases. The data keeps growing, and since I plan to add more fields
to my tables it will increase dramatically.
My goal is to avoid using a lot of memory. My storage is fairly fast
(I can sustain about 250MB/sec), so I would like to lean on I/O instead
of memory, even though I will take a performance hit. Also, is it
possible to make the WAL files (the binary files in pg_xlog) larger? I
am thinking of making them 1GB each instead of many small files, since
my filesystem performs better with large files.
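For reference, a minimal postgresql.conf sketch of the low-memory direction I have in mind (the values are illustrative guesses for a 1-core/2GB box, not tested recommendations):

```ini
# postgresql.conf: illustrative low-memory sketch (values are assumptions)
shared_buffers = 32MB          # small buffer cache; push reads out to the OS/disk
work_mem = 1MB                 # per-sort/per-hash memory; keep small
maintenance_work_mem = 16MB    # used by VACUUM, CREATE INDEX
effective_cache_size = 256MB   # planner hint only; low value favors I/O-oriented plans
checkpoint_segments = 64       # 8.3 setting: allows more 16MB WAL segments between
                               # checkpoints, but does NOT change the segment size
```

As far as I can tell, in a stock 8.3 build the WAL segment size itself is fixed at 16MB at compile time; checkpoint_segments only controls how many of those segments accumulate, not how big each one is.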
Any ideas?
TIA