| From: | Joe Conway <jconway2(at)home(dot)com> |
|---|---|
| To: | "'pgsql-admin(at)postgresql(dot)org'" <pgsql-admin(at)postgresql(dot)org> |
| Subject: | performance |
| Date: | 2000-04-05 04:51:17 |
| Message-ID: | 01BF9E7F.E867DFA0@JEC-NT1 |
| Lists: | pgsql-admin |
Hello,
I'm currently working with a development database, PostgreSQL 6.5.2 on RedHat 6.1 Linux. There is one fairly large table (currently ~ 1.3 million rows) which will continue to grow at about 500k rows per week (I'm considering various options to periodically archive or reduce the collected data). Is there anything I can do to cache some or all of this table in memory in order to speed queries against it? The physical file is about 130 MB. The server is a dual Pentium Pro 200 with 512 MB of RAM.
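For example, my guess is that the place to start is the shared buffer pool size at postmaster startup, plus an index and fresh statistics on the big table, but I'm not sure those are the right knobs. Something along these lines is what I had in mind (the buffer count, database name, table, and column below are just placeholders, not what I actually run):

```
# Sketch only -- values are guesses for a 512 MB machine with 8 kB pages.
# Restart the postmaster with a larger shared buffer pool (-B is the
# number of 8 kB shared buffers; 16384 would be roughly 128 MB).
# The kernel's SHMMAX limit may need to be raised to allow this much
# shared memory.
postmaster -i -B 16384 -D /usr/local/pgsql/data &

# Make sure the big table has a usable index and up-to-date statistics.
# "mydb", "big_table", and "collected_at" are placeholder names.
psql -c "CREATE INDEX big_table_collected_at_idx ON big_table (collected_at);" mydb
psql -c "VACUUM ANALYZE big_table;" mydb
```

Is that roughly the right direction, or is there something else that would let more of the table stay cached in memory?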
Any suggestions would be appreciated.
Joe Conway
p.s. I tried searching the archives first, but even the simplest searches returned no results.