Re: [HACKERS] Postgres Performance

From: Michael Simms <grim(at)argh(dot)demon(dot)co(dot)uk>
To: ramirez(at)doc(dot)mssm(dot)edu (Edwin Ramirez)
Cc: pgsql-hackers(at)postgreSQL(dot)org (PostgreSQL-development)
Subject: Re: [HACKERS] Postgres Performance
Date: 1999-09-08 21:41:04
Message-ID: 199909082141.WAA00576@argh.demon.co.uk
Lists: pgsql-hackers

>
> If I do a large search, the first time is about three times slower than
> any subsequent overlapping (same-data) searches. I would like to always
> get the higher performance.
>
> How are the buffers that I specify to the postmaster used?
> Will increasing this number improve things?
>
> The issue that I am encountering is that no matter how much memory I
> have on a computer, the performance is not improving. I am willing to
> fund a project to implement a postgres-specific, user-configurable
> cache.
>
> Any ideas?
> -Edwin S. Ramirez-

I think the fact that you are seeing an improvement already shows a good level
of caching.

What happens the first time is that the data must be read off the disc. After
that, the data comes from memory IF it is cached. A disc read will always be
slower than a memory read with current disc technology.

I would imagine (I'm not an expert, but this is what I have observed) that if
you drastically increase the number of shared memory buffers, and then when you
start up your front end simply do a select * from the tables, it may even keep
them all in memory from the start.
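
Something like this, just as a rough sketch -- the buffer count, data directory,
database and table names are made up, and you may need to raise the kernel's
shared memory limit (SHMMAX) before the postmaster will accept a large -B:

  # start the postmaster with a much larger shared buffer pool
  # (-B is the number of 8k shared buffers, so 2048 is roughly 16MB)
  postmaster -B 2048 -D /usr/local/pgsql/data &

  # then warm the cache by scanning each big table once from the front end
  # (a count(*) touches the same pages without shipping every row back)
  psql -c "SELECT * FROM yourtable;" yourdb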

M Simms
