Re: [GENERAL] identifying performance hits: how to ???

From: The Hermit Hacker <scrappy(at)hub(dot)org>
To: Karl DeBisschop <kdebisschop(at)range(dot)infoplease(dot)com>
Cc: pgsql-general(at)postgreSQL(dot)org, rwagner(at)siac(dot)com, squires(at)com(dot)net
Subject: Re: [GENERAL] identifying performance hits: how to ???
Date: 2000-01-12 17:57:36
Message-ID: Pine.BSF.4.21.0001121356380.46499-100000@thelab.hub.org
Lists: pgsql-general

On Wed, 12 Jan 2000, Karl DeBisschop wrote:

>
> > Anyone know if read performance on a postgres database decreases at
> > an increasing rate, as the number of stored records increase?
> >
> > It seems as if I'm missing something fundamental... maybe I am... is
> > some kind of database cleanup necessary? With less than ten
> > records, the grid populates very quickly. Beyond that, performance
> > slows to a crawl, until it _seems_ that every new record doubles the
> > time needed to retrieve...
>
> Are you using indexes?
>
> Are you vacuuming?
>
> I may have incorrectly inferred table sizes and such, but the behavior
> you describe seems odd - we typically work with hundreds of thousands
> of entries in our tables with good results (though things do slow down
> for the one DB we use with tens of millions of entries).

An example of a large database that people can see in action... the search
engine we are using on PostgreSQL, when fully populated, works out to
around 6 million records... and is reasonably quick...
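
A minimal sketch of the indexing and vacuuming advice above, assuming a
hypothetical table grid_data with a frequently queried column record_id
(both names are illustrative, not taken from this thread):

    -- An index on the column used in lookups lets reads avoid a full
    -- sequential scan over the table.
    CREATE INDEX grid_data_record_id_idx ON grid_data (record_id);

    -- VACUUM reclaims the dead space left behind by UPDATEs and DELETEs;
    -- ANALYZE refreshes the planner statistics so the index gets used.
    VACUUM ANALYZE grid_data;

    -- EXPLAIN shows whether a given query actually uses the index.
    EXPLAIN SELECT * FROM grid_data WHERE record_id = 42;

Without periodic vacuuming, tables that see heavy updates accumulate dead
tuples and the planner works from stale statistics, so reads slow down as
the table grows.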
