identifying performance hits: how to ???

From: "Robert Wagner" <rwagner(at)siac(dot)com>
To: pgsql-general(at)postgreSQL(dot)org
Cc: squires(at)com(dot)net
Subject: identifying performance hits: how to ???
Date: 2000-01-12 15:37:13
Message-ID: 85256864.00542073.00@SIAC_NOTES_001.wisdom.siac.com
Lists: pgsql-general

Hello All,

Does anyone know whether read performance on a Postgres database degrades at
an increasing rate as the number of stored records grows?

This is a TCL app, which makes entries into a single table and from time
to time repopulates a grid control. It must rebuild the data in the grid
control, because other clients have since written to the same table.

It seems as if I'm missing something fundamental... maybe I am... is some
kind of database cleanup necessary? With fewer than ten records, the grid
populates very quickly. Beyond that, performance slows to a crawl, until
it _seems_ that every new record doubles the time needed to retrieve the
records. My quick fix was to cache the data locally in TCL and only
retrieve changed data from the database. But now, as client demand
increases, along with the number of clients making changes to the table,
I'm hitting the bottleneck again.
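To make the question concrete, here is the sort of maintenance and indexing
I suspect might be involved (the table and column names below are invented
for illustration; my real schema differs):

```sql
-- Hypothetical table "grid_data" with a "last_changed" timestamp column.
-- Without an index, every grid refresh is a sequential scan of the
-- whole table; EXPLAIN shows which plan the optimizer picks:
EXPLAIN SELECT * FROM grid_data WHERE last_changed > '2000-01-12 15:00';

-- An index on the filter column lets Postgres skip unchanged rows:
CREATE INDEX grid_data_changed_idx ON grid_data (last_changed);

-- Updated and deleted rows leave dead tuples behind until a vacuum;
-- running this periodically also refreshes the planner's statistics:
VACUUM ANALYZE grid_data;
```

Is periodically running something like that VACUUM the "cleanup" I'm
missing, or is there more to it?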

The client asked me yesterday to start evaluating "more mainstream"
databases, which means that they're pissed off. Postgres is fun to work
with, but it's hard to learn about, and hard to justify to clients.

By the way, I have experimented with populating the exact same grid control
on Windows NT, using MS Access (TCL runs just about anywhere). The grid
seemed to populate almost instantaneously. So, is the bottleneck in Unix
or in Postgres, and does anybody know how to make it faster?

Cheers,
Rob
