Re: Critical performance problems on large databases

From: Andrew Sullivan <andrew(at)libertyrms(dot)info>
To: PostgreSQL general list <pgsql-general(at)postgresql(dot)org>
Subject: Re: Critical performance problems on large databases
Date: 2002-04-11 14:26:41
Message-ID: 20020411102641.E19037@mail.libertyrms.com
Lists: pgsql-general

On Thu, Apr 11, 2002 at 02:05:07PM +0100, Nigel J. Andrews wrote:

> On 11 Apr 2002, Bill Gribble wrote:

> > Then the biggest slowdown is count(*), which we have to do in order to
> > fake up the scrollbar (so we know what proportion of the data has been
> > scrolled through).

> seqscan. However, I can see that adding triggers to insert etc. in
> a table and maintain counts is going to hit the data loading time
> but is going to speed the count accesses tremendously.

I suspect the trigger would be pretty miserable for inserts. But why
use a trigger? If you need just pretty-close results, you could have
a process that runs (say) every 10 minutes and updates a table of
stats. (Naturally, if you need really accurate numbers, that's no
help.) But if the idea is just a "record _r_ of _nnnn_ results" sort
of message, you can do what large databases have been doing for
years: if a query returns more than some small-ish number of records,
look in the stats table. Then you say "record _r_ of approximately
_nnnn_ results".
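
Something along these lines, for instance (table and column names
here are purely illustrative, and the refresh would be driven by
cron or the like):

    -- illustrative stats table, refreshed periodically
    CREATE TABLE approx_counts (
        table_name   text PRIMARY KEY,
        approx_rows  bigint,
        updated_at   timestamp
    );

    -- seed it once per table of interest
    INSERT INTO approx_counts VALUES ('big_table', 0, now());

    -- refresh step, run every 10 minutes or so
    UPDATE approx_counts
       SET approx_rows = (SELECT count(*) FROM big_table),
           updated_at  = now()
     WHERE table_name = 'big_table';

    -- the application reads the cached figure instead of count(*)
    SELECT approx_rows
      FROM approx_counts
     WHERE table_name = 'big_table';

(If an even rougher figure will do, pg_class.reltuples, which is
maintained by VACUUM/ANALYZE, is another source of an approximate
row count.)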

A

--
----
Andrew Sullivan 87 Mowat Avenue
Liberty RMS Toronto, Ontario Canada
<andrew(at)libertyrms(dot)info> M6K 3E3
+1 416 646 3304 x110
