From: Peter Eisentraut <peter_e(at)gmx(dot)net>
To: carl garland <carlhgarland(at)hotmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Problems with Large Databases
Date: 2000-06-04 01:46:32
Message-ID: Pine.LNX.4.21.0006040332020.348-100000@localhost.localdomain
Lists: pgsql-general
carl garland writes:
> This didn't really answer the initial question: how long does it take
> to locate a table in a database with 1,000,000+ tables, and where and
> when do these lookups occur?
In the current system there are several places that do sequential scans on
pg_class (the system catalog that holds information on tables and indexes).
Most of these look quite unnecessary and are on the hit list, but with the
stock sources you will definitely have performance problems.
Assuming that all of these are converted to index scans eventually, you
can test the performance yourself by creating a 1000000+ row table,
defining an index and querying it a bunch of times. At that point I
believe the file system will be at least as much of a problem.
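Such a test can be sketched directly in SQL. The sketch below uses
generate_series() and EXPLAIN ANALYZE, which are available in later
PostgreSQL releases; the table, column, and index names are illustrative:

```sql
-- Build a catalog-sized test table (1,000,000+ rows), mimicking
-- lookups by name the way pg_class is searched by relname.
CREATE TABLE lookup_test (relname text);
INSERT INTO lookup_test
    SELECT 'table_' || i FROM generate_series(1, 1000000) AS i;

-- Define an index on the lookup column.
CREATE INDEX lookup_test_relname_idx ON lookup_test (relname);
ANALYZE lookup_test;

-- Query it a bunch of times; EXPLAIN ANALYZE shows whether the
-- planner chose an index scan and how long the lookup took.
EXPLAIN ANALYZE
    SELECT * FROM lookup_test WHERE relname = 'table_500000';
```

With the index in place the lookup should be an index scan over a few
pages rather than a sequential scan of the whole table, which is the
difference at issue for a catalog of this size.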
--
Peter Eisentraut              Sernanders väg 10:115
peter_e(at)gmx(dot)net        75262 Uppsala
http://yi.org/peter-e/        Sweden