From: "Brett W(dot) McCoy" <bmccoy(at)lan2wan(dot)com>
To: Dannie M Stanley <dan(at)spinweb(dot)net>
Cc: pgsql-general(at)postgreSQL(dot)org
Subject: Re: [GENERAL] Maximum Records
Date: 1999-05-27 12:04:48
Message-ID: Pine.LNX.4.04.9905270759130.23702-100000@dragosani.lan2wan.com
Lists: pgsql-general
On Wed, 26 May 1999, Dannie M Stanley wrote:
> I am currently considering using PostgreSQL to facilitate a large database.
> I need to know if there is a maximum number of records. Or if there is a
> point at which the performance is severely cut. The database that I need to
> implement will start at around 30,000 records. If anyone could point me to
> performance comparisons or documentation on such information I would
> appreciate it.
You should have no problems. I have a database with 7 or 8 tables, most
of which have over a million rows; one table has over 2 million rows.
Performance is pretty good, especially if you index things properly. I
have a web interface to this data, via PHP3, and it takes only a few
minutes to run the queries from the web page and generate statistical
data. It's only running under Linux on a PPro 200 with 128 megs of RAM.
Brett W. McCoy
http://www.lan2wan.com/~bmccoy/
-----------------------------------------------------------------------
Dinner suggestion #302 (Hacker's De-lite):
1 tin imported Brisling sardines in tomato sauce
1 pouch Chocolate Malt Carnation Instant Breakfast
1 carton milk