Queries against multi-million record tables.

From: "Michael Miyabara-McCaskey" <mykarz(at)miyabara(dot)com>
To: <pgsql-sql(at)postgresql(dot)org>
Subject: Queries against multi-million record tables.
Date: 2001-01-27 21:45:25
Message-ID: 002201c088aa$75865690$c700a8c0@ncc1701e
Lists: pgsql-sql

Hello all,

I am in the midst of taking a development DB into production, but the
performance has not been very good so far.

The DB is a decision-support system that currently has queries against tables
with up to 20 million records (3GB table sizes), and at this point about a
25GB DB in total. (Later down the road, up to 60 million records and a DB of
up to 150GB are planned.)

As I understand it, Oracle has a product called "Parallel Query", which
splits the queried table into ten pieces, scans each piece on as many CPUs as
are available, and then merges the results back together.
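
To illustrate what I mean, that kind of split could be done by hand with
disjoint key ranges, each piece run as a separate query on its own connection
(the "orders" table and its integer "id" column below are just made up for
the example, not from my schema):

  -- Hypothetical table and columns. Each statement can be issued on its
  -- own connection, so each gets its own backend process and the OS can
  -- schedule them on different CPUs.
  SELECT sum(amount) FROM orders WHERE id >=        0 AND id <  5000000;
  SELECT sum(amount) FROM orders WHERE id >=  5000000 AND id < 10000000;
  SELECT sum(amount) FROM orders WHERE id >= 10000000 AND id < 15000000;
  SELECT sum(amount) FROM orders WHERE id >= 15000000 AND id < 20000000;
  -- The four partial sums are then added up on the client side to get
  -- the full-table result.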

So my question is: based upon the messages I have read here, it appears that
PostgreSQL does not parallelize a single query across multiple CPUs; each
query runs in a single backend process, and the operating system simply
schedules those processes across the available processors.

Therefore, what are some good ways to handle such large amounts of
information using PostgreSQL?

Michael Miyabara-McCaskey
Email: mykarz(at)miyabara(dot)com
Web: http://www.miyabara.com/mykarz/
Mobile: +1 408 504 9014
