Re: PGSQL with high number of database rows?

From: Listmail <lists(at)peufeu(dot)com>
To: "Tim Perrett" <hello(at)timperrett(dot)com>, "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: PGSQL with high number of database rows?
Date: 2007-04-03 17:44:58
Message-ID: op.tp7x88h8zcizji@apollo13
Lists: pgsql-general


> Are there any implications with possibly doing this? Will PG handle it?
> Are there real-world systems using PG that have a massive amount of data
> in them?

It's not how much data you have, it's how you query it.

You can have a table with 1000 rows and be dead slow, if those rows hold
big TEXT data and you seq-scan them in their entirety on every webpage hit
your server gets...
You can have a terabyte table with billions of rows, and be fast if you
know what you're doing and have proper indexes.
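
For instance, EXPLAIN shows the difference right away (the table and
column names below are made up, and the plan output is abbreviated):

    -- Without an index: every row is read on every hit
    EXPLAIN SELECT * FROM articles WHERE author = 'tim';
    --   Seq Scan on articles  (cost=0.00..25000.00 rows=10 width=800)
    --     Filter: (author = 'tim'::text)

    -- With an index, only the matching rows are touched
    CREATE INDEX articles_author_idx ON articles (author);
    EXPLAIN SELECT * FROM articles WHERE author = 'tim';
    --   Index Scan using articles_author_idx on articles  (cost=0.00..8.30 ..)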

Learning all this is very interesting. MySQL always seemed hostile to me,
but Postgres is friendly: it has helpful error messages, the docs are
great, and the developer team is really nice.

The size of your data has no importance (unless your disk is full), but
the size of your working set does.

So, if you intend to query your data from a website, for instance, where
users search the data through forms, you will need to index it properly so
that each query only has to explore a small section of your data set in
order to be fast.
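
As a sketch (the schema here is made up), that usually means an index
matching the form's search criteria:

    -- Hypothetical orders table, searched by customer and date range
    CREATE INDEX orders_cust_date_idx ON orders (customer_id, order_date);

    -- This query can then hit the index instead of scanning the table:
    SELECT *
      FROM orders
     WHERE customer_id = 42
       AND order_date >= '2007-01-01';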

If you intend to scan entire tables to generate reports or statistics,
you will be more interested in knowing if the size of your RAM is larger
or smaller than your data set, and about your disk throughput.
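
To get an idea which case you're in, compare your table sizes to your RAM,
for example (the table name is made up):

    -- On-disk size of a table, human-readable
    SELECT pg_size_pretty(pg_relation_size('orders'));

If that is well below your RAM, repeated scans will mostly run from cache;
if it is much bigger, you are limited by disk throughput.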

So, what is your application?
