| From: | Rod Taylor <pg(at)rbt(dot)ca> |
|---|---|
| To: | "Alex J(dot) Avriette" <alex(at)posixnap(dot)net> |
| Cc: | Andreas Pflug <pgadmin(at)pse-consulting(dot)de>, PostgreSQL Development <pgsql-hackers(at)postgresql(dot)org> |
| Subject: | Re: RFC: Very large scale postgres support |
| Date: | 2004-02-09 01:01:38 |
| Message-ID: | 1076288497.48991.102.camel@jester |
| Lists: | pgsql-hackers |
> The fact is, there are situations in which such extreme traffic is
> warranted. My concern is that I am not able to use postgres in such
> situations because it cannot scale to that level. I feel that it would
> be possible to reach that level with support in the postmaster for
> replication.
Replication won't help if those are mostly write transactions. If only a
small percentage (even 1% would be challenging) is INSERTs, UPDATEs, or
DELETEs, master/slave replication might get you somewhere.
Otherwise you're going to need to partition the data into smaller,
easily managed pieces -- which of course requires that the data can be
horizontally partitioned.
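A minimal sketch of what application-level horizontal partitioning could look like, assuming a stable hash on some partition key (all shard names here are hypothetical; in practice each entry would be a connection string for a separate PostgreSQL instance):

```python
import hashlib

# Hypothetical shard endpoints; in a real deployment these would be
# DSNs for separate PostgreSQL instances, each holding one slice.
SHARDS = ["pg-shard-0", "pg-shard-1", "pg-shard-2", "pg-shard-3"]

def shard_for(key: str) -> str:
    """Map a partition key (e.g. a customer id) to one shard.

    A stable hash keeps every row for a given key on the same
    instance, so single-key reads and writes touch only one box.
    """
    digest = hashlib.sha1(key.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# Every query for the same key routes to the same shard:
assert shard_for("42") == shard_for("42")
```

The point is simply that once data is split this way, each instance sees only its fraction of the write load, which is what replication alone cannot buy you.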
Anyway, if you want a sane answer, we need more information: about the
data (is it partitionable?), the schema type, the queries producing the
load (simple or complex), acceptable data delays (does a new insert need
to be immediately visible?), etc.
Dealing with a hundred thousand queries/second isn't just challenging
for PostgreSQL; you will be hard pressed to find hardware that can push
that much data around even before adding the overhead of the database
itself.
| From | Date | Subject | |
|---|---|---|---|
| Next Message | Alex J. Avriette | 2004-02-09 02:01:10 | Re: RFC: Very large scale postgres support |
| Previous Message | Alex J. Avriette | 2004-02-08 23:42:38 | Re: RFC: Security documentation |