From: "Max" <maxdl(at)adelphia(dot)net>
To: <pgsql-general(at)postgresql(dot)org>
Subject: Re: Splitting queries across servers
Date: 2005-01-29 18:04:20
Message-ID: CDEJIJMPHJJNHGFMBPBKCEPMFGAA.maxdl@adelphia.net
Lists: pgsql-general
> -----Original Message-----
> From: pgsql-general-owner(at)postgresql(dot)org
> [mailto:pgsql-general-owner(at)postgresql(dot)org] On Behalf Of Dann Corbit
> Sent: Friday, January 28, 2005 12:01 PM
> To: William Yu; pgsql-general(at)postgresql(dot)org
> Subject: Re: [GENERAL] Splitting queries across servers
>
>
> Suppose that you currently need 16 GB to cache everything now.
> I would install (perhaps) 32 GB ram for the initial configuration.
>
Good point: I can add memory as I need it.
> The price of memory drops exponentially, and so waiting for the price to
> drop will give a much lower expense for the cost of the RAM.
>
> The reason to double the ram is the expense of upgrading in terms of
> labor and downtime for the computer. That can be very significant. So
> if we double the ram, that should give one or (hopefully) two years
> safety margin.
Downtime is a big deal; however, I am planning to use replication with
pgpool.
> If the database is expected to grow exponentially fast, then that is
> another issue. In such a case, if it can be cost justified, put on the
> largest memory volume that is possible given your financial limitations.
We can't really forecast the growth curve. My bet is that we have a
short-term (6 months) need of 32 GB, so I'll just double that, which should
give us visibility for about a year. I hope!
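To make that guess concrete, here is a rough back-of-the-envelope sketch.
The 32 GB starting point and the doubled 64 GB capacity come from the
discussion above; the monthly growth rates are purely hypothetical
assumptions, since (as noted) the real curve can't be forecast:

```python
# Rough capacity-planning sketch: starting from an assumed short-term
# working-set need of 32 GB, estimate how many months a doubled (64 GB)
# configuration lasts under a few hypothetical monthly growth rates.

def months_until_exhausted(start_gb, capacity_gb, monthly_growth):
    """Count months until the working set first exceeds capacity."""
    months = 0
    size = start_gb
    while size <= capacity_gb:
        size *= 1 + monthly_growth
        months += 1
    return months

if __name__ == "__main__":
    for rate in (0.05, 0.10, 0.20):  # hypothetical 5%, 10%, 20% growth/month
        m = months_until_exhausted(32, 64, rate)
        print(f"{rate:.0%}/month growth: ~{m} months of headroom")
```

At a hypothetical 10% monthly growth, doubling from 32 GB to 64 GB buys
roughly 8 months, which is in the same ballpark as the "about a year"
estimate; faster growth eats the margin much sooner.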
I just realized I never asked this question: what is the maximum size of a
PostgreSQL database? Is it effectively unlimited?
Max