> Mark Woodward wrote:
> >> I'm not sure that I agree. At least in my experience, I wouldn't have
> >> more than one installation of PostgreSQL on a production machine. It is
> >> potentially problematic.
> I agree with you for production environments, but for development, test,
> support (and pre-sales) machines there are reasonable requirements for
Oh, sure, for dev, marketing, etc., it doesn't matter. When you have to
manage a thousand systems, standards save tons of work.
> Even if you have only one installation - something to tell you *where*
> the binaries are installed is convenient - as there are quite a few
> common locations (e.g. packages installing in /usr or /usr/local, source
> builds in /usr/local/pgsql or /opt/pgsql). I've seen many *uncommon*
> variants: (e.g. /usr/local/postgresql, /usr/local/postgresql-<version>,
> /usr/local/pgsql/<version>, ...).
> Admittedly, given that the binaries are likely to be in the
> cluster-owner's default PATH, it is not as hard to find them as the data
> directory. However, this all seems to be a matter of convenience, since
> (for many *nix platforms) two simple searches will give you most of what
> is needed:
> $ locate postmaster
> $ locate pg_hba.conf
That's not the issue.
I find it frustrating sometimes, because when I describe one scenario,
people debate it using other scenarios. Maybe I lack the communication
skills to convey the problem accurately.
Let's say you are an admin at XYZ Services Corp. You have 20 data centers
worldwide, and in each data center you have 10 to 1000 PostgreSQL servers.
Through your VPN you can access any machine in any data center by a
simple IP address.
One of your databases crashed because it ran out of disk space (someone
forgot to check the free space often enough). The CIO, rightfully, now
requires a weekly database free-space report. From this report you track
trends and so on.
Now, if there were a standard file from which you could "see" what
databases are installed and running on a system, you could write something
like:
$ scp $HOST:/usr/local/pgsql/etc/pg_clusters.conf $HOST.conf
$ ssh $HOST "df" > $HOST.df
$ rptdbfree.pl $HOST.conf $HOST.df
Here "rptdbfree.pl" is a Perl script that parses pg_clusters.conf and
extracts the volumes on which the databases reside; the "df" output has
the free-space information for each volume.
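The gist of such a report script can be sketched in shell. This is only a
sketch under assumptions: pg_clusters.conf does not exist today, so I am
assuming a hypothetical one-cluster-per-line format of "<name> <port>
<data_directory>", and "df -P" (POSIX) output with the mount point in the
sixth column.

```shell
# report_free CONF DF: for each cluster in the hypothetical
# pg_clusters.conf, find the df line whose mount point is the longest
# prefix of the cluster's data directory, and print the free space there.
report_free() {
    conf=$1; dfout=$2
    while read -r name port datadir; do
        case $name in ''|\#*) continue ;; esac   # skip blanks/comments
        awk -v dir="$datadir" -v name="$name" '
            # longest-prefix match of mount point ($6) against the datadir
            NR > 1 && index(dir, $6) == 1 && length($6) > best {
                best = length($6); avail = $4; mount = $6
            }
            END { if (best) printf "%s: %sK free on %s\n", name, avail, mount }
        ' "$dfout"
    done < "$conf"
}
```

Run against a saved $HOST.conf and $HOST.df pair it prints one free-space
line per cluster, which is the raw material for the weekly report.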
You could run this overnight to find the state of all your databases.
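The overnight sweep is just the three commands above in a loop over your
hosts. A minimal sketch, assuming a hosts.list file (one hostname per
line) and the hypothetical pg_clusters.conf location used above:

```shell
# sweep HOSTS: pull each host's cluster list and disk usage down locally,
# producing a $HOST.conf / $HOST.df pair per machine for the report script.
sweep() {
    while read -r HOST; do
        scp "$HOST:/usr/local/pgsql/etc/pg_clusters.conf" "$HOST.conf"
        ssh "$HOST" df -P > "$HOST.df"
    done < "$1"
}
```

Dropped into cron, this gives you a nightly snapshot of every data center
without touching the machines by hand.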
Now say you are the same admin at the same XYZ Corp. An electrician pulls
the breaker in the data center and your systems go down (this actually
happened at one installation I worked on). A couple of the admins in
charge of some of the boxes are on vacation and "accidentally" forgot to
bring their cell phones.
A few of the systems didn't come up correctly. You need to find the
correct databases. Unfortunately there are more database cluster
directories than there should be, and the admin hadn't yet documented
which was which. You don't even know how to test which of them are live.
Your site is down, you are very stressed, and you are cursing the guy who
didn't write this stuff down. Since there is no facility to bring up
multiple PG clusters, there is no standard to follow.
Wouldn't it be nice to be able to do "pg_ctl startall"? Or better yet,
just have this in the system startup?
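A "startall" could be a thin wrapper over the existing pg_ctl. Again a
sketch only, assuming the same hypothetical "<name> <port>
<data_directory>" pg_clusters.conf format:

```shell
# startall CONF: start every cluster listed in the hypothetical
# pg_clusters.conf, one pg_ctl invocation per line.
startall() {
    while read -r name port datadir; do
        case $name in ''|\#*) continue ;; esac   # skip blanks/comments
        echo "starting cluster $name (port $port)"
        # -D: data directory, -o: pass the port to the postmaster, -w: wait
        pg_ctl -D "$datadir" -o "-p $port" -w start
    done < "$1"
}
```

The same file drives a "stopall" and, crucially for the blackout scenario,
tells the stand-in admin exactly which data directories are real.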
I'm not saying that we abandon how it is currently done, I'm just
suggesting that we provide the facilities to help enterprise solutions.