On Thu, Sep 8, 2011 at 10:03 PM, Magnus Hagander <magnus(at)hagander(dot)net> wrote:
>> Would there be a way to prevent this abhorrent scenario from coming
>> into existence?
> There are plenty of clustering products out there that are really
> designed for one thing primarily, and that's dealing with this kind of
Wouldn't those products exist to *allow* you to set up an environment
like this safely?
I think what Thom is saying is that it would be nice if we could
notice that the situation looks bad and *stop* the user from doing
this at all.
We could do that easily if we were willing to trade off some
convenience for users who don't have shared storage by just removing
the code for determining if there's a stale lock file.
Also, if the shared filesystem happened to have a working lock server
and we used the right file-locking API, we would be able to notice
that an apparently stale lock file is nonetheless locked by another
postgres instance. There was some talk about using one of the locking
APIs a while back.