Re: Is PostgreSQL ready for mission critical applications?

From: Jochen Topf <pgsql-general(at)mail(dot)remote(dot)org>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Is PostgreSQL ready for mission critical applications?
Date: 1999-11-23 07:59:30
Message-ID: 19991123085929.A2440@eldorado.remote.org
Lists: pgsql-general

Kane Tao <death(at)solaris1(dot)mysolution(dot)com> wrote:
: The reason why opinions are so varied has a lot to do with the expertise of
: each person in relation to PostgreSQL and Linux. Often problems that are
: considered simple to resolve by some are very difficult for others. And
: sometimes problems are caused by actions that are done out of inexperience
: with the system, like cancelling certain operations in progress etc...
: You probably would not be able to determine reliability from opinions. The
: thing is, PostgreSQL is extremely reliable if you know what you are doing and
: know how to handle/get around any bugs.

Sorry, this is simply not true. We are talking about reliability here, not
about some features that might be hard for the inexperienced user to find or
something like that. For instance, I had to fight with PostgreSQL and Perl to
get NOTIFY to work. It might be difficult to get this to work because of the
lack of documentation or bugs in the way it is implemented, but I got it to
work. This is the kind of thing a beginner stumbles over and, if not
persistent enough, will label as a bug, although it might be only the
documentation that is buggy, or his understanding of the workings of the
database that is not good enough.
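
For the record, here is roughly what I ended up with; a minimal sketch using
DBI and DBD::Pg, where the database name 'test' and the channel name 'logins'
are just examples:

    #!/usr/bin/perl -w
    use strict;
    use DBI;

    # Connect; 'test' is a placeholder database name.
    my $dbh = DBI->connect("dbi:Pg:dbname=test", "", "",
                           { RaiseError => 1, AutoCommit => 1 });

    # Register interest in the 'logins' notification channel.
    $dbh->do("LISTEN logins");

    while (1) {
        # DBD::Pg hands out asynchronous notifications through
        # func('pg_notifies'); it returns [name, backend_pid] or
        # undef if nothing is pending.
        while (my $n = $dbh->func('pg_notifies')) {
            my ($name, $pid) = @$n;
            print "NOTIFY '$name' from backend $pid\n";
        }
        sleep 1;    # crude polling, but good enough for a demo
    }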

But I am not imagining the random "I have rolled back the current transaction
and am going to terminate your database system connection and exit." messages.
If there is a way for a normal user to kill the database, it is not reliable.
Maybe, if I knew more about PostgreSQL, I would be able to avoid triggering
the bugs, but that is not the point. The bugs should not be there, or there
should at least be a meaningful error message saying: "I am sorry Dave, I can't
let you do this, because it would trigger a bug." I have seen random crashes
without any indication of the problem, and I have seen strange messages
hinting at a problem deep down in the btree implementation or something like
that. And the worst thing is that these bugs are not repeatable in a way
that would let someone start debugging them or at least work around them.
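
The best one can do from the application side, as far as I can tell, is to
detect the dead connection and reconnect; a sketch of what I mean, again with
DBI, where the retry policy and connection parameters are invented:

    use strict;
    use DBI;

    # Return a live handle, reconnecting if the backend went away
    # (e.g. after one of those "terminate your connection" messages).
    sub get_dbh {
        my ($old) = @_;
        return $old if defined $old && $old->ping;   # still alive?
        for my $try (1 .. 5) {
            my $dbh = DBI->connect("dbi:Pg:dbname=test", "", "",
                                   { RaiseError => 0, PrintError => 1 });
            return $dbh if $dbh;
            sleep 2 ** $try;    # back off before the next attempt
        }
        die "database is gone and will not come back\n";
    }

But that only papers over the problem, it does not fix it.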

To be fair, I have never lost any data (or had it corrupted) that was
already *in* the database, although there is one unresolved case which might
have been a database corruption but was probably an application error. But I
have lost data because the application wasn't able to put it into the
database in the first place, since the database was not accessible. That is
probably an application error too, because the application only buffered data
in memory and not on disk in case of a database failure. I thought that would
be enough, because databases are supposed to be more reliable than simple
filesystems...
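
In hindsight, the application should have spooled to disk whenever an insert
failed and replayed the spool later. Roughly like this, where the spool file
name, table and column are invented:

    use strict;
    use DBI;

    # Try the insert; on failure, append the record to a disk spool
    # so it can be replayed once the database is back.
    sub insert_or_spool {
        my ($dbh, $line) = @_;
        my $ok = eval {
            $dbh->do("INSERT INTO radlog (entry) VALUES (?)",
                     undef, $line);
            1;
        };
        unless ($ok) {
            open my $spool, '>>', '/var/spool/radlog.pending'
                or die "cannot even spool to disk: $!\n";
            print $spool "$line\n";
            close $spool;
        }
    }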

: Looking at some of the other posts about reliability... the number of records
: in a database will mainly determine the ability of a database to maintain
: performance at larger file/index sizes. It does not really impact
: stability. Stability is mainly affected by the number of
: reads/updates/inserts that are performed. Usually you want to look at large
: user loads, large transaction loads and large numbers of
: updates/inserts/deletes to gauge reliability. I haven't seen anyone post
: saying that they are running a system that does this... perhaps I just
: missed the post.

While this is generally true, a huge database can have an impact on
stability. For instance, a very small memory leak will not show up in small
databases but might show up in big ones, triggering a bug. Or an index grows
over some bound and a hash file has to be enlarged, or whatever. And there
are some problems of this kind in PostgreSQL. I am logging all logins and
logouts from a RADIUS server into PostgreSQL, and after it had run well for
several months, it slowed to a crawl and VACUUM wouldn't work anymore. So,
yes, I do have a lot of inserts, although about 6000 inserts a day and a
total of a few hundred thousand records is not really much.
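
For anyone trying to reproduce this: the setup is nothing exotic, and the
nightly maintenance amounts to no more than something like this (database and
table names are placeholders):

    use strict;
    use DBI;

    # Nightly maintenance, run from cron. VACUUM cannot run inside a
    # transaction block, hence AutoCommit => 1.
    my $dbh = DBI->connect("dbi:Pg:dbname=radius", "", "",
                           { RaiseError => 1, AutoCommit => 1 });

    # Reclaim space from dead rows and refresh planner statistics.
    $dbh->do("VACUUM ANALYZE radlog");

    $dbh->disconnect;

This is the part that stopped working after a few months.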

My question from an earlier posting is still unanswered: does anybody here
who reported PostgreSQL to be very stable use advanced features like pl/pgsql
procedures, triggers, rules and notifies? Let's have a show of hands. I would
really like to know why I am the only one having problems. :-) Although it
might be that, as this is a PostgreSQL mailing list, most of the readers are
people who are happy with PostgreSQL, because all the others have left and
are on an Oracle list now. :-)
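
To be concrete, by "advanced features" I mean things on the level of this
little trigger-plus-NOTIFY combination; a sketch where the table and channel
names are invented (trigger functions return 'opaque' in current releases):

    use strict;
    use DBI;

    my $dbh = DBI->connect("dbi:Pg:dbname=radius", "", "",
                           { RaiseError => 1 });

    # A pl/pgsql trigger function that fires a NOTIFY after each insert.
    $dbh->do(q{
        CREATE FUNCTION notify_login() RETURNS opaque AS '
        BEGIN
            NOTIFY logins;
            RETURN NEW;
        END;
        ' LANGUAGE 'plpgsql'
    });

    # Call it for every row inserted into the (hypothetical) log table.
    $dbh->do(q{
        CREATE TRIGGER radlog_notify AFTER INSERT ON radlog
            FOR EACH ROW EXECUTE PROCEDURE notify_login()
    });

    $dbh->disconnect;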

I would really, really like PostgreSQL to be stable and usable for mission
critical things, because it has some very nice features, is easy to set up
and easy to maintain, and is generally a lot better than all the other
databases I know, were it not for the problems described above. I hope that
my criticism here is not perceived as PostgreSQL bashing but as an attempt to
understand why so many people are happy with PostgreSQL and I am not.

Jochen
--
Jochen Topf - jochen(at)remote(dot)org - http://www.remote.org/jochen/
