On Thu, 26 Jun 2008, Holger Hoffstaette wrote:
> How do large databases treat mass updates? AFAIK both DB2 and Oracle use
> MVCC (maybe a different kind?) as well
An intro to the other approaches used by Oracle and DB2 (not MVCC) is at
(a URL which I really need to shorten one day).
> Are there no options (algorithms) for adaptively choosing different
> update strategies that do not incur the full MVCC overhead?
If you stare at the big picture of PostgreSQL's design, you might notice
that it usually aims to do things one way and get that implementation
right for the database's intended audience. That intended audience cares
about data integrity and correctness and is willing to suffer the overhead
that goes along with operating that way. There are few "I don't care about
reliability here so long as it's fast" switches you can flip, and not
having duplicate code paths to support them helps keep the code simpler
and therefore more reliable.
This whole area is one of those good/fast/cheap trios. If you want good
transaction guarantees on updates, you either get the hardware and
settings right to handle that (!cheap), or it's slow. The idea of
providing a !good (but fast and cheap) option for updates might have some
theoretical value, but I think you'd find it hard to gather enough support
to get work done on it, given the other things developer time is being
spent on right now.
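To make the overhead being asked about concrete: under MVCC, an UPDATE never
overwrites a row in place. It writes an entirely new row version and leaves
the old one behind as a dead tuple for VACUUM to reclaim later, so a mass
update roughly doubles the row versions on disk before cleanup. Here's a toy
Python sketch of that bookkeeping (the class and field names are my own
illustration, not PostgreSQL's actual tuple format):

```python
# Toy model of MVCC row versioning: rows carry xmin (creating
# transaction) and xmax (expiring transaction, None if live).
# An UPDATE expires the old version and appends a new one.

class MVCCTable:
    def __init__(self):
        self.rows = []      # each row: {"data": ..., "xmin": int, "xmax": int|None}
        self.next_xid = 1   # simplistic transaction-id counter

    def insert(self, data):
        self.rows.append({"data": data, "xmin": self.next_xid, "xmax": None})
        self.next_xid += 1

    def update(self, match, new_data):
        xid = self.next_xid
        self.next_xid += 1
        # Iterate over a snapshot so the versions we append aren't revisited.
        for row in list(self.rows):
            if row["xmax"] is None and match(row["data"]):
                row["xmax"] = xid                        # expire old version
                self.rows.append({"data": dict(new_data),  # write new version
                                  "xmin": xid, "xmax": None})

    def dead_tuples(self):
        return sum(1 for r in self.rows if r["xmax"] is not None)

    def vacuum(self):
        # VACUUM's job in miniature: reclaim expired versions.
        self.rows = [r for r in self.rows if r["xmax"] is None]


t = MVCCTable()
for i in range(1000):
    t.insert({"id": i, "val": 0})

t.update(lambda d: True, {"val": 1})   # mass update touches every row
print(len(t.rows))         # 2000 row versions now exist on "disk"
print(t.dead_tuples())     # 1000 dead tuples awaiting VACUUM
t.vacuum()
print(len(t.rows))         # back to 1000 after cleanup
```

That doubling of writes, plus the later VACUUM pass, is the price of the
snapshot isolation guarantees the earlier paragraph talks about: readers
never block writers, because the old version stays visible until it's safe
to remove.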
* Greg Smith gsmith(at)gregsmith(dot)com http://www.gregsmith.com Baltimore, MD