> -----Original Message-----
> From: Bruno Wolff III [mailto:bruno(at)wolff(dot)to]
> Sent: Wednesday, April 21, 2004 11:19 AM
> To: Philipp Buehler
> Cc: pgsql-general(at)postgresql(dot)org
> Subject: Re: [GENERAL] 7.3.4 on Linux: UPDATE .. foo=foo+1
> degrades massivly over time
> On Wed, Apr 21, 2004 at 19:52:15 +0200,
> Philipp Buehler <pb-pgsql-g(at)mlsub(dot)buehler(dot)net> wrote:
> > While running
> > UPDATE banner SET counterhalf=counterhalf+1 WHERE BannerID=50
> > several thousand times, the return times degrade (somewhat linear).
> This is to be expected. Postgres uses MVCC and every time you
> do an update a new row is created.
> > A following VACUUM brings back return times to 'start' - but I cannot
> > run VACUUM any other minute (?). And it exactly vacuums as many tuples
> > as I updated.. sure thing:
> Why not? You only have to vacuum this one table. Vacuuming it
> once a minute should be doable.
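For the banner table from the original post, that per-table vacuum is a one-liner; running it once a minute from cron is a reasonable sketch (the database name "mydb" below is an assumption, not from the post):

```sql
-- Vacuums only the one hot table, not the whole database,
-- so it is cheap enough to run every minute.
VACUUM ANALYZE banner;
```

From cron, something like: * * * * * psql -d mydb -c "VACUUM ANALYZE banner"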
Shouldn't the database server be the entity that decides when a vacuum is
needed? Something is very, very strange about the whole PostgreSQL
maintenance model.
Oracle uses MVCC and I do not have to UPDATE STATISTICS constantly to
keep the system from going into the toilet.
Also, I should be able to do an update on every row in a database table
without causing severe problems. Every other database system I know of
does not have this problem.
If I have a million row table with a column called is_current, and I do
UPDATE tname SET is_current = 0;
Horrible things happen: under MVCC every one of the million rows gets a
new version, so the table roughly doubles in size and stays bloated until
a VACUUM reclaims the dead tuples.
Just an idea:
Why not recognize that an UPDATE will modify more rows than can
reasonably be handled in one pass, and actually break the command into
batches internally?
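Until something like that exists, the batching can be done by hand. A minimal sketch, assuming tname has an indexed integer key "id" (the key column and batch bounds are illustrative assumptions, not from the post):

```sql
-- Hand-rolled batching of "UPDATE tname SET is_current = 0;".
-- Each batch commits, then VACUUM reclaims that batch's dead row
-- versions before the next one, so bloat stays bounded at one batch.
BEGIN;
UPDATE tname SET is_current = 0 WHERE id >= 1 AND id < 100001;
COMMIT;
VACUUM tname;

BEGIN;
UPDATE tname SET is_current = 0 WHERE id >= 100001 AND id < 200001;
COMMIT;
VACUUM tname;
-- ... and so on for the remaining id ranges
```

This trades one long transaction for many short ones, which also keeps the update from holding locks on the whole table at once.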