
Re: 7.3.4 on Linux: UPDATE .. foo=foo+1 degrades massivly

From: Guy Fraser <guy(at)incentre(dot)net>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: 7.3.4 on Linux: UPDATE .. foo=foo+1 degrades massivly
Date: 2004-04-22 15:43:35
Lists: pgsql-general
Dann Corbit wrote:

>>>A following VACUUM brings back return times to 'start' - but I cannot
>>>run VACUUM any other minute (?). And it exactly vacuums as many tuples
>>>as I updated.. sure thing:
>>Why not? You only have to vacuum this one table. Vacuuming it
>>once a minute should be doable.
>Shouldn't the database server be the entity that decides when vacuum is needed?

How is the database supposed to know when you want to purge records?
Once a vacuum has been run, the table cannot be rolled back or time traveled.

>Something is very, very strange about the whole PostgreSQL maintenance
Giving the administrator full control over database management is a good thing.
If you want to write a cron job to purge records automatically, that's your
prerogative. Not everyone needs to, or wants to, constantly purge records.
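As an illustration of that cron approach, a single heavily-updated table can
be vacuumed every minute with the stock vacuumdb utility (the table and
database names below are placeholders, not from the thread):

```shell
# m  h  dom mon dow  command
# Vacuum only the hot "counters" table in database "mydb" once a minute,
# so dead tuples from the constant updates are reclaimed promptly.
*  *  *   *   *   vacuumdb --quiet --table=counters mydb
```

Because vacuumdb only touches the one named table, the rest of the database
is left alone.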

Most of my databases collect information, and changing information in them
would be taboo. Since records are not updated or deleted, there is no reason
to vacuum the collection tables, and they collect between 400 K and 40 M
records per period.

>Oracle uses MVCC and I do not have to UPDATE STATISTICS constantly to
>keep the system from going into the toilet.
Does Oracle purge records automatically?
If so how do you configure it, and what are the default parameters?

>Also, I should be able to do an update on every row in a database table
>without causing severe problems.  Every other database system I know of
>does not have this problem.
>If I have a million row table with a column called is_current, and I do
>UPDATE tname SET is_current = 0;
>Horrible things happen.
>Just an idea:
>Why not recognize that more rows will be modified than the row setting
>can support and actually break the command into batches internally?
It sounds like you have significant hardware limitations.

I have a database I use for traffic analysis that has over 40,000,000
records. I have done some complicated queries with multiple subselects and
joins. The queries take a long time to complete, but they work. I have also
done updates that affected at least 5% of the records, then vacuumed the
table shortly thereafter.
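For what it's worth, the batching Dann asks for can already be approximated
from the client side; a rough sketch, using the table and column names from
his example (the "id" key column is illustrative, and note that VACUUM must
run outside a transaction block, so each batch commits separately):

```sql
-- Update the flag in key-range batches instead of one giant statement,
-- vacuuming between batches so dead tuples are reclaimed as you go.
UPDATE tname SET is_current = 0 WHERE id BETWEEN      1 AND 100000;
VACUUM tname;
UPDATE tname SET is_current = 0 WHERE id BETWEEN 100001 AND 200000;
VACUUM tname;
-- ...and so on until every row has been touched.
```

The table never holds more than one batch's worth of dead tuples at a time,
which is exactly what the proposed internal batching would buy you.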

The bigger the table, the more "scratch pad" disk space and memory you need.
