
Re: Database size Vs performance degradation

From: Miernik <public(at)public(dot)miernik(dot)name>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: Database size Vs performance degradation
Date: 2008-07-30 21:58:34
Message-ID: 20080730215834.5323.0.NOFFLE@turbacz.local
Lists: pgsql-performance
Valentin Bogdanov <valiouk(at)yahoo(dot)co(dot)uk> wrote:
> I am guessing that you are using DELETE to remove the 75,000
> unimportant.  Change your batch job to CREATE a new table consisting
> only of the 5,000 important. You can use "CREATE TABLE table_name AS
> select_statement" command. Then drop the old table. After that you can
> use ALTER TABLE to change the name of the new table to that of the old
> one.
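
For reference, that swap would look roughly like the following. The table
name and the "important" flag column are made up for illustration, and note
that CREATE TABLE AS does not copy indexes, constraints, or permissions, so
those would have to be recreated on the new table:

  BEGIN;
  CREATE TABLE items_new AS
      SELECT * FROM items WHERE important;  -- keep only the ~5,000 rows worth keeping
  DROP TABLE items;
  ALTER TABLE items_new RENAME TO items;
  COMMIT;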

I have a similar, but different, situation: I TRUNCATE a table with
60k rows every hour and refill it with new rows. Would it be better
(concerning bloat) to just DROP the table every hour and recreate it,
rather than to TRUNCATE it? Or does TRUNCATE take care of the bloat as
well as a DROP and CREATE would?
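
For concreteness, the two variants I am comparing look roughly like this
(the table and column names are made up for illustration):

  -- Variant 1: keep the table and empty it each hour
  BEGIN;
  TRUNCATE TABLE hourly_data;
  INSERT INTO hourly_data (id, payload)
      SELECT id, payload FROM incoming_data;
  COMMIT;

  -- Variant 2: drop the table and rebuild it from scratch each hour
  BEGIN;
  DROP TABLE hourly_data;
  CREATE TABLE hourly_data (id integer PRIMARY KEY, payload text);
  INSERT INTO hourly_data (id, payload)
      SELECT id, payload FROM incoming_data;
  COMMIT;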

I am running 8.3.3 in a Xen guest with 48 MB of RAM, so performance matters
a lot.

-- 
Miernik
http://miernik.name/

