Re: VERY slow after many updates

From: Csaba Nagy <nagy(at)ecircle-ag(dot)com>
To: Alex Wang <alex(at)alexwang(dot)com>
Cc: postgres performance list <pgsql-performance(at)postgresql(dot)org>
Subject: Re: VERY slow after many updates
Date: 2005-11-19 11:12:12
Message-ID: 1132398732.10890.480.camel@coppola.muc.ecircle.de
Lists: pgsql-performance

Alex,

I suppose the table is a kind of 'queue' table, where you
insert/get/delete continuously and the life of the records is short.
Keep in mind that in Postgres a delete does not remove the record from
the table's file or from its indexes; it only marks it as dead. So the
table's actual on-disk size can grow quite a lot even if the number of
live records stays small (the more tasks processed, the more dead
tuples). You should therefore vacuum this table very often, so that the
space taken by dead tuples can be reused. I'm not an expert on this,
but it might be good to vacuum after every n deletions, where n is
roughly half the average size of the queue you expect to have. From
time to time you might want to do a vacuum full on it plus a reindex.

Right now I guess a vacuum full + reindex will help you. I think it's
best to do:

vacuum download_queue;
vacuum full download_queue;
reindex table download_queue;

The plain vacuum, which is less obtrusive than the full one, will do at
least some of the work and will pull the needed pages into the
filesystem cache, so that the subsequent vacuum full runs as fast as
possible (vacuum full takes an exclusive lock on the table). At least I
do it this way, with good results, for small queue-like tables...

BTW, I wonder if the download_queue_user_index index is helping you at
all on that table? Do you expect it to grow beyond ~1000 rows?
Otherwise there is no point in indexing it: a sequential scan over a
handful of live rows is cheaper, and every update just adds more dead
index entries.
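
If you're unsure, the statistics views can tell you whether the index
is ever used (a quick sketch; assumes the statistics collector is
enabled, and note the counters accumulate from the last stats reset):

-- idx_scan counts how many times the index was actually scanned;
-- a value stuck at 0 suggests the index is pure overhead
select indexrelname, idx_scan, idx_tup_read
from pg_stat_user_indexes
where relname = 'download_queue';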

HTH,
Csaba.

On Sat, 2005-11-19 at 08:46, Alex Wang wrote:
> I am using PostgreSQL in an embedded system which has only 32 or 64 MB RAM
> (running on a PPC 266 MHz or ARM 266 MHz CPU). I have a table to keep
> download tasks. A daemon keeps polling the table and forks a new process to
> download data from the internet.
>
> Daemon:
> . Check the table every 5 seconds
> . Fork a download process to download if there is a new task
> Download process (there are 5 download processes max):
> . Update the download rate and downloaded size every 3 seconds.
>
> At the beginning, everything is just fine. The speed is good. But after 24
> hours, access to the database becomes very, very slow. Even if I stop all
> processes, restart PostgreSQL and use psql to select data, it is
> still very, very slow (a SQL command takes more than 2 seconds). It is a
> small table. There are only 8 records in the table.
>
> The only way to solve it is to remove the whole database, run initdb, create
> a new database and insert new records. I tried to run vacuumdb but it is
> still very slow.
>
> Any idea to make it faster?
>
> Thanks,
> Alex
>
> --
> Here is the table schema:
> create table download_queue (
>     task_id SERIAL,
>     username varchar(128),
>     pid int,
>     url text,
>     filename varchar(1024),
>     status int,
>     created_time int,
>     started_time int,
>     total_size int8,
>     current_size int8,
>     current_rate int,
>     CONSTRAINT download_queue_pkey PRIMARY KEY(task_id)
> );
> CREATE INDEX download_queue_user_index ON download_queue USING BTREE
> (username);
>
>
