Re: Why so slow?

From: "Bealach-na Bo" <bealach_na_bo(at)hotmail(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: Why so slow?
Date: 2006-04-28 17:31:30
Message-ID: BAY101-F1871B68D4D2782C7F21732ADB20@phx.gbl
Lists: pgsql-performance

> > INFO: index "job_log_id_pkey" now contains 10496152 row versions in
> > 59665 pages
>
>See the 10496152 above? That means you have 10496152 row versions in your
>table. Of those, only 365000 are alive. That means you have basically
>never vacuumed this table before, correct?

Almost correct :| I have vacuumed this table monthly (obviously not nearly
often enough), but it is basically a log of events, of which there are a
very large number each day.

>
>Every update or delete creates a new dead row. count(*) scans the whole
>table, dead rows included. That's why it takes so long, the table acts as
>though it has 10496152 rows when doing sequential scans.

Oh! This explains my problems.
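For anyone following along, the dead-row count can be checked without taking
any heavy locks by running a plain VACUUM VERBOSE, which is where the INFO
lines quoted above come from. A sketch, assuming the table behind the
job_log_id_pkey index is named job_log:

```sql
-- Plain VACUUM only takes a lightweight lock and can run alongside
-- normal reads and writes; VERBOSE reports live vs. removable row
-- versions per table and index.
VACUUM VERBOSE job_log;
```

Note that plain VACUUM marks dead space as reusable but does not shrink the
table's on-disk size, which is why the sequential scans stay slow until the
table is rewritten.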

>
>Do a VACUUM FULL on it or CLUSTER it on an index, both of which will
>empty out all the free space and make it behave as it should. Note: VACUUM
>FULL will take quite a while and requires an exclusive lock on the table.
>CLUSTER also requires an exclusive lock but should be a lot faster for this
>table.
>
>Oh, and get autovacuum setup and working, posthaste.

The exclusive lock is going to cause problems for me since the table is very
active. Is there a way around that, or do I need to schedule downtime for
the application that accesses this table?
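For reference, the two suggested commands would look like this (table name
job_log is an assumption inferred from the job_log_id_pkey index quoted
earlier; CLUSTER shown in the 8.0-era "CLUSTER index ON table" syntax):

```sql
-- Rewrites the table compactly, reclaiming all dead space.
-- Holds an exclusive lock on job_log for the full duration.
VACUUM FULL job_log;

-- Alternative: rewrite the table in index order. Also takes an
-- exclusive lock, but writes a fresh copy of the live rows, which
-- is typically faster when most of the table is dead space.
CLUSTER job_log_id_pkey ON job_log;

-- Refresh planner statistics after either rewrite.
ANALYZE job_log;
```

Either way the lock is held only while the command runs, so a short
maintenance window may be enough.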

I'm running version 8.0. Is there autovacuum for this version too?
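(In 8.0 autovacuum is not yet built into the server; it ships as the
contrib pg_autovacuum daemon, which needs row-level statistics enabled.
A minimal sketch; exact daemon flags vary, so check the
contrib/pg_autovacuum README for your build:)

```
# postgresql.conf -- pg_autovacuum relies on the stats collector
stats_start_collector = true
stats_row_level = true
```

After reloading the server, the pg_autovacuum binary is run as a separate
background process that watches the stats and issues VACUUM/ANALYZE as
thresholds are crossed.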

Regards,
Bealach
