
Re: pg_autovacuum not having enough suction ?

From: "Otto Blomqvist" <o(dot)blomqvist(at)secomintl(dot)com>
To: "Matthew T(dot) O'Connor" <matthew(at)zeut(dot)net>
Cc: <pgsql-performance(at)postgresql(dot)org>,"Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Subject: Re: pg_autovacuum not having enough suction ?
Date: 2005-03-25 18:29:30
Message-ID: 00f201c53168$96903480$e602a8c0@blomqvist
Lists: pgsql-hackers, pgsql-performance
It looks like the reltuples values are screwed up. Even though rows are
constantly being removed from the table, reltuples keeps going up. If I
understand correctly, that also makes the vacuum threshold go up, and we end
up in a vicious circle. Right after pg_autovacuum performed a VACUUM ANALYZE
on the table it actually had 31000 records, but reltuples reports over 100k.
I'm not sure if this means anything, but I thought I would pass it along.
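[Editorial note: contrib's pg_autovacuum decides when to vacuum by comparing the tuple-change counters against a threshold derived from reltuples, roughly threshold = base + scale * reltuples. A minimal Python sketch of the feedback loop described above; the base and scale constants are assumptions matching pg_autovacuum's documented defaults, not this system's actual settings.]

```python
# Hedged sketch of the "vicious circle": if the vacuum threshold scales
# with reltuples, an inflated reltuples raises the threshold, which delays
# the next vacuum. Formula and constants are assumptions based on contrib
# pg_autovacuum's documented behaviour, not taken from this report.

def threshold(reltuples, base=1000, scale=2):
    """Tuple-delta count that must accumulate before a vacuum fires."""
    return base + scale * reltuples

actual_rows = 31_000   # what the table really holds
reported = 113_082     # reltuples from the log below

print(threshold(actual_rows))  # 63000  -> vacuum would fire fairly often
print(threshold(reported))     # 227164 -> vacuum fires far less often
```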

PG version 8.0.0, 31MB tarred DB.

[2005-03-25 09:16:14 EST] INFO:    dbname: testing
[2005-03-25 09:16:14 EST] INFO:      oid: 9383816
[2005-03-25 09:16:14 EST] INFO:      username: (null)
[2005-03-25 09:16:14 EST] INFO:      password: (null)
[2005-03-25 09:16:14 EST] INFO:      conn is null, (not connected)
[2005-03-25 09:16:14 EST] INFO:      default_analyze_threshold: 1000
[2005-03-25 09:16:14 EST] INFO:      default_vacuum_threshold: 500


[2005-03-25 09:05:12 EST] INFO:      table name: secom."public"."file_92"
[2005-03-25 09:05:12 EST] INFO:         relid: 9384219;   relisshared: 0
[2005-03-25 09:05:12 EST] INFO:         reltuples: 49185.000000;  relpages: 8423
[2005-03-25 09:05:12 EST] INFO:         curr_analyze_count: 919274;  curr_vacuum_count: 658176
[2005-03-25 09:05:12 EST] INFO:         last_analyze_count: 899272;  last_vacuum_count: 560541
[2005-03-25 09:05:12 EST] INFO:         analyze_threshold: 49685;  vacuum_threshold: 100674


[2005-03-25 09:10:12 EST] DEBUG:   Performing: VACUUM ANALYZE "public"."file_92"
[2005-03-25 09:10:33 EST] INFO:      table name: secom."public"."file_92"
[2005-03-25 09:10:33 EST] INFO:         relid: 9384219;   relisshared: 0
[2005-03-25 09:10:33 EST] INFO:         reltuples: 113082.000000;  relpages: 6624
[2005-03-25 09:10:33 EST] INFO:         curr_analyze_count: 923820;  curr_vacuum_count: 662699
[2005-03-25 09:10:33 EST] INFO:         last_analyze_count: 923820;  last_vacuum_count: 662699
[2005-03-25 09:10:33 EST] INFO:         analyze_threshold: 113582;  vacuum_threshold: 227164


[2005-03-25 09:16:14 EST] INFO:      table name: secom."public"."file_92"
[2005-03-25 09:16:14 EST] INFO:         relid: 9384219;   relisshared: 0
[2005-03-25 09:16:14 EST] INFO:         reltuples: 113082.000000;  relpages: 6624  <-- Actually has 31k rows
[2005-03-25 09:16:14 EST] INFO:         curr_analyze_count: 923820;  curr_vacuum_count: 662699
[2005-03-25 09:16:14 EST] INFO:         last_analyze_count: 923820;  last_vacuum_count: 662699
[2005-03-25 09:16:14 EST] INFO:         analyze_threshold: 113582;  vacuum_threshold: 227164

DETAIL:  Allocated FSM size: 1000 relations + 2000000 pages = 11784 kB shared memory.
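[Editorial note: the thresholds in the 09:10:33 entry can be reproduced arithmetically. The bases and scales below (analyze: 500 + 1x, vacuum: 1000 + 2x) are inferred from the numbers themselves rather than read from this system's configuration, and they do not line up with the "default_*_threshold" lines above, so treat them as an assumption.]

```python
# Cross-check the logged thresholds against the assumed formula
# threshold = base + scale * reltuples. All reltuples/threshold figures
# come from the log above; bases and scales are inferred guesses.

reltuples = 113082

analyze_threshold = 500 + 1 * reltuples   # assumed analyze base/scale
vacuum_threshold = 1000 + 2 * reltuples   # assumed vacuum base/scale

print(analyze_threshold)  # 113582 -- matches the logged analyze_threshold
print(vacuum_threshold)   # 227164 -- matches the logged vacuum_threshold
```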




----- Original Message -----
From: "Matthew T. O'Connor" <matthew(at)zeut(dot)net>
To: "Otto Blomqvist" <o(dot)blomqvist(at)secomintl(dot)com>;
<pgsql-performance(at)postgresql(dot)org>
Sent: Thursday, March 24, 2005 3:58 PM
Subject: Re: [PERFORM] pg_autovacuum not having enough suction ?


> I would rather keep this on list since other people can chime in.
>
> Otto Blomqvist wrote:
>
> >It does not seem to be a Stats collector problem.
> >
> >   oid   | relname | relnamespace | relpages | relisshared | reltuples | schemaname | n_tup_ins | n_tup_upd | n_tup_del
> >---------+---------+--------------+----------+-------------+-----------+------------+-----------+-----------+-----------
> > 9384219 | file_92 |         2200 |     8423 | f           |     49837 | public     |    158176 |    318527 |    158176
> >(1 row)
> >
> >I insert 50000 records
> >
> >secom=# select createfile_92records(1, 50000);    <--- this is a pg
> >script that inserts records 1 through 50000.
> > createfile_92records
> >----------------------
> >                    0
> >
> >
> >   oid   | relname | relnamespace | relpages | relisshared | reltuples | schemaname | n_tup_ins | n_tup_upd | n_tup_del
> >---------+---------+--------------+----------+-------------+-----------+------------+-----------+-----------+-----------
> > 9384219 | file_92 |         2200 |     8423 | f           |     49837 | public     |    208179 |    318932 |    158377
> >(1 row)
> >
> >reltuples does not change? Hmm. n_tup_ins looks fine.
> >
> >
>
> That is expected, reltuples only gets updated by a vacuum or an analyze.
>
> >This table is basically a queue full of records waiting to get transferred
> >over from our 68030 system to the PG database. The records are then moved
> >into folders (using a trigger) like file_92_myy depending on what month the
> >record was created on the 68030. During normal operations there should not
> >be more than 10 records at a time in the table, although during the course
> >of a day a normal system will get about 50k records. I create 50000 records
> >to simulate incoming traffic, since we don't have much traffic in the test
> >lab.
> >
> >After a few hours we have
> >
> >secom=# select count(*) from file_92;
> > count
> >-------
> > 42072
> >
> >So we have sent over approx 8000 records.
> >
> >   oid   | relname | relnamespace | relpages | relisshared | reltuples | schemaname | n_tup_ins | n_tup_upd | n_tup_del
> >---------+---------+--------------+----------+-------------+-----------+------------+-----------+-----------+-----------
> > 9384219 | file_92 |         2200 |     8423 | f           |     49837 | public     |    208218 |    334521 |    166152
> >(1 row)
> >
> >
> >n_tup_upd: 318932 + (50000-42072)*2 = 334788 pretty close.   (Each record
> >gets updated twice, then moved)
> >n_tup_del: 158377 + (50000-42072) = 166305 pretty close. (there is also
> >minor background traffic going on)
> >
> >
> >I could send over the full vacuum verbose capture as well as the
> >autovacuum capture if that is of interest.
> >
>
> That might be helpful.  I don't see a stats system problem here, but I
> also haven't heard of any autovac problems recently, so this might be
> something new.
>
> Thanks,
>
> Matthew O'Connor
>
>
>
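[Editorial note: the tuple-counter arithmetic in the quoted message can be checked directly. This short Python sketch just redoes the poster's sums; all constants come from the quoted stats, and the *2 reflects the note that each record is updated twice before being moved.]

```python
# Re-derive the expected counter values from the quoted figures.
remaining = 42_072
inserted = 50_000
transferred = inserted - remaining          # 7928 records moved so far

expected_upd = 318_932 + transferred * 2    # logged n_tup_upd: 334521
expected_del = 158_377 + transferred        # logged n_tup_del: 166152

print(expected_upd)  # 334788 -- "pretty close" to the logged 334521
print(expected_del)  # 166305 -- "pretty close" to the logged 166152
```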

