
Re: pg_autovacuum not having enough suction ?

From: "Otto Blomqvist" <o(dot)blomqvist(at)secomintl(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: pg_autovacuum not having enough suction ?
Date: 2005-03-25 20:24:51
Message-ID: d21rqr$2puv$1@news.hub.org
Lists: pgsql-hackers, pgsql-performance
> Was reltuples = 113082 correct right after the vacuum?

No, there were about 31000 rows after the vacuum. I'm no expert, but
tuples = rows, right?
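
For reference, reltuples is the row-count estimate stored in pg_class
(it is only refreshed by VACUUM or ANALYZE), so the two figures can be
compared directly:

```sql
-- reltuples/relpages: the estimates pg_autovacuum reads from pg_class
SELECT reltuples, relpages FROM pg_class WHERE relname = 'file_92';

-- count(*): the exact number of live rows in the table
SELECT count(*) FROM file_92;
```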

This is not a "normal" table though, in the sense that it is only a
temporary holding ground, as I explained earlier. I create 50000 records
and these get sent over from our custom 68030 system to tables like
file_92_myy, depending on the date of the record. A pl/pgsql script is
used as a trigger to move the records after they get data from the 68030.
Don't know if that is of interest or not. I could post the trigger if
you'd like.
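
Since the actual trigger wasn't posted, the following is only a rough
sketch of what such a move-on-update trigger might look like; the column
names "id" and "rec_date" and the exact monthly-table naming are made up
for illustration, and the monthly tables are assumed to already exist
with the same column layout as file_92:

```sql
CREATE OR REPLACE FUNCTION file_92_move() RETURNS trigger AS $$
BEGIN
    -- copy the completed row into its month's table ...
    EXECUTE 'INSERT INTO file_92_m' || to_char(NEW.rec_date, 'MMYY')
         || ' SELECT * FROM file_92 WHERE id = ' || NEW.id;
    -- ... then remove it from the holding table
    DELETE FROM file_92 WHERE id = NEW.id;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER file_92_move AFTER UPDATE ON file_92
    FOR EACH ROW EXECUTE PROCEDURE file_92_move();
```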


"Matthew T. O'Connor" <matthew(at)zeut(dot)net> wrote in message
news:42446A66(dot)6010504(at)zeut(dot)net(dot)(dot)(dot)
> Hmm... the value in reltuples should be accurate after a vacuum (or
> vacuum analyze); if it's not, it's a vacuum bug or something is going on
> that isn't understood.  If you or pg_autovacuum are running plain
> analyze commands, that could explain the invalid reltuples numbers.
>
> Was reltuples = 113082 correct right after the vacuum?
>
> Matthew
>
>
> Otto Blomqvist wrote:
>
> >It looks like the reltuples values are screwed up. Even though rows are
> >constantly being removed from the table, reltuples keeps going up. If I
> >understand correctly, that also makes the vacuum threshold go up and we
> >end up in a vicious circle. Right after pg_autovacuum performed a vacuum
> >analyze on the table it actually had 31000 records, but reltuples
> >reports over 100k. I'm not sure if this means anything, but I thought I
> >would pass it along.
> >
> >PG version 8.0.0, 31 MB tarred DB.
> >
> >[2005-03-25 09:16:14 EST] INFO:    dbname: testing
> >[2005-03-25 09:16:14 EST] INFO:      oid: 9383816
> >[2005-03-25 09:16:14 EST] INFO:      username: (null)
> >[2005-03-25 09:16:14 EST] INFO:      password: (null)
> >[2005-03-25 09:16:14 EST] INFO:      conn is null, (not connected)
> >[2005-03-25 09:16:14 EST] INFO:      default_analyze_threshold: 1000
> >[2005-03-25 09:16:14 EST] INFO:      default_vacuum_threshold: 500
> >
> >
> >[2005-03-25 09:05:12 EST] INFO:      table name: secom."public"."file_92"
> >[2005-03-25 09:05:12 EST] INFO:         relid: 9384219;   relisshared: 0
> >[2005-03-25 09:05:12 EST] INFO:         reltuples: 49185.000000;   relpages: 8423
> >[2005-03-25 09:05:12 EST] INFO:         curr_analyze_count: 919274;   curr_vacuum_count: 658176
> >[2005-03-25 09:05:12 EST] INFO:         last_analyze_count: 899272;   last_vacuum_count: 560541
> >[2005-03-25 09:05:12 EST] INFO:         analyze_threshold: 49685;   vacuum_threshold: 100674
> >
> >
> >[2005-03-25 09:10:12 EST] DEBUG:   Performing: VACUUM ANALYZE "public"."file_92"
> >[2005-03-25 09:10:33 EST] INFO:      table name: secom."public"."file_92"
> >[2005-03-25 09:10:33 EST] INFO:         relid: 9384219;   relisshared: 0
> >[2005-03-25 09:10:33 EST] INFO:         reltuples: 113082.000000;   relpages: 6624
> >[2005-03-25 09:10:33 EST] INFO:         curr_analyze_count: 923820;   curr_vacuum_count: 662699
> >[2005-03-25 09:10:33 EST] INFO:         last_analyze_count: 923820;   last_vacuum_count: 662699
> >[2005-03-25 09:10:33 EST] INFO:         analyze_threshold: 113582;   vacuum_threshold: 227164
> >
> >
> >[2005-03-25 09:16:14 EST] INFO:      table name: secom."public"."file_92"
> >[2005-03-25 09:16:14 EST] INFO:         relid: 9384219;   relisshared: 0
> >[2005-03-25 09:16:14 EST] INFO:         reltuples: 113082.000000;   relpages: 6624   <-- Actually has 31k rows
> >[2005-03-25 09:16:14 EST] INFO:         curr_analyze_count: 923820;   curr_vacuum_count: 662699
> >[2005-03-25 09:16:14 EST] INFO:         last_analyze_count: 923820;   last_vacuum_count: 662699
> >[2005-03-25 09:16:14 EST] INFO:         analyze_threshold: 113582;   vacuum_threshold: 227164
> >
> >DETAIL:  Allocated FSM size: 1000 relations + 2000000 pages = 11784 kB shared memory.
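
The thresholds in these snapshots follow pg_autovacuum's formula,
threshold = base_value + (scaling_factor * reltuples). The 09:10 numbers
above are consistent with a vacuum scaling factor of 2 and an analyze
scaling factor of 1, though those factors are inferred from the log
rather than confirmed settings:

```sql
SELECT 1000 + 2 * 113082 AS vacuum_threshold,   -- 227164, as logged
        500 + 1 * 113082 AS analyze_threshold;  -- 113582, as logged
```

So an inflated reltuples directly inflates the vacuum threshold, which is
the vicious circle described above.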
> >
> >
> >
> >
> >----- Original Message -----
> >From: "Matthew T. O'Connor" <matthew(at)zeut(dot)net>
> >To: "Otto Blomqvist" <o(dot)blomqvist(at)secomintl(dot)com>;
> ><pgsql-performance(at)postgresql(dot)org>
> >Sent: Thursday, March 24, 2005 3:58 PM
> >Subject: Re: [PERFORM] pg_autovacuum not having enough suction ?
> >
> >
> >
> >
> >>I would rather keep this on list since other people can chime in.
> >>
> >>Otto Blomqvist wrote:
> >>
> >>
> >>
> >>>It does not seem to be a Stats collector problem.
> >>>
> >>>  oid   | relname | relnamespace | relpages | relisshared | reltuples | schemaname | n_tup_ins | n_tup_upd | n_tup_del
> >>>---------+---------+--------------+----------+-------------+-----------+------------+-----------+-----------+-----------
> >>>9384219 | file_92 |         2200 |     8423 | f           |     49837 | public     |    158176 |    318527 |    158176
> >>>(1 row)
> >>>
> >>>I insert 50000 records:
> >>>
> >>>secom=# select createfile_92records(1, 50000);  <--- this is a pg script
> >>>that inserts records 1 through 50000.
> >>>createfile_92records
> >>>----------------------
> >>>                   0
> >>>
> >>>
> >>>  oid   | relname | relnamespace | relpages | relisshared | reltuples | schemaname | n_tup_ins | n_tup_upd | n_tup_del
> >>>---------+---------+--------------+----------+-------------+-----------+------------+-----------+-----------+-----------
> >>>9384219 | file_92 |         2200 |     8423 | f           |     49837 | public     |    208179 |    318932 |    158377
> >>>(1 row)
> >>>
> >>>reltuples does not change ? Hmm. n_tup_ins looks fine.
> >>>
> >>>
> >>>
> >>>
> >>That is expected, reltuples only gets updated by a vacuum or an analyze.
> >>
> >>
> >>
> >>>This table is basically a queue full of records waiting to get
> >>>transferred over from our 68030 system to the PG database. The records
> >>>are then moved into folders (using a trigger) like file_92_myy,
> >>>depending on what month the record was created on the 68030. During
> >>>normal operations there should not be more than 10 records at a time
> >>>in the table, although during the course of a day a normal system will
> >>>get about 50k records. I create 50000 records to simulate incoming
> >>>traffic, since we don't have much traffic in the test lab.
> >>>
> >>>After a few hours we have
> >>>
> >>>secom=# select count(*) from file_92;
> >>>count
> >>>-------
> >>>42072
> >>>
> >>>So we have sent over approx. 8000 records.
> >>>
> >>>  oid   | relname | relnamespace | relpages | relisshared | reltuples | schemaname | n_tup_ins | n_tup_upd | n_tup_del
> >>>---------+---------+--------------+----------+-------------+-----------+------------+-----------+-----------+-----------
> >>>9384219 | file_92 |         2200 |     8423 | f           |     49837 | public     |    208218 |    334521 |    166152
> >>>(1 row)
> >>>
> >>>
> >>>n_tup_upd: 318932 + (50000-42072)*2 = 334788, pretty close. (Each
> >>>record gets updated twice, then moved.)
> >>>n_tup_del: 158377 + (50000-42072) = 166305, pretty close. (There is
> >>>also minor background traffic going on.)
> >>>
> >>>
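
As a quick check, the arithmetic quoted above does work out:

```sql
SELECT 318932 + (50000 - 42072) * 2 AS expected_n_tup_upd,  -- 334788
       158377 + (50000 - 42072)     AS expected_n_tup_del;  -- 166305
```

both within a few hundred of the observed 334521 and 166152, which the
background traffic would account for.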
> >>>I could send over the full vacuum verbose capture as well as the
> >>>autovacuum capture if that is of interest.
> >>>
> >>>
> >>>
> >>That might be helpful.  I don't see a stats system problem here, but I
> >>also haven't heard of any autovac problems recently, so this might be
> >>something new.
> >>
> >>Thanks,
> >>
> >>Matthew O'Connor
> >>
> >>
> >>
> >>---------------------------(end of broadcast)---------------------------
> >>TIP 6: Have you searched our list archives?
> >>
> >>               http://archives.postgresql.org
> >>
> >>
> >>
> >
> >


