Re: [HACKERS] vacuum process size

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Brian E Gallew <geek+(at)cmu(dot)edu>
Cc: pgsql-hackers(at)postgreSQL(dot)org
Subject: Re: [HACKERS] vacuum process size
Date: 1999-08-24 20:51:58
Message-ID: 3805.935527918@sss.pgh.pa.us
Lists: pgsql-hackers

Brian E Gallew <geek+(at)cmu(dot)edu> writes:
> Question: is there reliable information in pg_statistics (or other
> system tables) which can be used to make a reasonable estimate for the
> sizes of these structures before initial allocation? Certainly the
> file size can be gotten from a stat() call (modulo some portability
> issues and sparse-file issues).

pg_statistics would tell you what was found out by the last vacuum on
the table, if there ever was one. Dunno how reliable you want to
consider that to be. stat() would provide up-to-date info, but the
problem with it is that the total file size might be a drastic
overestimate of the number of pages that vacuum needs to put in these
lists. There's not really much chance of getting a useful estimate from
the last vacuum run, either. AFAICT what we are interested in is the
number of pages containing dead tuples, and by definition all of those
tuples will have died since the last vacuum...
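
For concreteness, here is a minimal sketch (mine, not from the thread)
of the stat()-based upper bound being discussed: take the heap file's
size, divide by the block size, and round up. The path handling is
hypothetical, and it ignores segmented relations (files split at 1GB
into .1, .2, ... pieces) as well as the sparse-file caveat Brian
mentions -- which is exactly why this number can drastically
overestimate the pages vacuum actually needs to track.

/* Sketch only: upper-bound page count from heap file size.
 * Assumes an unsegmented relation file; BLCKSZ is the stock
 * 8K PostgreSQL block size. */
#include <sys/stat.h>

#define BLCKSZ 8192

long
estimate_rel_pages(const char *relpath)
{
    struct stat st;

    if (stat(relpath, &st) < 0)
        return -1;              /* file missing or unreadable */

    /* round up: a partial trailing block still counts as a page */
    return (long) ((st.st_size + BLCKSZ - 1) / BLCKSZ);
}

A sparse file would make even this upper bound misleading, since
st_size can exceed the blocks actually allocated on disk.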

On the whole, just fixing the memory management seems like the best bet.
We know how to do that, and it may benefit other things besides vacuum.
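
To illustrate what "fixing the memory management" could look like --
a sketch under my own assumptions, not the actual vacuum code -- grow
the page list on demand instead of sizing it up front, so no estimate
is needed at all. The names (PageList, pagelist_add) are hypothetical,
and the real fix would presumably use palloc/repalloc in a suitable
memory context rather than bare realloc:

/* Sketch only: a page list that doubles its capacity on demand,
 * removing the need to guess the number of dead-tuple pages at
 * initial allocation time. */
#include <stdlib.h>

typedef unsigned int BlockNumber;

typedef struct
{
    BlockNumber *pages;         /* block numbers collected so far */
    int          used;
    int          capacity;
} PageList;

static int
pagelist_add(PageList *list, BlockNumber blkno)
{
    if (list->used >= list->capacity)
    {
        int          newcap = list->capacity > 0 ? list->capacity * 2 : 64;
        BlockNumber *tmp = realloc(list->pages,
                                   newcap * sizeof(BlockNumber));

        if (tmp == NULL)
            return -1;          /* out of memory */
        list->pages = tmp;
        list->capacity = newcap;
    }
    list->pages[list->used++] = blkno;
    return 0;
}

Amortized doubling keeps the cost of appends constant on average, and
the same pattern would serve any other code that currently has to
preallocate a worst-case-sized array.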

regards, tom lane
