Re: [HACKERS] vacuum process size

From: Brian E Gallew <geek+(at)cmu(dot)edu>
To: pgsql-hackers(at)postgreSQL(dot)org
Subject: Re: [HACKERS] vacuum process size
Date: 1999-08-24 17:01:12
Message-ID: emacs-smtp-447-14274-53208-353182@export.andrew.cmu.edu
Lists: pgsql-hackers

Then <tgl(at)sss(dot)pgh(dot)pa(dot)us> spoke up and said:
> So doubling the array size at each step is a good change.
>
> But there are a lot more tuples than pages in most relations.
>
> I see two lists with per-tuple data in vacuum.c, "vtlinks" in
> vc_scanheap and "vtmove" in vc_rpfheap, that are both being grown with
> essentially the same technique of repalloc() after every N entries.
> I'm not entirely clear on how many tuples get put into each of these
> lists, but it sure seems like in ordinary circumstances they'd be much
> bigger space hogs than any of the three VPageList lists.
>
> I recommend going to a doubling approach for each of these lists as
> well as for VPageList.

Question: is there reliable information in pg_statistics (or other
system tables) that could be used to make a reasonable estimate of the
sizes of these structures before the initial allocation? Certainly the
relation's file size can be obtained with a stat() call (modulo
portability issues and sparse-file issues).

--
=====================================================================
| JAVA must have been developed in the wilds of West Virginia. |
| After all, why else would it support only single inheritance?? |
=====================================================================
| Finger geek(at)cmu(dot)edu for my public key. |
=====================================================================
