From: Mattias Kregert <matti(at)algonet(dot)se>
To: pgsql-hackers(at)postgreSQL(dot)org
Subject: Re: Re: [HACKERS] Re: [QUESTIONS] Business cases
Date: 1998-01-19 12:59:56
Message-ID: 34C34E4C.3440D5B0@algonet.se
Lists: pgsql-hackers

Tom wrote:
> > > How are large users handling the vacuum problem? Vacuum locks other
> > > users out of tables for too long. I don't need a lot of performance
> > > (a few queries per minute), but I need to be able to handle queries
> > > non-stop.
> >
> > Not sure, but this one is about the only major thing that is continuing
> > to bother me :( Is there any method of improving this?
>
> vacuum seems to do a _lot_ of stuff. It seems that crash-recovery
> features and maintenance features should be separated. I believe the
> only required maintenance tasks are recovering space used by deleted
> tuples and updating stats? Neither of these should need to lock the
> database for long periods of time.

Would it be possible to add an option to VACUUM, like a max number
of blocks to sweep? Or is this impossible because of the way PG works?

Would it be possible to (for example) compact data near the front of
the file until one block there is free, and then move rows from the
last block into this newly freed block? Repeating that would let the
file shrink from the end a little at a time (see the sketch after the
examples below).

-- To limit the number of rows to move:
psql=> VACUUM MoveMax 1000;   -- move at most 1000 rows

-- To limit the time used for vacuuming:
psql=> VACUUM MaxSweep 1000;  -- sweep at most 1000 blocks
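
Just to make the idea concrete, here is a toy sketch in C. It is
nothing like the real PostgreSQL heap code -- "Row", "Table" and
vacuum_move_max() are made up for illustration -- but it shows a
compaction pass that stops after moving a bounded number of rows,
so each call does only a limited amount of work:

/*
 * Toy sketch only, not PostgreSQL internals: move at most max_moves
 * live rows from the tail of the file into dead slots near the
 * front, then truncate the trailing dead space.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

typedef struct Row {
    bool live;                  /* payload omitted for brevity */
} Row;

typedef struct Table {
    Row    *rows;               /* flat array standing in for heap blocks */
    size_t  nrows;              /* slots currently allocated in the file  */
} Table;

/* Move at most max_moves tail rows forward; return how many moved. */
static size_t
vacuum_move_max(Table *t, size_t max_moves)
{
    size_t moved = 0;
    size_t front = 0;           /* scans forward for dead slots */
    size_t back = t->nrows;     /* scans backward for live rows */

    while (moved < max_moves)
    {
        while (front < back && t->rows[front].live)
            front++;            /* skip live rows at the front  */
        while (back > front && !t->rows[back - 1].live)
            back--;             /* skip dead rows at the tail   */
        if (front >= back)
            break;              /* nothing left to compact      */
        t->rows[front] = t->rows[back - 1];     /* relocate row */
        t->rows[back - 1].live = false;
        moved++;
    }

    /* Trailing slots are now all dead, so the file can shrink. */
    while (t->nrows > 0 && !t->rows[t->nrows - 1].live)
        t->nrows--;

    return moved;
}

int
main(void)
{
    Row rows[] = { {true}, {false}, {true}, {false}, {true}, {true} };
    Table t = { rows, 6 };

    size_t moved = vacuum_move_max(&t, 1000);
    printf("moved %zu rows, file is now %zu slots long\n",
           moved, t.nrows);
    return 0;
}

On this six-slot example it moves two rows and truncates the file to
four slots; with a small max_moves you would just call it repeatedly
between other work instead of holding the lock for one long pass.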

Could this work with the current method of updating statistics?

*** Btw, why doesn't PG update statistics when inserting/updating?
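
To show what I mean, another made-up C sketch (again not PG
internals; ColumnStats and the helpers are invented names): keep the
planner's numbers as running counters that every insert/delete
adjusts in O(1), instead of recomputing them with a full-table scan
at VACUUM time:

/*
 * Toy sketch: per-column stats maintained incrementally.
 */
typedef struct ColumnStats {
    long    ntuples;            /* live tuple count      */
    double  sum;                /* running sum of values */
} ColumnStats;

static void
stats_insert(ColumnStats *s, double value)
{
    s->ntuples++;
    s->sum += value;
}

static void
stats_delete(ColumnStats *s, double value)
{
    s->ntuples--;
    s->sum -= value;
}

static double
stats_mean(const ColumnStats *s)    /* what the optimizer would read */
{
    return s->ntuples > 0 ? s->sum / s->ntuples : 0.0;
}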

/* m */
