On one fine day, Fri, 2006-07-28 at 12:38, Jim C. Nasby wrote:
> On Fri, Jul 28, 2006 at 03:08:08AM +0300, Hannu Krosing wrote:
> > > The other POV is that we don't really care about long-running
> > > transactions in other databases unless they are lazy vacuums, a case which
> > > is appropriately covered by the patch as it currently stands. This seems
> > > to be the POV that Hannu takes: the only long-running transactions he
> > > cares about are lazy vacuums.
> > Yes. The original target audience of this patch are users running 24/7
> > OLTP databases with big slow changing tables and small fast-changing
> > tables which need to stay small even at the time when the big ones are
> > vacuumed.
> > The other transactions which _could_ possibly be ignored while
> > vacuuming are those from ANALYZE and non-lazy VACUUMs.
> There are other transactions to consider: user transactions that will
> run a long time, but only hit a limited number of relations. These are
> as big a problem in an OLTP environment as vacuum is.
These transactions are better kept out of an OLTP database; by their
nature they belong in an OLAP db :)
The reason I addressed VACUUM first was the fact that you can't
avoid VACUUM on an OLTP db.
> Rather than coming up with machinery that will special-case vacuum or
> pg_dump, etc., I'd suggest thinking about a generic framework that would
> work for any long-running transaction.
So instead of actually *solving* one problem you suggest *thinking*
about solving the general case?
We have been *thinking* about the dead-space-map for at least three
years by now.
> One possibility:
> Transaction flags itself as 'long-running' and provides a list of
> exactly what relations it will be touching.
> That list is stored someplace a future vacuum can get at.
> The transaction runs, with additional checks that ensure it will not
> touch any relations that aren't in the list it provided.
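The quoted proposal could be modelled roughly like this (a purely illustrative Python sketch; the real mechanism would of course live in the backend's C code, and the class, registry, and names here are my own invention, not anything in the patch):

```python
class LongRunningXact:
    """Hypothetical model of the proposal: a long-running transaction
    declares up front exactly which relations it will touch, and every
    later access is checked against that declared list."""

    # registry a future vacuum could consult: xid -> declared relations
    registry = {}

    def __init__(self, xid, relations):
        self.xid = xid
        self.allowed = frozenset(relations)
        LongRunningXact.registry[xid] = self.allowed

    def touch(self, relation):
        # the additional per-access check the proposal calls for
        if relation not in self.allowed:
            raise PermissionError(
                "xact %s declared %s but tried to touch %s"
                % (self.xid, sorted(self.allowed), relation))
        return relation
```

This is exactly the check I worry about below: it runs on every data change, so even this trivial membership test has a cost that the common path would have to pay somehow.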
I have thought about that too, but checking on each data change seemed
too expensive to me, at least for the first cut.
There seem to be some ways to avoid the actual table-in-list check,
but you still have to check whether you have to check.
> Any vacuums that start will take into account these lists of relations
> from long-running transactions and build a list of XIDs that have
> provided a list, and the minimum XID for every relation that was listed.
> If vacuum wants to vacuum a relation that has been listed as part of a
> long-running transaction, it will use the oldest XID in the
> database/cluster or the oldest XID listed for that relation, whichever
> is older. If it wants to vacuum a relation that is not listed, it will
> use the oldest XID in the database/cluster, excluding those XIDs that
> have listed exactly what relations they will be looking at.
> That scheme won't help pg_dump... in order to do so, you'd need to allow
> transactions to drop relations from their list.
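Read literally, the quoted horizon rule could be sketched like this (again an illustrative Python sketch, not PostgreSQL code; the `active_xacts` structure and tuple layout are my assumptions):

```python
def vacuum_horizon(relation, active_xacts):
    """Compute the XID horizon for vacuuming `relation` under the
    quoted scheme.  active_xacts is a list of (xid, declared) pairs,
    where declared is None for an ordinary transaction and a set of
    relation names for a self-declared long-running one."""
    # oldest XID among transactions that did NOT provide a list
    ordinary = [xid for xid, declared in active_xacts
                if declared is None]
    # oldest XID among listing transactions that declared this relation
    listed = [xid for xid, declared in active_xacts
              if declared is not None and relation in declared]
    candidates = ordinary + listed
    return min(candidates) if candidates else None
```

A relation nobody declared is then vacuumed using only the ordinary transactions' oldest XID, so a long-running transaction that declared other relations no longer holds it back.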
The whole thing is probably doable, but I doubt it will be done before
8.2 (or even 8.5, considering that I had the first vacuum-ignore-vacuum
patch ready by 8.0 (I think)).
Skype Technologies OÜ
Akadeemia tee 21 F, Tallinn, 12618, Estonia
Skype me: callto:hkrosing
Get Skype for free: http://www.skype.com