Re: Transaction ID wraparound: problem and proposed solution

From: "Vadim Mikheev" <vmikheev(at)sectorbase(dot)com>
To: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: <pgsql-hackers(at)postgreSQL(dot)org>
Subject: Re: Transaction ID wraparound: problem and proposed solution
Date: 2000-11-05 05:59:00
Message-ID: 016601c046ed$db6819c0$b87a30d0@sectorbase.com
Lists: pgsql-hackers

> > So, we'll have to abort some long running transaction.
>
> Well, yes, some transaction that continues running while ~ 500 million
> other transactions come and go might give us trouble. I wasn't really
> planning to worry about that case ;-)

Agreed, I just don't like to rely on assumptions -:)

> > Required frequency of *successful* vacuum over *all* tables.
> > We would have to remember something in pg_class/pg_database
> > and somehow force vacuum over "too-long-unvacuumed-tables"
> > *automatically*.
>
> I don't think this is a problem now; in practice you couldn't possibly
> go for half a billion transactions without vacuuming, I'd think.

Why not?
And once again, assumptions are not good in the transaction area.
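
To make the "force vacuum automatically" part concrete, roughly the kind of
check I have in mind is below. Only a sketch: the names (last_vacuum_xid,
FORCE_VACUUM_AGE, table_needs_forced_vacuum) are made up for illustration,
nothing like this exists yet.

#include <stdint.h>
#include <stdbool.h>

typedef uint32_t TransactionId;

/* Force a vacuum on roughly a 1G-transaction cycle, comfortably inside
 * the 4G xid space.  Illustrative value only. */
#define FORCE_VACUUM_AGE ((TransactionId) 0x40000000)

static bool
table_needs_forced_vacuum(TransactionId current_xid,
                          TransactionId last_vacuum_xid)
{
    /* unsigned subtraction gives the number of xids consumed since the
     * table's last successful vacuum, even across a 32-bit wrap */
    TransactionId age = current_xid - last_vacuum_xid;

    return age > FORCE_VACUUM_AGE;
}

The last_vacuum_xid would be the "something remembered in pg_class" above;
a similar per-database value could live in pg_database.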

> If your plans to eliminate regular vacuuming become reality, then this
> scheme might become less reliable, but at present I think there's plenty
> of safety margin.
>
> > If undo would be implemented then we could delete pg_log between
> > postmaster startups - startup counter is remembered in pages, so
> > seeing old startup id in a page we would know that there are only
> > long ago committed xactions (ie only visible changes) there
> > and avoid xid comparison. But ... there will be no undo in 7.1.
> > And I foresee problems with WAL based BAR implementation if we'll
> > follow proposed solution: redo restores original xmin/xmax - how
> > to "freeze" xids while restoring DB?
>
> So, we might eventually have a better answer from WAL, but not for 7.1.
> I think my idea is reasonably non-invasive and could be removed without
> much trouble once WAL offers a better way. I'd really like to have some
> answer for 7.1, though. The sort of numbers John Scott was quoting to
> me for Verizon's paging network throughput make it clear that we aren't
> going to survive at that level with a limit of 4G transactions per
> database reload. Having to vacuum everything on at least a
> 1G-transaction cycle is salable, dump/initdb/reload is not ...

Understandable. And we can probably get BAR too, but it would require a full
backup every WRAPLIMIT/2 (or better, /4) transactions.
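
As I read the proposed solution, the freeze step vacuum has to perform looks
roughly like this (only a sketch; FrozenTransactionId and the cutoff handling
here are illustrative, not the actual 7.1 code):

#include <stdint.h>
#include <stdbool.h>

typedef uint32_t TransactionId;

/* reserved xid that always reads as "committed long ago" */
#define FrozenTransactionId ((TransactionId) 2)

/* circular comparison: a precedes b if the signed distance is negative;
 * only valid while live xids stay within half the xid space of each other */
static bool
xid_precedes(TransactionId a, TransactionId b)
{
    return (int32_t) (a - b) < 0;
}

static void
freeze_tuple_xmin(TransactionId *xmin, TransactionId cutoff_xid,
                  bool xmin_committed)
{
    if (xmin_committed &&
        *xmin != FrozenTransactionId &&
        xid_precedes(*xmin, cutoff_xid))
        *xmin = FrozenTransactionId;    /* original xmin is discarded */
}

The last line is exactly what redo cannot reproduce when restoring from an old
base backup - it puts the original xmin/xmax back - which is why the backup
cycle above matters.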

Vadim
