Re: Transaction ID wraparound: problem and proposed solution

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Mikheev, Vadim" <vmikheev(at)SECTORBASE(dot)COM>
Cc: pgsql-hackers(at)postgreSQL(dot)org
Subject: Re: Transaction ID wraparound: problem and proposed solution
Date: 2000-11-04 01:12:20
Message-ID: 8774.973300340@sss.pgh.pa.us
Lists: pgsql-hackers

"Mikheev, Vadim" <vmikheev(at)SECTORBASE(dot)COM> writes:
> So, we'll have to abort some long-running transaction.

Well, yes, some transaction that continues running while ~ 500 million
other transactions come and go might give us trouble. I wasn't really
planning to worry about that case ;-)
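
For concreteness, here is a rough sketch of the circular ("modulo 2^31")
XID comparison that makes such a long-lived transaction troublesome: once
two XIDs are more than half the XID space apart, their apparent ordering
flips. The helper name, the demo values, and the use of 2 as
FrozenTransactionId are illustrative assumptions, not code I'm proposing
verbatim.

    /*
     * Rough sketch of circular XID comparison.  Names and constants are
     * illustrative assumptions only.
     */
    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    typedef uint32_t TransactionId;

    #define FrozenTransactionId  ((TransactionId) 2)

    /* Does xid1 logically precede xid2 on the circular XID space? */
    static bool
    xid_precedes(TransactionId xid1, TransactionId xid2)
    {
        /* a frozen XID counts as older than every normal XID */
        if (xid1 == FrozenTransactionId)
            return xid2 != FrozenTransactionId;
        if (xid2 == FrozenTransactionId)
            return false;

        /* signed wraparound: sensible as long as the XIDs are < 2^31 apart */
        return (int32_t) (xid1 - xid2) < 0;
    }

    int
    main(void)
    {
        printf("%d\n", xid_precedes(100, 200));          /* 1: ordinary case */
        printf("%d\n", xid_precedes(4000000000u, 100));  /* 1: comparison wraps around */
        printf("%d\n", xid_precedes(100, 4000000000u));  /* 0: and flips the other way */
        return 0;
    }

The worry about a very long-running transaction is exactly that last
case: hang around long enough and you end up on the wrong side of the
wraparound horizon.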

> Required frequency of *successful* vacuum over *all* tables.
> We would have to remember something in pg_class/pg_database
> and somehow force vacuum over "too-long-unvacuumed-tables"
> *automatically*.

I don't think this is a problem now; in practice you couldn't possibly
go for half a billion transactions without vacuuming.

If your plans to eliminate regular vacuuming become reality, then this
scheme might become less reliable, but at present I think there's plenty
of safety margin.
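
For the record, the bookkeeping Vadim describes might look something
like the sketch below, assuming a hypothetical per-table "XID as of last
successful vacuum" value and an arbitrary forcing threshold; neither
exists in the current sources.

    /*
     * Sketch of forced-vacuum bookkeeping.  The remembered field and the
     * threshold are hypothetical, purely to illustrate the idea.
     */
    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    typedef uint32_t TransactionId;

    #define FORCE_VACUUM_AGE  ((uint32_t) 500000000)  /* assumed safety margin */

    /* transactions elapsed since the remembered XID, on the circular space */
    static uint32_t
    xid_age(TransactionId last_vacuum_xid, TransactionId current_xid)
    {
        return current_xid - last_vacuum_xid;  /* unsigned wraparound gives the distance */
    }

    static bool
    needs_forced_vacuum(TransactionId last_vacuum_xid, TransactionId current_xid)
    {
        return xid_age(last_vacuum_xid, current_xid) > FORCE_VACUUM_AGE;
    }

    int
    main(void)
    {
        /* table last vacuumed at XID 1000, some 600 million transactions ago */
        printf("%d\n", needs_forced_vacuum(1000, 600001000));  /* 1: force a vacuum */
        return 0;
    }

Whether the remembered value lives in pg_class or pg_database is a
detail; the point is that the check keys off transaction counts, not
anything the DBA has to remember to do.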

> If undo were implemented then we could delete pg_log between
> postmaster startups - the startup counter is remembered in pages, so
> on seeing an old startup id in a page we would know that it holds only
> long-ago-committed xactions (i.e. only visible changes)
> and could avoid xid comparison. But ... there will be no undo in 7.1.
> And I foresee problems with a WAL-based BAR implementation if we
> follow the proposed solution: redo restores the original xmin/xmax - how
> do we "freeze" xids while restoring the DB?

So, we might eventually have a better answer from WAL, but not for 7.1.
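
For the archives, here is a rough sketch of what "freezing" means in the
scheme under discussion: a sufficiently old, known-committed xmin gets
overwritten with the permanent FrozenTransactionId so it never needs
comparing against current XIDs again. The struct layout and helper names
below are simplifications for illustration, not the real heap tuple
header.

    /*
     * Sketch of the "freeze" step the vacuum-based scheme relies on.
     * Struct and names are simplifications, not the actual tuple header.
     */
    #include <stdint.h>
    #include <stdbool.h>

    typedef uint32_t TransactionId;

    #define FrozenTransactionId  ((TransactionId) 2)

    typedef struct
    {
        TransactionId xmin;            /* inserting transaction */
        TransactionId xmax;            /* deleting transaction, or 0 if none */
        bool          xmin_committed;  /* hint: inserter is known committed */
    } TupleHeaderSketch;

    /* circular comparison, as in the earlier sketch */
    static bool
    xid_precedes(TransactionId xid1, TransactionId xid2)
    {
        return (int32_t) (xid1 - xid2) < 0;
    }

    /* During vacuum: freeze the tuple if its inserter committed before the cutoff. */
    static void
    maybe_freeze_tuple(TupleHeaderSketch *tup, TransactionId freeze_cutoff)
    {
        if (tup->xmin_committed &&
            tup->xmin != FrozenTransactionId &&
            xid_precedes(tup->xmin, freeze_cutoff))
            tup->xmin = FrozenTransactionId;
    }

    int
    main(void)
    {
        TupleHeaderSketch tup = { 1000, 0, true };

        maybe_freeze_tuple(&tup, 400000000);  /* cutoff far newer than xmin */
        return tup.xmin == FrozenTransactionId ? 0 : 1;   /* exits 0: frozen */
    }

That overwrite is exactly why Vadim's BAR point is awkward: redo that
restores the original xmin/xmax would undo the freezing.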

I think my idea is reasonably non-invasive and could be removed without
much trouble once WAL offers a better way. I'd really like to have some
answer for 7.1, though. The sort of numbers John Scott was quoting to
me for Verizon's paging network throughput makes it clear that we aren't
going to survive at that level with a limit of 4G transactions per
database reload. Having to vacuum everything on at least a
1G-transaction cycle is salable; dump/initdb/reload is not ...

regards, tom lane
