Re: Per tuple overhead, cmin, cmax, OID

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Manfred Koizar <mkoi-pg(at)aon(dot)at>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Per tuple overhead, cmin, cmax, OID
Date: 2002-05-21 15:53:04
Message-ID: 21197.1021996384@sss.pgh.pa.us
Lists: pgsql-hackers

Manfred Koizar <mkoi-pg(at)aon(dot)at> writes:
> That was one of the possible solutions I thought of; unfortunately,
> it's the one I'm most afraid of. Not because I think it's not the
> cleanest way, but because I don't (yet) feel comfortable enough with
> the code to rip out OIDs from system tables.

The system tables that have OIDs will certainly continue to have OIDs.

I suppose the messiest aspect of that solution would be changing all
the places that currently do "tuple->t_data->t_oid". If OID is not at
a fixed offset in the tuple then it'll be necessary to change *all*
those places. Ugh. While certainly we should have been using accessor
macros for that, I'm not sure I want to try to change it.
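The accessor-macro idea can be sketched as below. This is a minimal, hypothetical illustration, not PostgreSQL's real header layout: the struct, the `MINI_HASOID` flag bit, and the macro names are invented for the example. The point is that once call sites go through a macro instead of touching `t_oid` directly, the field's offset (or its absence) becomes an implementation detail.

```c
#include <stdint.h>

/* Hypothetical, simplified tuple header -- field names are modeled on
 * HeapTupleHeaderData, but this is not the real on-disk layout. */
typedef struct MiniTupleHeader
{
    uint16_t    t_infomask;     /* flag bits, including "has OID" */
    uint32_t    t_oid;          /* object ID; valid only if flag set */
} MiniTupleHeader;

#define MINI_HASOID 0x0001      /* assumed flag bit, for illustration */

/* Accessor macros: callers never dereference t_oid directly, so the
 * field's location (or existence) can change without editing every
 * "tuple->t_data->t_oid" site by hand. */
#define MiniTupleGetOid(tup) \
    (((tup)->t_infomask & MINI_HASOID) ? (tup)->t_oid : 0)
#define MiniTupleSetOid(tup, oid) \
    ((tup)->t_infomask |= MINI_HASOID, (tup)->t_oid = (oid))
```

A header that never had its OID set reports 0, standing in for an invalid OID.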

> Other possible implementations would leave the oid in the tuple
> header:

> . typedef two structs HeapTupleHeaderDataWithOid and
> HeapTupleHeaderDataWithoutOid, wrap access to *all* HeapTupleHeader
> fields in accessor macros/functions, give these accessors enough
> information to know which variant to use.

If OID is made to be the last fixed-offset field, instead of the first,
then this approach would be fairly workable. Actually I'd still use
just one struct definition, but do offsetof() calculations to decide
where the null-bitmap starts.
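A rough sketch of that single-struct approach, under assumed names and layout (the struct, flag bit, and helper here are illustrative only): with OID as the last fixed-offset field, one `offsetof()` computation decides whether the null bitmap begins after the OID or on top of its slot.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical header with OID as the *last* fixed-offset field. */
typedef struct TupleHdr
{
    uint16_t t_infomask;    /* flag bits */
    uint8_t  t_hoff;        /* offset to user data */
    uint32_t t_oid;         /* present only when the tuple has an OID */
    /* null bitmap begins here -- or at t_oid's offset when the
       tuple carries no OID */
} TupleHdr;

#define HDR_HASOID 0x0001   /* assumed flag bit, for illustration */

/* One struct definition serves both variants; offsetof() picks
 * where the null bitmap starts for a given tuple. */
static inline size_t
bitmap_offset(const TupleHdr *hdr)
{
    return (hdr->t_infomask & HDR_HASOID)
        ? sizeof(TupleHdr)              /* bitmap follows the OID */
        : offsetof(TupleHdr, t_oid);    /* bitmap overlays the OID slot */
}
```

Tuples without an OID thus reclaim the OID's bytes for the bitmap, without needing two struct typedefs.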

> Decouple on-disk format from in-memory structures, use
> HeapTupleHeaderPack() and HeapTupleHeaderUnpack() to store/extract
> header data to/from disk buffers. Concurrency?

Inefficient, and you'd have problems still with the changeable fields
(t_infomask etc).
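For concreteness, the pack/unpack proposal amounts to something like the sketch below (names and layout are invented; `HeapTupleHeaderPack`/`Unpack` do not exist in the tree). The comments note where the two objections bite: a copy on every access, and an unpacked snapshot that cannot see flag-bit updates other backends make in place in shared buffers.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical in-memory header, decoupled from the on-disk bytes. */
typedef struct DecodedHeader
{
    uint32_t oid;
    uint16_t infomask;
} DecodedHeader;

/* Packing/unpacking copies every field on each access -- that per-touch
 * memcpy is the inefficiency objection. Worse, mutable fields such as
 * t_infomask are updated in place in shared buffers by other backends,
 * so an unpacked private copy can silently go stale (the concurrency
 * objection). */
static void
header_pack(const DecodedHeader *h, unsigned char *disk)
{
    memcpy(disk,     &h->oid,      sizeof(h->oid));
    memcpy(disk + 4, &h->infomask, sizeof(h->infomask));
}

static void
header_unpack(DecodedHeader *h, const unsigned char *disk)
{
    memcpy(&h->oid,      disk,     sizeof(h->oid));
    memcpy(&h->infomask, disk + 4, sizeof(h->infomask));
}
```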

>> As usual, the major objection to any such change is losing the chance
>> of doing pg_upgrade. But we didn't have pg_upgrade during the 7.2
>> cycle either.

> I thought it was quite common to need pg_dump/restore when upgrading
> between releases.

Yes, and we get loud complaints every time we require it...

> Anyway, as long as our changes don't make heap tuples larger, it
> should be possible to write a tool that converts version x data files
> to version y data files. I've done that before (not for PG though)
> and I know it's a lot of work, but wouldn't it be great for the PG
> marketing department ;-)

I'd be afraid to use a conversion-in-place tool for this sort of thing.
If it crashes halfway through, what then?

regards, tom lane
