From: Zdenek Kotala <Zdenek(dot)Kotala(at)Sun(dot)COM>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>, Gregory Stark <stark(at)enterprisedb(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: PG_PAGE_LAYOUT_VERSION 5 - time for change
Tom Lane wrote:
> Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com> writes:
>> Hmm, you're right. I think it can be made to work by storing the *end*
>> offset of each chunk. To find the chunk containing offset X, search for
>> the first chunk with end_offset > X.
> Yeah, that seems like it would work, and it would disentangle us
> altogether from needing a hard-wired chunk size. The only downside is
> that it'd be a pain to convert in-place. However, if we are also going
> to add identifying information to the toast chunks (like the owning
> column's number or datatype), then you could tell whether a toast chunk
> had been converted by checking t_natts. So in principle a toast table
> could be converted a page at a time. If the converted data didn't fit
> you could push one of the chunks out to some new page of the file.
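For illustration, the end-offset lookup Heikki describes could be sketched roughly like this (a hypothetical standalone example, not actual PostgreSQL code; the `ToastChunk` struct and `find_chunk` name are made up for the sketch):

```c
#include <stddef.h>

/*
 * Hypothetical sketch: each toast chunk stores the *end* offset of the
 * data it covers within the detoasted value.  Because we search on end
 * offsets, chunks may have any sizes -- no hard-wired chunk size is
 * needed to locate the chunk containing a given byte.
 */
typedef struct
{
    size_t      end_offset;     /* one past the last byte this chunk covers */
} ToastChunk;

/*
 * Return the index of the first chunk with end_offset > x, i.e. the
 * chunk containing byte offset x.  A result equal to nchunks means x
 * lies past the end of the value.  Plain binary search.
 */
static size_t
find_chunk(const ToastChunk *chunks, size_t nchunks, size_t x)
{
    size_t      lo = 0;
    size_t      hi = nchunks;

    while (lo < hi)
    {
        size_t      mid = lo + (hi - lo) / 2;

        if (chunks[mid].end_offset > x)
            hi = mid;           /* this chunk could contain x; keep it */
        else
            lo = mid + 1;       /* chunk ends at or before x; discard */
    }
    return lo;
}
```

With chunks ending at offsets 100, 250, and 400, offset 99 falls in chunk 0, offset 100 in chunk 1, and offset 399 in chunk 2, regardless of the chunks' differing sizes.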
Yeah, that was the main intention. The remaining problem is the toast index, but that is a general problem, not specific to toast tables.
> On the whole I like this a lot better than Zdenek's original proposal
> which didn't seem to me to solve much of anything.
Agreed. This approach is much better. It adds more complexity now for converting
chunks from the old version to the new one, but it also brings benefits - for
example, vacuum can remove data from dropped columns, and so on.
Zdenek Kotala Sun Microsystems
Prague, Czech Republic http://sun.com/postgresql