Re: Making all nbtree entries unique by having heap TIDs participate in comparisons

From: Alexander Korotkov <a(dot)korotkov(at)postgrespro(dot)ru>
To: Peter Geoghegan <pg(at)bowt(dot)ie>
Cc: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Claudio Freire <klaussfreire(at)gmail(dot)com>, Anastasia Lubennikova <a(dot)lubennikova(at)postgrespro(dot)ru>, "Andrey V(dot) Lepikhov" <a(dot)lepikhov(at)postgrespro(dot)ru>
Subject: Re: Making all nbtree entries unique by having heap TIDs participate in comparisons
Date: 2019-01-04 15:40:16
Message-ID: CAPpHfdvHG88KEnHe3cm1cD44-h2fGBTrebdKe2VDx-xx_kzVWw@mail.gmail.com
Lists: pgsql-hackers

Hi!

I'm starting to look at this patchset. I'm not ready to post a detailed
review yet, but I have a couple of questions.

On Wed, Sep 19, 2018 at 9:24 PM Peter Geoghegan <pg(at)bowt(dot)ie> wrote:
> I still haven't managed to add pg_upgrade support, but that's my next
> step. I am more or less happy with the substance of the patch in v5,
> and feel that I can now work backwards towards figuring out the best
> way to deal with on-disk compatibility. It shouldn't be too hard --
> most of the effort will involve coming up with a good test suite.

Yes, it shouldn't be too hard, but it seems like we would have to keep
two branches of code for the different handling of duplicates. Is that true?
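
To make sure we're talking about the same thing, here is a minimal
sketch of the kind of branching I have in mind -- this is only my own
illustration, not code from your patch, and it assumes the new format
is distinguished by bumping BTREE_VERSION in the metapage:

#include "postgres.h"
#include "access/nbtree.h"

/*
 * Illustration only: decide per index, from the metapage version,
 * whether heap TIDs participate in comparisons.  A pg_upgrade'd index
 * would keep the old duplicate handling.
 */
static bool
index_has_tid_tiebreaker(BTMetaPageData *metad)
{
    return metad->btm_version >= BTREE_VERSION;
}

If so, much of the insertion and search code would need to check such a
flag, which is the duplication I'm worried about.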

+ * In the worst case (when a heap TID is appended) the size of the returned
+ * tuple is the size of the first right tuple plus an additional MAXALIGN()
+ * quantum. This guarantee is important, since callers need to stay under
+ * the 1/3 of a page restriction on tuple size. If this routine is ever
+ * taught to truncate within an attribute/datum, it will need to avoid
+ * returning an enlarged tuple to caller when truncation + TOAST compression
+ * ends up enlarging the final datum.

I didn't get the point of this paragraph. Might it happen that the
first right tuple is under the tuple size restriction, but the new
pivot tuple exceeds it? If so, would we get an error because the pivot
tuple is too long? If not, I think this needs to be explained better.
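
To state how I read that guarantee -- just my own sketch, not code from
the patch, and check_pivot_size() is a made-up name:

#include "postgres.h"
#include "access/itup.h"
#include "storage/itemptr.h"

/*
 * My reading of the comment: the truncated pivot tuple may exceed the
 * first right tuple by at most one MAXALIGN() quantum, i.e. the heap
 * TID appended in the worst case.
 */
static void
check_pivot_size(IndexTuple firstright, IndexTuple pivot)
{
    Assert(MAXALIGN(IndexTupleSize(pivot)) <=
           MAXALIGN(IndexTupleSize(firstright)) +
           MAXALIGN(sizeof(ItemPointerData)));
}

If firstright is already close to the 1/3-of-a-page limit
(BTMaxItemSize()), that extra quantum is exactly what I'm worried could
push the new pivot over it.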

------
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
