Re: Hash Indexes

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Cc: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>, Jesper Pedersen <jesper(dot)pedersen(at)redhat(dot)com>, Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Hash Indexes
Date: 2016-12-12 21:21:44
Message-ID: CA+TgmoZ-4yKNjVodh1ST7v0_zn2=6njChfKHuT-jYBeckAxLrg@mail.gmail.com
Lists: pgsql-hackers

On Sun, Dec 11, 2016 at 1:24 AM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> The reason for this and the similar error in vacuum was that, in one
> of the corner cases, we were not marking the buffer as dirty after
> freeing the overflow page and updating the link for the previous
> bucket. So, due to concurrent activity, the buffer containing the
> updated link got evicted, and when we later accessed the same buffer,
> the read brought back the old copy, which still contained a link to
> the freed overflow page.
>
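A minimal sketch of the fix pattern being described, in the style of the
hash AM code (the helper name and its arguments are hypothetical, not
the committed patch): once the previous page's next-block link has been
updated to skip the freed overflow page, the buffer must be dirtied
while the exclusive content lock is still held, or eviction can
resurrect the stale link.

    #include "postgres.h"
    #include "access/hash.h"
    #include "storage/bufmgr.h"
    #include "storage/bufpage.h"

    /* Hypothetical helper: unlink a freed overflow page from the chain. */
    static void
    unlink_freed_ovflpage(Buffer prevbuf, BlockNumber nextblkno)
    {
        Page            prevpage = BufferGetPage(prevbuf);
        HashPageOpaque  prevopaque =
            (HashPageOpaque) PageGetSpecialPointer(prevpage);

        /* Caller holds an exclusive content lock on prevbuf. */
        prevopaque->hasho_nextblkno = nextblkno;

        /*
         * The step missing in the buggy corner case: without this, a
         * clean buffer can be evicted and re-read with the old link.
         */
        MarkBufferDirty(prevbuf);
    }
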
> Apart from the above issue, Kuntal noticed an assertion failure
> (Assert(bucket == new_bucket);) in hashbucketcleanup with the same
> test you provided. The cause was that, after deleting tuples in
> hashbucketcleanup, we were not marking the buffer as dirty, so the
> old copy of the overflow page kept reappearing and triggering the
> failure.
>
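The hashbucketcleanup side of the bug follows the same pattern; a
hedged sketch, where the function and the deletable/ndeletable names
are stand-ins for the offsets collected while scanning the page:

    #include "postgres.h"
    #include "storage/bufmgr.h"
    #include "storage/bufpage.h"

    /*
     * Sketch: after collecting deletable offsets from a bucket or
     * overflow page, remove them and persist the change before the
     * page lock is dropped.
     */
    static void
    delete_and_dirty(Buffer buf, OffsetNumber *deletable, int ndeletable)
    {
        Page    page = BufferGetPage(buf);

        if (ndeletable > 0)
        {
            PageIndexMultiDelete(page, deletable, ndeletable);
            /* The missing call behind the assertion failure. */
            MarkBufferDirty(buf);
        }
    }
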
> After fixing the above problem, we noticed another assertion failure
> (Assert(bucket == obucket);) in _hash_splitbucket_guts. The reason
> was that, after a split, vacuum failed to remove tuples from the old
> bucket that had been moved by the split. During the next split from
> the same old bucket, we don't expect the old bucket to still contain
> tuples from the previous split. To fix this, whenever vacuum needs to
> perform split cleanup, it should refresh the metapage values (the
> masks required to calculate the bucket number) so that it doesn't
> miss cleaning any tuples. I believe this is the same assertion
> failure that Andreas reported in another thread [1].
>
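A sketch of the metapage refresh being proposed (the function name is
hypothetical, and the metapage locking is left to the caller): by
re-reading the latest mask values before split cleanup, the bucket
computed for each hash key reflects all completed splits, so tuples
moved by an earlier split are recognized and removed.

    #include "postgres.h"
    #include "access/hash.h"
    #include "storage/bufmgr.h"

    /*
     * Sketch: recompute the bucket for a hash key using the latest
     * metapage masks, so split cleanup sees tuples moved by any
     * earlier split.  metabuf must be locked by the caller.
     */
    static Bucket
    bucket_for_key(Buffer metabuf, uint32 hashkey)
    {
        HashMetaPage metap = HashPageGetMeta(BufferGetPage(metabuf));

        return _hash_hashkey2bucket(hashkey,
                                    metap->hashm_maxbucket,
                                    metap->hashm_highmask,
                                    metap->hashm_lowmask);
    }
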
> The next problem we encountered was that, after running the same test
> somewhat longer, inserts were failing with the error "unexpected zero
> page at block ..". After some analysis, I found that the lock chain
> (lock the next overflow page before releasing the previous one) was
> broken in one corner case in _hash_freeovflpage, which let an insert
> get ahead of the squeeze-bucket operation and access the freed
> overflow page before its link had been updated.
>
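The lock-coupling rule being described, sketched for the walk along an
overflow chain (simplified; the real squeeze code also tracks write
buffers and bucket state, and the helper name here is illustrative):

    #include "postgres.h"
    #include "access/hash.h"

    /*
     * Sketch: step from the current overflow page to the next one
     * without ever holding zero locks on the chain, so a concurrent
     * insert can never observe the chain in a half-updated state.
     * rel, buf, and nextblkno come from the surrounding squeeze loop.
     */
    static Buffer
    step_to_next_ovflpage(Relation rel, Buffer buf, BlockNumber nextblkno)
    {
        /* Lock the next page before releasing the current one. */
        Buffer  nextbuf = _hash_getbuf(rel, nextblkno, HASH_WRITE,
                                       LH_OVERFLOW_PAGE);

        _hash_relbuf(rel, buf);     /* only now is it safe to let go */
        return nextbuf;
    }
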
> With the above fixes, the test ran successfully for more than a day.

Instead of doing this:

+ _hash_chgbufaccess(rel, bucket_buf, HASH_WRITE, HASH_NOLOCK);
+ _hash_chgbufaccess(rel, bucket_buf, HASH_NOLOCK, HASH_WRITE);

...wouldn't it be better to just do MarkBufferDirty()? There's no real
reason to release the lock only to reacquire it, is there? I don't
think we should be afraid to call MarkBufferDirty() instead of going
through these (fairly stupid) hasham-specific APIs.
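
In other words, a sketch of the substitution being suggested, where
bucket_buf is the buffer from the quoted hunk:

    /* Before: drop and retake the lock just to get the page flushed. */
    _hash_chgbufaccess(rel, bucket_buf, HASH_WRITE, HASH_NOLOCK);
    _hash_chgbufaccess(rel, bucket_buf, HASH_NOLOCK, HASH_WRITE);

    /* After: one call, and the exclusive lock is held throughout. */
    MarkBufferDirty(bucket_buf);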

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
