Re: Write Ahead Logging for Hash Indexes

From: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
To: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
Cc: Ashutosh Sharma <ashu(dot)coek88(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Write Ahead Logging for Hash Indexes
Date: 2016-09-09 03:26:27
Message-ID: CAA4eK1KTh02qLVAytqUusM9LbO=cMyEfDMhqMSEfOWwWpg7LkA@mail.gmail.com
Lists: pgsql-hackers

On Fri, Sep 9, 2016 at 12:39 AM, Jeff Janes <jeff(dot)janes(at)gmail(dot)com> wrote:
>
> I plan to do testing using my own testing harness after changing it to
> insert a lot of dummy tuples (ones with negative values in the pseudo-pk
> column, which are never queried by the core part of the harness) and
> deleting them at random intervals. I think that none of pgbench's built in
> tests are likely to give the bucket splitting and squeezing code very much
> exercise.
>

The hash index tests [1] written by Mithun do cover part of that code,
and we have done further testing by extending those tests to exercise
the splitting and squeezing parts of the code.
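For example, a minimal sequence that exercises both code paths looks
something like the below (the table name and row counts are only
illustrative, not the actual tests):

create table hash_split_test (key integer);
create index hash_split_test_idx on hash_split_test using hash (key);
-- enough inserts to overflow buckets and force a series of splits
insert into hash_split_test select g from generate_series(1, 1000000) g;
-- deleting many tuples and then vacuuming makes hashbulkdelete run the
-- squeeze phase, which frees emptied overflow pages
delete from hash_split_test where key % 3 = 0;
vacuum hash_split_test;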

> Is there a way to gather statistics on how many of each type of WAL record
> are actually getting sent over the replication link? The only way I can
> think of is to turn on WAL archiving as well as replication, then use
> pg_xlogdump to gather the stats.
>

Sounds sensible, but what do you want to learn from the number of each
type of WAL record? I understand it is important to cover all the WAL
records for hash indexes (and I think Ashutosh has done that during his
tests [2]), but perhaps sending the same record multiple times could
further strengthen the validation.
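
That said, if all you need is the per-record-type counts, pg_xlogdump's
--stats=record mode should report them directly for a range of segments
(for example the archived ones), without post-processing a full dump.
Something along these lines (paths are only illustrative):

# counts and sizes per rmgr / record type for the given segment range
pg_xlogdump --stats=record \
    $PGDATA/pg_xlog/000000010000000000000001 \
    $PGDATA/pg_xlog/000000010000000000000010

If you want to inspect the individual hash records instead, pg_xlogdump
-r <rmgr> restricts the output to one resource manager (assuming the
new hash rmgr shows up under a name like "Hash").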

> I've run my original test for a while now and have not seen any problems.
> But I realized I forgot to compile with --enable-cassert, so I will have to
> redo it to make sure the assertion failures have been fixed. In my original
> testing I did very rarely get a deadlock (or some kind of hang), and I
> haven't seen that again so far. It was probably the same source as the one
> Mark observed, and so the same fix.
>

Thanks for the verification.

[1] - https://commitfest.postgresql.org/10/716/
[2] - https://www.postgresql.org/message-id/CAE9k0PkPumi4iWFuD%2BjHHkpcxn531%3DDJ8uH0dctsvF%2BdaZY6yQ%40mail.gmail.com
--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
