Re: [WIP] Effective storage of duplicates in B-tree index.

From: Peter Geoghegan <pg(at)heroku(dot)com>
To: Thom Brown <thom(at)linux(dot)com>
Cc: Anastasia Lubennikova <a(dot)lubennikova(at)postgrespro(dot)ru>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [WIP] Effective storage of duplicates in B-tree index.
Date: 2016-01-28 17:09:36
Message-ID: CAM3SWZSdoPrWy_CZgMSub_f7beZpHmVCD_-TO_TaZ1Kepubpuw@mail.gmail.com
Lists: pgsql-hackers

On Thu, Jan 28, 2016 at 9:03 AM, Thom Brown <thom(at)linux(dot)com> wrote:
> I'm surprised that efficiencies can't be realised beyond this point. Your results show a sweet spot at around 1000 / 10000000, with it getting slightly worse beyond that. I kind of expected a lot of efficiency where all the values are the same, but perhaps that's due to my lack of understanding regarding the way they're being stored.

I think that you'd need an I/O-bound workload to see significant
benefits, which seems unsurprising: random I/O from index writes is a
big problem for us.
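
To give a sense of the kind of test I mean, here is a minimal sketch
(the table name and settings are made up for illustration, echoing
your 1000 / 10000000 case; the point is an index that greatly exceeds
shared_buffers, so that leaf page writes actually hit disk):

-- Duplicate-heavy index: 1000 distinct keys across 10 million rows.
CREATE TABLE dup_test (val integer);
CREATE INDEX dup_test_val_idx ON dup_test (val);

-- With shared_buffers much smaller than the index, these inserts
-- should be dominated by random I/O on the index's leaf pages.
INSERT INTO dup_test (val)
SELECT i % 1000 FROM generate_series(1, 10000000) AS s(i);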

--
Peter Geoghegan
