Re: B-tree cache prefetches

From: Peter Geoghegan <pg(at)bowt(dot)ie>
To: Andrey Borodin <x4mmm(at)yandex-team(dot)ru>
Cc: pgsql-hackers mailing list <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: B-tree cache prefetches
Date: 2018-08-30 18:04:09
Message-ID: CAH2-Wzm8YAsmzywAZjyifrHGtETq0nvJxP9zOsi3_KOwj0-+Yw@mail.gmail.com
Lists: pgsql-hackers

On Thu, Aug 30, 2018 at 10:53 AM, Andrey Borodin <x4mmm(at)yandex-team(dot)ru> wrote:
> The idea is pretty simple - our searches are cache-erasing anyway, so let's try to get at least some benefit by prefetching the possible paths of the binary search.
> And it seems to me that on a simple query
>> insert into x select (random()*1000000)::int from generate_series(1,1e7);
> it brings something like a 2-4% performance improvement on my laptop.
>
> Is there a reason why we do not use __builtin_prefetch? Has anyone tried to use cache prefetching?
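[For readers following along: the idea quoted above can be sketched as a
standalone binary search over a sorted int array that prefetches the
midpoints of both possible next halves before each comparison. This is an
illustrative sketch only, not PostgreSQL's actual _bt_binsrch(); the
function name binsrch_prefetch is made up for this example, and the
prefetch is guarded so the code also compiles where __builtin_prefetch is
unavailable.]

```c
#include <stddef.h>

/*
 * Illustrative sketch: binary search that prefetches the midpoints of
 * both candidate halves of the *next* iteration while comparing at the
 * current midpoint, in the hope that the needed cache line is resident
 * by the time the branch resolves. Returns the index of key, or -1.
 *
 * Note: GCC documents that the address passed to __builtin_prefetch
 * need not be valid, so a prefetch slightly past the array is harmless.
 */
static int
binsrch_prefetch(const int *items, int nitems, int key)
{
	int		low = 0;
	int		high = nitems - 1;

	while (low <= high)
	{
		int		mid = low + (high - low) / 2;

#ifdef __GNUC__
		/* Prefetch the midpoint of the lower half (taken if key < items[mid]) */
		__builtin_prefetch(&items[low + (mid - 1 - low) / 2], 0, 1);
		/* ... and of the upper half (taken if key > items[mid]) */
		__builtin_prefetch(&items[mid + 1 + (high - mid - 1) / 2], 0, 1);
#endif

		if (items[mid] == key)
			return mid;
		else if (items[mid] < key)
			low = mid + 1;
		else
			high = mid - 1;
	}
	return -1;
}
```

[The third argument to __builtin_prefetch (locality, 0..3) is a tuning
knob; 1 here is an arbitrary choice for the sketch. As the reply below
notes, whether this beats the hardware prefetcher is very much
microarchitecture-dependent.]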

I once wrote a patch that used __builtin_prefetch() when fetching
tuples from a tuplesort. It worked reasonably well on my laptop, but
didn't seem to do much on another machine with another
microarchitecture (presumably the server with the alternative
microarchitecture had superior hardware prefetching). The conclusion
was that it wasn't really worth pursuing.

I'm not dismissing your idea. I'm just pointing out that the burden of
proving that explicit prefetching is a good idea is rather
significant. We especially want to avoid something that needs to be
reassessed every couple of years.

--
Peter Geoghegan
