Re: Better tracking of free space during SP-GiST index build

From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, pgsql-hackers(at)postgreSQL(dot)org
Cc: emre(at)hasegeli(dot)com
Subject: Re: Better tracking of free space during SP-GiST index build
Date: 2016-09-21 21:46:35
Message-ID: b4dd6327-a372-21d4-0caf-5b99e7ce0de5@2ndquadrant.com
Lists: pgsql-hackers

On 08/25/2016 03:26 AM, Tomas Vondra wrote:
>
>
> On 08/25/2016 01:45 AM, Tom Lane wrote:
>> Over in the thread about the SP-GiST inet opclass, I threatened to
>> post a patch like this, and here it is.
>>
>> The basic idea is to track more than just the very latest page
>> we've used in each of the page categories that SP-GiST works with.
>> I started with an arrangement that gave an equal number of cache
>> slots to each category, but soon realized that that was dumb,
>> because there are usually way more leaf pages than anything else.
>> So this version has a little table of how many slots to give to
>> each category. The constants could maybe use a bit more fiddling,
>> if we have some more test data sets to try this on.
>>
>> On the IRRExplorer data set we discussed in the other thread, this
>> reduces the index size from 132MB to 120MB. Poking into that more
>> closely with pg_filedump, the total free space within the index
>> drops from 42MB to 28MB. If you think those numbers don't add up,
>> you're right --- this seems to result in more non-leaf tuples than
>> before. I'm not sure why; maybe more aggressive sucking up of free
>> space results in more splits. (Maybe adjustment of the default
>> spgist fillfactor would be in order to counteract that?) But the
>> index search time doesn't seem to be hurt, so perhaps there's
>> nothing to worry about.
>>
>> As coded, this makes no attempt to preferentially select pages with
>> the most or least free space. I don't know if it'd be worth any
>> cycles to do that.
>>
>> I'll put this in the commitfest queue. It could use review from
>> someone with the time and motivation to do performance
>> testing/tuning.
>>
>

I've been toying with this patch a bit today, particularly looking at
(1) sizing of the cache, and (2) how beneficial it'd be to choose pages
from the cache in a smarter way.

(1) sizing of the cache

I wonder why the patch sets the cache size to only 100 items, when we
might fit many more entries into the ~8kB limit. Sure, it's going to be
more expensive to maintain the cache, but if it results in a smaller
index, it might be worth it. I've tried increasing the cache size to 768
entries, with the vast majority of them (~600) allocated to leaf pages.
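For concreteness, the sizing I tried looks roughly like this - the
category names and the exact split are illustrative, not the patch's
actual table:

/*
 * Illustrative sizing of the last-used-pages cache, giving most slots
 * to leaf pages.  The categories and counts here are assumptions for
 * the sake of the example, not the patch's actual code.
 */
#define LUP_CACHE_SLOTS     768

typedef enum LupCategory
{
    LUP_INNER = 0,          /* inner pages, non-null */
    LUP_LEAF,               /* leaf pages, non-null */
    LUP_INNER_NULLS,        /* inner pages, nulls */
    LUP_LEAF_NULLS,         /* leaf pages, nulls */
    LUP_NUM_CATEGORIES
} LupCategory;

/* leaf pages vastly outnumber everything else */
static const int lupSlotsPerCategory[LUP_NUM_CATEGORIES] = {
    96,                     /* LUP_INNER */
    600,                    /* LUP_LEAF */
    8,                      /* LUP_INNER_NULLS */
    64                      /* LUP_LEAF_NULLS */
};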

Sadly, this seems to only increase the CREATE INDEX duration a bit,
without making the index significantly smaller (still ~120MB).

(2) page selection

I do believe this is due to the current simple selection of pages from
the cache - we select the pages more or less randomly (as long as the
page has enough free space). My understanding is that this leads to
pages having roughly the same amount of free space. Subsequently, when
SpGistSetLastUsedPage() selects a page to evict from the cache, it only
finds pages with roughly the same amount of free space, and even if it
picks the fullest one, it wastes quite a bit of space.

I do think selecting the page with the least free space would save
additional space. Instead of making SpGistGetBuffer() more complicated,
I've simply added pg_qsort() calls in a few places, sorting the cache
by free space with unused slots at the end. Clearly, this is quite
expensive, and a proper patch would have to do it differently (e.g. by
maintaining the order incrementally), but OTOH it'd allow some
optimizations in SpGistGetBuffer() - the first page with enough free
space would also be the one with the smallest amount of free space
(more or less).
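The comparator is the only interesting part - something along these
lines, assuming the cache entries look like SpGistLastUsedPage (a block
number plus a free-space estimate):

#include "postgres.h"
#include "storage/block.h"

/*
 * Sketch of a comparator for pg_qsort(), ordering cache entries by
 * ascending free space and pushing unused slots to the end.  The entry
 * layout is my assumption about SpGistLastUsedPage, not a copy of it.
 */
typedef struct LupEntry
{
    BlockNumber blkno;      /* page, or InvalidBlockNumber if unused */
    int         freeSpace;  /* estimated free bytes on the page */
} LupEntry;

static int
lup_entry_cmp(const void *a, const void *b)
{
    const LupEntry *ea = (const LupEntry *) a;
    const LupEntry *eb = (const LupEntry *) b;

    /* unused slots sort after all used ones */
    if (ea->blkno == InvalidBlockNumber)
        return (eb->blkno == InvalidBlockNumber) ? 0 : 1;
    if (eb->blkno == InvalidBlockNumber)
        return -1;

    /* fullest pages (least free space) first */
    if (ea->freeSpace != eb->freeSpace)
        return (ea->freeSpace < eb->freeSpace) ? -1 : 1;
    return 0;
}

/* e.g. pg_qsort(cache->entries, LUP_CACHE_SLOTS, sizeof(LupEntry), lup_entry_cmp); */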

This actually helped a bit, and the index size dropped by ~2MB. So not
bad, but nowhere close to the initial 132MB -> 120MB improvement.

The following table shows comparison of index sizes, and also the effect
of fillfactor=100:

master: 132MB
master + fillfactor=100: 124MB

patch: 120MB
patch + fillfactor=100: 109MB

patch + 768 items + selection: 117MB
patch + 768 items + selection + fillfactor=100: 103MB

It's worth mentioning the spgist fillfactor is a bit crude, most likely
thanks to splits - e.g. the 109MB index still contains ~10MB of free
space on the pages (measured using pageinspect as the sum of
upper - lower), so almost 10%. Perhaps it really is time to increase
the spgist default fillfactor?

(3) random comments

It seems the patch keeps new/empty/deleted pages in the cache, still
segregated by their original category. Is that intentional, or should
SpGistSetLastUsedPage() remove such pages from the cache? Or maybe move
them into a special category? It's true we'll reuse those pages, as
allocNewBuffer() allocates the buffer directly, but such pages are
unlikely to get evicted from the cache due to their high freeSpace
value (despite possibly having been reused already).
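If we wanted SpGistSetLastUsedPage() to skip such pages, a check along
these lines might do - a sketch only, and I'm assuming the current page
macros here:

#include "postgres.h"
#include "access/spgist_private.h"
#include "storage/bufpage.h"

/*
 * Sketch: refuse to cache pages that are new, empty or deleted, since
 * allocNewBuffer() will pick those up directly anyway.  Illustration
 * of the idea above, not the patch's actual code.
 */
static bool
lup_page_is_cachable(Page page)
{
    if (PageIsNew(page) || PageIsEmpty(page))
        return false;
    if (SpGistPageIsDeleted(page))  /* assumes the SPGIST_DELETED flag */
        return false;
    return true;
}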

Similarly for completely full pages (with freeSpace==0) - does it make
sense to keep them in the cache? It's probably harmless, though, as
those pages will get evicted first if needed.

Overall, I think the patch is good - it may be possible to improve it in
the future, but it's a solid improvement.

One thing I'd change is making the SpGistLUPCache dynamic, i.e. storing
the size and lastUsedPagesMap on the meta page. That should allow us to
resize the cache and tweak lastUsedPagesMap in the future.
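Something of this shape, say (reusing the entry struct and category
enum from the sketches above; field names hypothetical):

/*
 * Sketch of a resizable last-used-pages cache stored on the meta page:
 * the total size and the per-category map live on disk next to the
 * entries, so both can be changed without an on-disk format break.
 */
typedef struct SpGistLUPCacheDynamic
{
    uint32      size;       /* number of entry slots that follow */
    uint32      categoryStart[LUP_NUM_CATEGORIES]; /* first slot per category */
    LupEntry    entries[FLEXIBLE_ARRAY_MEMBER];    /* 'size' entries */
} SpGistLUPCacheDynamic;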

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
