From: "Andrey M. Borodin" <x4mmm@yandex-team.ru>
To: Heikki Linnakangas <hlinnaka@iki.fi>
Cc: Pavel Borisov <pashkin.elfe@gmail.com>, Thomas Munro <thomas.munro@gmail.com>, Michael Paquier <michael@paquier.xyz>, Alexander Korotkov <a.korotkov@postgrespro.ru>, Darafei Komяpa Praliaskouski <me@komzpa.net>, pgsql-hackers <pgsql-hackers@postgresql.org>
Subject: Re: Yet another fast GiST build
> On 7 Sept 2020, at 19:10, Heikki Linnakangas <hlinnaka@iki.fi> wrote:
> On 07/09/2020 13:59, Pavel Borisov wrote:
>>>>> I suppose there is a big jump in integer value (whether signed or
>>>>> unsigned) as you cross from positive to negative floats, and then the
>>>>> sort order is reversed. I have no idea if either of those things is a
>>>>> problem worth fixing. That made me wonder if there might also be an
>>> I took a stab at fixing this, see attached patch (applies on top of your
>>> patch v14).
>>> To evaluate this, I used the other attached patch to expose the zorder
>>> function to SQL, and plotted points around zero with gnuplot. See the
>>> attached two images, one with patch v14, and the other one with this patch.
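(For reference, the usual trick for making IEEE-754 floats compare correctly as unsigned integers, which is what I understand Heikki's fix to do, looks roughly like the Python sketch below. This is an illustration of the general technique, not the actual patch code.)

```python
import struct

def float_to_sortable_uint32(f):
    # Reinterpret the float's bits as an unsigned 32-bit integer.
    u = struct.unpack('<I', struct.pack('<f', f))[0]
    if u & 0x80000000:
        # Negative floats: flip all bits, so larger magnitudes sort lower
        # and the whole negative range sorts below the positives.
        return u ^ 0xFFFFFFFF
    # Non-negative floats: set the sign bit, so they sort above negatives.
    return u | 0x80000000
```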
>> I tested the sorted GiST build with points distributed only in the
>> 2nd quadrant and with points in all 4 quadrants, and it appears that this
>> abnormality doesn't hurt as much as Andrey supposed. But Heikki's patch
>> is a really nice way to avoid what can be avoided, and I'd like it to be
>> included together with Andrey's patch.
> Thanks! Did you measure the quality of the built index somehow? The
> ordering shouldn't make any difference to the build speed, but it
> affects the shape of the resulting index and the speed of queries
> against it.
I've tried to benchmark the build-time difference between v14 and v15. v15 seems to be slightly slower, but the difference is negligible.
> I played with some simple queries like this:
> explain (analyze, buffers) select count(*) from points_good where p <@
> box(point(50, 50), point(75, 75));
To observe a difference in index scans, the query should touch all 4 quadrants, e.g. search within box(point(-25,-25), point(25,25)).
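(The zorder function under discussion is essentially a Morton interleave of the two coordinates. A minimal sketch of that interleave, assuming the coordinates have already been mapped to sortable unsigned integers; this is a hypothetical helper for intuition, not the patch's actual code:)

```python
def interleave_bits(x, y, bits=16):
    # Build a Z-order (Morton) code by interleaving the low `bits` bits
    # of x and y: x contributes the even bit positions, y the odd ones.
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)
        z |= ((y >> i) & 1) << (2 * i + 1)
    return z
```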
> and looking at the "Buffers" line for how many pages were accessed.
> There doesn't seem to be any consistent difference between v14 and my
> fix. So I concur it doesn't seem to matter much.
> I played some more with plotting the curve. I wrote a little python
> program to make an animation of it, and also simulated how the points
> would be divided into pages, assuming that each GiST page can hold 200
> tuples (I think the real number is around 150 with default page size).
> In the animation, the leaf pages appear as rectangles as it walks
> through the Z-order curve. This is just a simulation by splitting all
> the points into batches of 200 and drawing a bounding box around each
> batch. I haven't checked the actual pages as the GiST creates, but I
> think this gives a good idea of how it works.
> The animation shows that there's quite a lot of overlap between the
> pages. It's not necessarily this patch's business to try to improve
> that, and the non-sorting index build isn't perfect either. But it
> occurs to me that there's maybe one pretty simple trick we could do:
> instead of blindly filling the leaf pages in Z-order, collect tuples
> into a larger buffer, in Z-order. I'm thinking 32 pages worth of tuples,
> or something in that ballpark, or maybe go all the way up to work_mem.
> When the buffer fills up, call the picksplit code to divide the buffer
> into the actual pages, and flush them to disk. If you look at the
> animation and imagine that you would take a handful of pages in the
> order they're created, and re-divide the points with the split
> algorithm, there would be much less overlap.
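(The buffering idea above could be simulated roughly as follows. chunk_split() is only a stand-in for the real opclass picksplit, and all names here are hypothetical:)

```python
def chunk_split(buf, page_cap):
    # Stand-in for the real opclass picksplit: just chunk the buffer.
    # The actual proposal would re-divide the buffer geometrically.
    return [buf[i:i + page_cap] for i in range(0, len(buf), page_cap)]

def flush_in_batches(tuples_in_zorder, page_cap=200, buffer_pages=32):
    # Accumulate buffer_pages worth of tuples in Z-order, then let the
    # split code carve the buffer into actual leaf pages before flushing.
    buf, pages = [], []
    for t in tuples_in_zorder:
        buf.append(t)
        if len(buf) == page_cap * buffer_pages:
            pages.extend(chunk_split(buf, page_cap))
            buf = []
    if buf:
        pages.extend(chunk_split(buf, page_cap))
    return pages
```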
Animation looks cool! It really pinpoints the inefficiency of the resulting MBRs.
But in the R*-tree, one of Beckmann's points was that overlap optimisation is worth doing at higher levels, not lower ones.
But we can do this for splits on each level, I think. We do not know the tree depth in advance to divide maintenance_work_mem among the levels. But probably we don't need to: allocate half of the budget to the first level, a quarter to the second, 1/8 to the third, and so on, until a level's share is down to one page. Should we take allocations inside picksplit() into account?
The more I think about it, the better the idea seems to me.
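(The halving scheme I describe could be sketched like this, with the budget expressed in pages. Purely hypothetical, not actual code:)

```python
def level_buffers(budget_pages):
    # Give half the budget to level 1, a quarter to level 2, and so on,
    # stopping once a level's share is down to a single page.
    shares = []
    share = budget_pages // 2
    while share > 1:
        shares.append(share)
        share //= 2
    shares.append(1)
    return shares
```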
BTW, I've found one more bug in the patch: it writes WAL even for unlogged tables. I'm not sending a patch because the changes are trivial, and we already have a lengthy patchset spread across different messages.
Also, to avoid a critical section we can use log_newpage() instead of log_newpage_buffer().
Best regards, Andrey Borodin.