Re: [WIP] Effective storage of duplicates in B-tree index.

From: Thom Brown <thom(at)linux(dot)com>
To: Anastasia Lubennikova <a(dot)lubennikova(at)postgrespro(dot)ru>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [WIP] Effective storage of duplicates in B-tree index.
Date: 2016-01-29 17:43:02
Message-ID: CAA-aLv419NAeqTaBOOhPg2EGsCDWasa7Nz+K-LK4dB59s0p9PA@mail.gmail.com
Lists: pgsql-hackers

On 29 January 2016 at 16:50, Anastasia Lubennikova
<a(dot)lubennikova(at)postgrespro(dot)ru> wrote:
> 29.01.2016 19:01, Thom Brown:
>>
>> On 29 January 2016 at 15:47, Aleksander Alekseev
>> <a(dot)alekseev(at)postgrespro(dot)ru> wrote:
>>>
>>> I tested this patch on x64 and ARM servers for a few hours today. The
>>> only problem I could find is that INSERT works considerably slower after
>>> applying the patch. Besides that, everything looks fine - no crashes,
>>> tests pass, memory doesn't seem to leak, etc.
>
> Thank you for testing. I rechecked that, and insertions really are very
> slow. It seems like a bug.
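>
> A quick way to reproduce the slowdown is to insert a lot of duplicate
> values into an indexed column, since duplicate handling is what the
> patch changes (the table and column names below are just placeholders):
>
> CREATE TABLE t (id integer);
> CREATE INDEX ON t (id);
> \timing on
> INSERT INTO t SELECT i % 10 FROM generate_series(1, 1000000) i;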
>
>>>> Okay, now for some badness. I've restored a database containing 2
>>>> tables, one 318MB, another 24kB. The 318MB table contains 5 million
>>>> rows with a sequential id column. I get a problem if I try to delete
>>>> many rows from it:
>>>> # delete from contacts where id % 3 != 0 ;
>>>> WARNING: out of shared memory
>>>> WARNING: out of shared memory
>>>> WARNING: out of shared memory
>>>
>>> I didn't manage to reproduce this. Thom, could you describe the exact
>>> steps to reproduce this issue, please?
>>
>> Sure, I used my pg_rep_test tool to create a primary (pg_rep_test
>> -r0), which creates an instance with a custom config, which is as
>> follows:
>>
>> shared_buffers = 8MB
>> max_connections = 7
>> wal_level = 'hot_standby'
>> cluster_name = 'primary'
>> max_wal_senders = 3
>> wal_keep_segments = 6
>>
>> Then create a pgbench data set (I didn't originally use pgbench, but
>> you can get the same results with it):
>>
>> createdb -p 5530 pgbench
>> pgbench -p 5530 -i -s 100 pgbench
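>>
>> (At scale factor 100, pgbench_accounts has 10,000,000 rows, so the
>> DELETE below, which removes every row with aid % 3 != 0, should
>> affect two thirds of them, i.e. 6,666,667 rows.)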
>>
>> And delete some stuff:
>>
>> thom(at)swift:~/Development/test$ psql -p 5530 pgbench
>> Timing is on.
>> psql (9.6devel)
>> Type "help" for help.
>>
>>
>> ➤ psql://thom(at)[local]:5530/pgbench
>>
>> # DELETE FROM pgbench_accounts WHERE aid % 3 != 0;
>> WARNING: out of shared memory
>> WARNING: out of shared memory
>> WARNING: out of shared memory
>> WARNING: out of shared memory
>> WARNING: out of shared memory
>> WARNING: out of shared memory
>> WARNING: out of shared memory
>> ...
>> WARNING: out of shared memory
>> WARNING: out of shared memory
>> DELETE 6666667
>> Time: 22218.804 ms
>>
>> There were 358 lines of that warning message. I don't get these
>> messages without the patch.
>>
>> Thom
>
>
> Thank you for this report.
> I tried to reproduce it, but I couldn't. Debug will be much easier now.
>
> I hope I'll fix these issues within the next few days.
>
> BTW, I found a silly mistake: the previous patch contained some unrelated
> changes. I fixed that in the new version (attached).

Thanks. Well, I've tested this latest patch, and the warnings are no
longer generated. However, the index sizes suggest that the patch
isn't doing its job, so I'm wondering whether you removed too much
from it.
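
For anyone who wants to check, the index sizes can be compared with
something like this (the LIKE pattern just matches the pgbench
indexes):

SELECT relname, pg_size_pretty(pg_relation_size(oid))
FROM pg_class
WHERE relkind = 'i' AND relname LIKE 'pgbench%';

Comparing that output between patched and unpatched builds loaded with
the same data makes the difference, or the lack of one, easy to see.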

Thom
