Re: [WIP] Effective storage of duplicates in B-tree index.

From: Aleksander Alekseev <a(dot)alekseev(at)postgrespro(dot)ru>
To: Anastasia Lubennikova <a(dot)lubennikova(at)postgrespro(dot)ru>
Cc: pgsql-hackers(at)postgresql(dot)org, Thom Brown <thom(at)linux(dot)com>
Subject: Re: [WIP] Effective storage of duplicates in B-tree index.
Date: 2016-01-29 15:47:33
Message-ID: 20160129184733.2ca9026a@fujitsu
Lists: pgsql-hackers

I tested this patch on x64 and ARM servers for a few hours today. The
only problem I found is that INSERT runs considerably slower after
applying the patch. Besides that, everything looks fine: no crashes,
the tests pass, memory doesn't seem to leak, etc.
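
For what it's worth, here is roughly the kind of test I used to notice
the INSERT slowdown. This is a minimal sketch of my own synthetic
workload, not something shipped with the patch; the table and index
names are mine, and I timed the INSERT with \timing in psql:

    -- btree index over a column with many duplicates,
    -- which is the case this patch targets
    CREATE TABLE dup_test (k int, v text);
    CREATE INDEX dup_test_k_idx ON dup_test (k);

    -- 1M rows, only 1000 distinct key values;
    -- run once on master and once with the patch applied
    INSERT INTO dup_test
        SELECT i % 1000, md5(i::text)
        FROM generate_series(1, 1000000) AS i;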

> Okay, now for some badness. I've restored a database containing 2
> tables, one 318MB, another 24kB. The 318MB table contains 5 million
> rows with a sequential id column. I get a problem if I try to delete
> many rows from it:
> # delete from contacts where id % 3 != 0 ;
> WARNING: out of shared memory
> WARNING: out of shared memory
> WARNING: out of shared memory

I didn't manage to reproduce this. Thom, could you describe the exact
steps to reproduce this issue, please? Below is what I tried.
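
For the record, I reconstructed the table from your description (5M
rows, a sequential id column; the name column and its contents are my
guesses), and the DELETE completed without warnings:

    CREATE TABLE contacts (id serial PRIMARY KEY, name text);
    INSERT INTO contacts (name)
        SELECT md5(i::text)
        FROM generate_series(1, 5000000) AS i;

    DELETE FROM contacts WHERE id % 3 != 0;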
