BUG #15518: intarray index crashes hard

From: PG Bug reporting form <noreply(at)postgresql(dot)org>
To: pgsql-bugs(at)lists(dot)postgresql(dot)org
Cc: andrew(at)tao11(dot)riddles(dot)org(dot)uk
Subject: BUG #15518: intarray index crashes hard
Date: 2018-11-22 22:28:53
Message-ID: 15518-799e426c3b4f8358@postgresql.org
Lists: pgsql-bugs

The following bug has been logged on the website:

Bug reference: 15518
Logged by: Andrew Gierth
Email address: andrew(at)tao11(dot)riddles(dot)org(dot)uk
PostgreSQL version: 11.1
Operating system: any
Description:

Based on a report from IRC:

create extension intarray;
create table ibreak (id integer, a integer[]);
create index on ibreak using gist (a);
insert into ibreak
select i, array(select hashint4(i*j) from generate_series(1,100) j)
from generate_series(1,20) i;
-- segfault

This happens because the default "small" intarray opclass, gist__int_ops,
has wholly inadequate sanity checks on the data; while it will reject
individual rows with too many distinct values, it will happily construct
compressed non-leaf keys that will crash the decompression code due to
overflowing an "int", or produce an unhelpful memory allocation error, or
consume vast amounts of CPU time without checking for interrupts.
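
To give a sense of the scale involved (a rough illustration on my part, not
a precise account of the internals): hashint4() scatters the test data over
essentially the whole int4 range, so once the keys of many rows are unioned
and compressed into ranges, expanding those ranges again means counting out
integers in the billions:

-- the hashed values span almost the whole int4 range, so a ranged
-- union key can cover billions of integers once it is expanded again
select min(hashint4(i*j)) as lo,
       max(hashint4(i*j)) as hi,
       max(hashint4(i*j))::bigint - min(hashint4(i*j)) as approx_span
from generate_series(1,20) i, generate_series(1,100) j;

The span will be close to the full four-billion width of the int4 domain,
i.e. well past what fits in an "int".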

This isn't new; the issue appears to have existed for as long as intarray
has.

Obviously it's not intended that gist__int_ops should actually work with
data of this kind - that's what gist__intbig_ops is for. But it's not
reasonable for it to crash rather than returning an error.
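
For anyone who hits this before a fix lands, naming the larger opclass
explicitly instead of taking the default should sidestep the problem (using
the table from the example above):

create index on ibreak using gist (a gist__intbig_ops);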

I'm working on a patch.
