Re: BUG #14722: Segfault in tuplesort_heap_siftup, 32 bit overflow

From: Heikki Linnakangas <hlinnaka(at)iki(dot)fi>
To: Andres Freund <andres(at)anarazel(dot)de>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Sergey Koposov <skoposov(at)cmu(dot)edu>, "pg(at)bowt(dot)ie" <pg(at)bowt(dot)ie>, "pgsql-bugs(at)postgresql(dot)org" <pgsql-bugs(at)postgresql(dot)org>
Subject: Re: BUG #14722: Segfault in tuplesort_heap_siftup, 32 bit overflow
Date: 2017-07-12 13:15:10
Message-ID: 3de75af4-a49f-36d1-347f-f170693ce6f5@iki.fi
Lists: pgsql-bugs

On 07/06/2017 01:14 AM, Andres Freund wrote:
> On 2017-07-05 18:03:56 -0400, Tom Lane wrote:
>> I don't like s/int/int64/g as a fix for this. That loop is probably
>> a hot spot, and this fix is going to be expensive on any machine where
>> int64 isn't the native word width. How about something like this instead:
>>
>> - int j = 2 * i + 1;
>> + int j;
>>
>> + if (unlikely(i > INT_MAX / 2))
>> + break; /* if j would overflow, we're done */
>> + j = 2 * i + 1;
>> if (j >= n)
>> break;
>
> Isn't an added conditional likely going to be more costly than the
> s/32/64/ bit calculations on the majority of machines pg runs on? I'm
> quite doubtful that it's worth catering for the few cases where that's
> really slow.

Another option is to use "unsigned int", on the assumption that UINT_MAX >=
INT_MAX * 2 + 1. And to eliminate that assumption, we could use (UINT_MAX
- 1) / 2 as the maximum size of the memtuples array, rather than INT_MAX.
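
Roughly, as a self-contained sketch, with a plain int min-heap standing in for
the SortTuple array and MAX_HEAP_SIZE standing in for the memtuples size cap
(names are just for illustration, not the actual tuplesort code):

#include <limits.h>
#include <stdio.h>

/* Cap on heap size so that 2 * i + 1 can never wrap around in unsigned int. */
#define MAX_HEAP_SIZE ((UINT_MAX - 1) / 2)

/*
 * Sift the hole at position 0 down, then place "tuple" into it.
 * Same shape as the tuplesort loop, but with unsigned indexes: for any
 * i < n <= MAX_HEAP_SIZE, the expression 2 * i + 1 stays in range.
 */
static void
heap_siftup(int *heap, unsigned int n, int tuple)
{
    unsigned int i = 0;

    for (;;)
    {
        unsigned int j = 2 * i + 1;     /* left child; cannot overflow */

        if (j >= n)
            break;
        if (j + 1 < n && heap[j + 1] < heap[j])
            j++;                        /* pick the smaller child */
        if (tuple <= heap[j])
            break;
        heap[i] = heap[j];              /* move child up into the hole */
        i = j;
    }
    heap[i] = tuple;
}

int
main(void)
{
    int          heap[] = {2, 5, 3, 9, 6, 4};
    unsigned int n = 5;                 /* last element is the one to reinsert */

    heap_siftup(heap, n, heap[n]);
    for (unsigned int k = 0; k < n; k++)
        printf("%d ", heap[k]);
    printf("\n");
    return 0;
}

With the array capped at (UINT_MAX - 1) / 2 elements, 2 * i + 1 can't wrap
around, and there's no extra branch in the loop.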

- Heikki
