Add defenses against integer overflow in dynahash numbuckets calculations.
The dynahash code requires the number of buckets in a hash table to fit
in an int; but since we calculate the desired hash table size dynamically,
there are various scenarios where we might calculate too large a value.
The resulting overflow can lead to infinite loops, division-by-zero
crashes, etc. I (tgl) had previously installed some defenses against that
in commit 299d1716525c659f0e02840e31fbe4dea3, but that covered only one
call path. Moreover, it worked by limiting the request size to work_mem,
but on a 64-bit machine it's possible to set work_mem high enough that the
problem appears anyway. So let's fix the problem at the root by installing
limits in the dynahash.c functions themselves.
Trouble report and patch by Jeff Davis.
src/backend/executor/nodeHash.c | 4 ++-
src/backend/utils/hash/dynahash.c | 49 ++++++++++++++++++++++++++++--------
2 files changed, 41 insertions(+), 12 deletions(-)