From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Subject: Removing "long int"-related limit on hash table sizes
Per the discussion at , users on Windows are seeing nasty performance
losses in v13/v14 for hash aggregations that needed somewhat more than
2GB in prior releases. That's because those aggregations now spill to
disk where they did not before. The easy answer of "raise
hash_mem_multiplier" doesn't help, because on Windows the product of
work_mem and hash_mem_multiplier is clamped to 2GB, thanks to the ancient
decision to do a lot of memory-space-related calculations in "long int",
which is only 32 bits on Win64.
While I don't personally have the interest to fix that altogether,
it does seem like we've got a performance regression that we ought
to do something about immediately. So I took a look at getting rid of
this restriction for calculations associated with hash_mem_multiplier,
and it doesn't seem to be too bad. I propose the attached patch.
(This is against HEAD; there are minor conflicts in v13 and v14.)
A couple of notes:
* I did not change most of the comments referring to "hash_mem",
even though that's not really a thing anymore. They seem readable
enough anyway, and I failed to think of a reasonably-short substitute.
* We should drop get_hash_mem() altogether in HEAD and maybe v14.
I figure we'd better leave it available in v13, though, in case
any outside code is using it.
regards, tom lane