Re: A better way than tweaking NTUP_PER_BUCKET

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Stephen Frost <sfrost(at)snowman(dot)net>
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: A better way than tweaking NTUP_PER_BUCKET
Date: 2013-06-22 18:58:50
Message-ID: CA+Tgmoa5Z4Rv6rrm_tPQsUUTSB-5CKoy=JHf_d46gos=S6vZ=A@mail.gmail.com
Lists: pgsql-hackers

On Sat, Jun 22, 2013 at 9:48 AM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
>> The correct calculation that would match the objective set out in the
>> comment would be
>>
>> dbuckets = (hash_table_bytes / tupsize) / NTUP_PER_BUCKET;
>
> This looks to be driving the size of the hash table off of "how many
> tuples of this size can I fit into memory?" while ignoring how many
> rows we actually have to hash. Consider a work_mem of 1GB with a small
> number of rows to hash -- say, 50. With a tupsize of 8 bytes, we'd be
> creating a hash table sized for some 13M buckets.
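(Assuming NTUP_PER_BUCKET = 10, that works out to 1GB / 8 bytes / 10 =
134,217,728 / 10, or roughly 13.4M buckets, for just 50 rows.)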

This is a fair point, but I still think Simon is on to something.
Letting the number of buckets ramp up when there's ample memory seems
like a broadly sensible strategy; we might just need to put a floor on
the effective load factor so that a tiny input doesn't end up with
millions of buckets.
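
To make that concrete, here is a minimal sketch (not the actual
ExecChooseHashTableSize() logic; MIN_TUPLES_PER_BUCKET is an invented
name for the load-factor floor):

/*
 * Illustrative sketch only: let nbuckets grow toward what spare memory
 * could justify, but cap it so the effective load factor never drops
 * below MIN_TUPLES_PER_BUCKET tuples per bucket.
 */
#include <stdio.h>
#include <math.h>

#define NTUP_PER_BUCKET        10   /* current target load factor */
#define MIN_TUPLES_PER_BUCKET  1    /* invented floor on load factor */

static long
choose_nbuckets(double ntuples, long tupsize, long hash_table_bytes)
{
    /* buckets needed to hit the target load factor for the estimated rows */
    long    dbuckets = (long) ceil(ntuples / NTUP_PER_BUCKET);

    /* how many buckets the available memory could justify */
    long    mem_buckets = (hash_table_bytes / tupsize) / NTUP_PER_BUCKET;

    /* never more buckets than the load-factor floor allows */
    long    max_buckets = (long) ceil(ntuples / MIN_TUPLES_PER_BUCKET);

    if (mem_buckets > dbuckets)
        dbuckets = mem_buckets;         /* ramp up when memory is ample */
    if (dbuckets > max_buckets)
        dbuckets = max_buckets;         /* but respect the floor */

    return (dbuckets < 1) ? 1 : dbuckets;
}

int
main(void)
{
    /* Stephen's example: 50 rows, 8-byte tuples, 1GB -> 50 buckets, not ~13M */
    printf("%ld\n", choose_nbuckets(50, 8, 1024L * 1024 * 1024));
    return 0;
}

With the floor at one tuple per bucket, the pathological case above
collapses to 50 buckets, while a genuinely large build side still gets
the benefit of the extra buckets.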

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
