Re: hash_search and out of memory

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Hitoshi Harada <umi(dot)tanuki(at)gmail(dot)com>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: hash_search and out of memory
Date: 2012-10-19 18:40:18
Message-ID: 2906.1350672018@sss.pgh.pa.us
Lists: pgsql-hackers

Hitoshi Harada <umi(dot)tanuki(at)gmail(dot)com> writes:
> On Thu, Oct 18, 2012 at 8:35 AM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> I'm not terribly comfortable with trying to use a PG_TRY block to catch
>> an OOM error - there are too many ways that could break, and this code
>> path is by definition not very testable. I think moving up the
>> expand_table action is probably the best bet. Will you submit a patch?

> Here it is. I factored out the bucket-finding code so the bucket can be
> re-calculated after expansion.

I didn't like that too much. I think a better solution is just to do
the table expansion at the very start of the function, along the lines
of the attached patch. This requires an extra test of the "action"
parameter, but I think that's probably cheaper than an extra function
call. It's definitely cheaper than recalculating the hash, etc., when
a split does occur.
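
To illustrate the shape of that rearrangement, here is a minimal
standalone sketch. It is not the attached hashoom-2.patch, and the
names (hashtab, expand_table, ACTION_ENTER) are illustrative
stand-ins rather than the real dynahash identifiers. The point it
shows: grow the table before locating the bucket, guarded by a test
of the action parameter, so a split can never invalidate the bucket
about to be used, and an allocation failure leaves the table
untouched.

#include <stdbool.h>
#include <stdlib.h>

typedef enum { ACTION_FIND, ACTION_ENTER } hash_action;

typedef struct entry
{
    struct entry *next;
    long          key;
} entry;

typedef struct hashtab
{
    entry **buckets;            /* assumed initialized with nbuckets > 0 */
    long    nbuckets;
    long    nentries;
} hashtab;

/* Double the bucket array; on OOM return false and leave the table as-is. */
static bool
expand_table(hashtab *tab)
{
    long    newsize = tab->nbuckets * 2;
    entry **newbuckets = calloc(newsize, sizeof(entry *));

    if (newbuckets == NULL)
        return false;

    for (long i = 0; i < tab->nbuckets; i++)
    {
        entry *e = tab->buckets[i];

        while (e != NULL)
        {
            entry *next = e->next;
            long   b = e->key % newsize;

            e->next = newbuckets[b];
            newbuckets[b] = e;
            e = next;
        }
    }
    free(tab->buckets);
    tab->buckets = newbuckets;
    tab->nbuckets = newsize;
    return true;
}

entry *
hash_search(hashtab *tab, long key, hash_action action, bool *found)
{
    /*
     * Expand up front, and only when the caller wants an insertion: this is
     * the extra test of "action".  If expansion fails, the old table is
     * still intact and we just keep using it.
     */
    if (action == ACTION_ENTER && tab->nentries >= tab->nbuckets)
        (void) expand_table(tab);

    /* The bucket is computed after any split, so it cannot go stale. */
    long    bucket = key % tab->nbuckets;
    entry  *cur = tab->buckets[bucket];

    while (cur != NULL && cur->key != key)
        cur = cur->next;

    *found = (cur != NULL);
    if (cur != NULL || action == ACTION_FIND)
        return cur;

    /* ACTION_ENTER and not found: link in a new entry. */
    cur = malloc(sizeof(entry));
    if (cur == NULL)
        return NULL;            /* report OOM; table state is unchanged */
    cur->key = key;
    cur->next = tab->buckets[bucket];
    tab->buckets[bucket] = cur;
    tab->nentries++;
    return cur;
}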

regards, tom lane

Attachment: hashoom-2.patch (text/x-patch, 3.2 KB)
