Re: hash_search and out of memory

From: Hitoshi Harada <umi(dot)tanuki(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: hash_search and out of memory
Date: 2012-10-19 03:31:21
Message-ID: CAP7QgmmtEMPDDka=TViHe-Ej5KZUAoJJrJ2W23Zi=3=cAJ_WpA@mail.gmail.com
Lists: pgsql-hackers

On Thu, Oct 18, 2012 at 8:35 AM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> I wrote:
>> Hitoshi Harada <umi(dot)tanuki(at)gmail(dot)com> writes:
>>> If OOM happens during expand_table() in hash_search_with_hash_value()
>>> for RelationCacheInsert,
>
> the palloc-based allocator does throw
> errors. I think that when that was designed, we were thinking that
> palloc-based hash tables would be thrown away anyway after an error,
> but of course that's not true for long-lived tables such as the relcache
> hash table.
>
> I'm not terribly comfortable with trying to use a PG_TRY block to catch
> an OOM error - there are too many ways that could break, and this code
> path is by definition not very testable. I think moving up the
> expand_table action is probably the best bet. Will you submit a patch?

Here it is. I factored out the bucket-finding code so it can be
recalculated after expansion.
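
A minimal, self-contained sketch of that reordering (illustrative only --
it is not the attached patch and is far simpler than dynahash.c): the
table is expanded before the new entry is created or linked in, and the
bucket is looked up only afterwards, so an allocation failure during
expansion leaves the table exactly as it was.

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct Entry
    {
        struct Entry *next;
        long   key;
    } Entry;

    typedef struct HashTable
    {
        Entry **buckets;
        size_t  nbuckets;
        size_t  nentries;
    } HashTable;

    static size_t
    bucket_for(size_t nbuckets, long key)
    {
        /* toy hash; the real code uses a proper hash and bucket masking */
        return (size_t) key % nbuckets;
    }

    /* Grow the bucket array; on failure the table is left unmodified. */
    static bool
    expand_table(HashTable *ht)
    {
        size_t  newsize = ht->nbuckets * 2;
        Entry **newbuckets = calloc(newsize, sizeof(Entry *));
        size_t  i;

        if (newbuckets == NULL)
            return false;       /* nothing has been changed yet */

        for (i = 0; i < ht->nbuckets; i++)
        {
            Entry *e = ht->buckets[i];

            while (e != NULL)
            {
                Entry *next = e->next;
                size_t b = bucket_for(newsize, e->key);

                e->next = newbuckets[b];
                newbuckets[b] = e;
                e = next;
            }
        }
        free(ht->buckets);
        ht->buckets = newbuckets;
        ht->nbuckets = newsize;
        return true;
    }

    /*
     * Insert a key.  Expansion happens before the entry is created or
     * linked, and the bucket is computed only after any expansion, so a
     * failed expansion cannot leave a half-initialized entry in the table.
     */
    static bool
    insert_key(HashTable *ht, long key)
    {
        Entry  *e;
        size_t  b;

        if (ht->nentries + 1 > ht->nbuckets && !expand_table(ht))
            return false;       /* table still valid and unchanged */

        e = malloc(sizeof(Entry));
        if (e == NULL)
            return false;

        b = bucket_for(ht->nbuckets, key);  /* (re)computed after expansion */
        e->key = key;
        e->next = ht->buckets[b];
        ht->buckets[b] = e;
        ht->nentries++;
        return true;
    }

    int
    main(void)
    {
        HashTable ht = {0};
        long    k;

        ht.nbuckets = 4;
        ht.buckets = calloc(ht.nbuckets, sizeof(Entry *));
        if (ht.buckets == NULL)
            return 1;
        for (k = 0; k < 32; k++)
            if (!insert_key(&ht, k))
                return 1;
        printf("%zu entries in %zu buckets\n", ht.nentries, ht.nbuckets);
        return 0;
    }

In dynahash.c the same principle applies, with the added wrinkle that the
hash value is already in hand and only the bucket/segment lookup needs to
be redone after the expansion, which is what the factored-out
bucket-finding code is for.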

Thanks,
--
Hitoshi Harada

Attachment Content-Type Size
hashoom.patch application/octet-stream 5.2 KB
