Re: hash_search and out of memory

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Hitoshi Harada <umi(dot)tanuki(at)gmail(dot)com>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: hash_search and out of memory
Date: 2012-10-18 15:35:09
Message-ID: 27721.1350574509@sss.pgh.pa.us
Lists: pgsql-hackers

I wrote:
> Hitoshi Harada <umi(dot)tanuki(at)gmail(dot)com> writes:
>> If OOM happens during expand_table() in hash_search_with_hash_value()
>> for RelationCacheInsert,

> What OOM? expand_table is supposed to return without doing anything
> if it can't expand the table. If that's not happening, that's a bug
> in the hash code.

Oh, wait, I take that back --- the palloc-based allocator does throw
errors. I think that when that was designed, we were thinking that
palloc-based hash tables would be thrown away anyway after an error,
but of course that's not true for long-lived tables such as the relcache
hash table.
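
(For reference, that allocator is essentially a one-line wrapper over
MemoryContextAlloc, which elog(ERROR)s on out-of-memory rather than
returning NULL. Roughly -- this is a sketch from memory of dynahash.c,
details may differ:

static void *
DynaHashAlloc(Size size)
{
    /* MemoryContextAlloc throws ERROR on OOM; it never returns NULL */
    return MemoryContextAlloc(CurrentDynaHashCxt, size);
}

so any allocation inside expand_table() can throw an error right
through hash_search, after the new entry is already linked in.)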

I'm not terribly comfortable with trying to use a PG_TRY block to catch
an OOM error - there are too many ways that could break, and this code
path is by definition not very testable. I think moving up the
expand_table action is probably the best bet. Will you submit a patch?
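
To illustrate the ordering I mean -- this is a standalone sketch, not
dynahash code, and all the names in it are made up -- the point is that
any step that can fail happens before the new entry gets linked in:

#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

typedef struct Entry
{
    struct Entry *next;
    char        key[64];
} Entry;

typedef struct
{
    Entry     **buckets;
    size_t      nbuckets;
    size_t      nentries;
} HashTable;

static size_t
hash_key(const char *key, size_t nbuckets)
{
    size_t      h = 5381;

    while (*key)
        h = h * 33 + (unsigned char) *key++;
    return h % nbuckets;
}

/* Doubles the bucket array; returns false instead of throwing on OOM. */
static bool
expand_table(HashTable *ht)
{
    size_t      newsize = ht->nbuckets * 2;
    Entry     **newbuckets = calloc(newsize, sizeof(Entry *));

    if (newbuckets == NULL)
        return false;           /* table left exactly as it was */

    for (size_t i = 0; i < ht->nbuckets; i++)
    {
        Entry      *e = ht->buckets[i];

        while (e != NULL)
        {
            Entry      *next = e->next;
            size_t      b = hash_key(e->key, newsize);

            e->next = newbuckets[b];
            newbuckets[b] = e;
            e = next;
        }
    }
    free(ht->buckets);
    ht->buckets = newbuckets;
    ht->nbuckets = newsize;
    return true;
}

static bool
hash_enter(HashTable *ht, const char *key)
{
    Entry      *e;
    size_t      b;

    /*
     * Expand first: if this fails, nothing has been inserted yet and
     * the table is still consistent.
     */
    if (ht->nentries >= ht->nbuckets && !expand_table(ht))
        return false;

    e = malloc(sizeof(Entry));
    if (e == NULL)
        return false;
    strncpy(e->key, key, sizeof(e->key) - 1);
    e->key[sizeof(e->key) - 1] = '\0';

    b = hash_key(key, ht->nbuckets);
    e->next = ht->buckets[b];
    ht->buckets[b] = e;
    ht->nentries++;
    return true;
}

With that ordering, a failed expansion just means the table stays at
its old size; there is no window in which a half-initialized entry is
reachable from a bucket chain.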

regards, tom lane
