From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Andrew Dunstan <andrew(at)dunslane(dot)net>
Cc: Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>, Greg Stark <stark(at)mit(dot)edu>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Odd out of memory problem.
Date: 2012-03-26 19:06:27
Message-ID: 15592.1332788787@sss.pgh.pa.us
Lists: pgsql-hackers

Andrew Dunstan <andrew(at)dunslane(dot)net> writes:
> On 03/26/2012 01:34 PM, Tom Lane wrote:
>> Hm. The test case is just a straight pg_restore of lots and lots of LOs?
>> What pg_dump version was the dump made with?
> 8.4.8, same as the target. We get the same issue whether we restore
> direct to the database from pg_restore or via a text dump.
I believe I see the issue: when creating/loading LOs, we first do a
lo_create (which in 8.4 makes a "page zero" tuple in pg_largeobject
containing zero bytes of data) and then lo_write, which will do a
heap_update to overwrite that tuple with data. This is at the next
command in the same transaction, so the original tuple has to receive a
combo CID. Net result: we accumulate one new combo CID per large object
loaded in the same transaction. You can reproduce this without any
pg_dump involvement at all, using something like
create table mylos (id oid);
insert into mylos select lo_import('/tmp/junk') from generate_series(1,1000000);
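(A sketch of the practical implication, not from the original message: since combo CIDs are tracked per transaction and released at commit, splitting a bulk load like the one above across several transactions keeps the array from growing without bound on 8.4. Assumes the mylos table and /tmp/junk file from the example above.)

```sql
-- Hypothetical workaround sketch: load large objects in batches, each
-- in its own transaction, so every COMMIT releases that transaction's
-- accumulated combo CIDs.
BEGIN;
INSERT INTO mylos SELECT lo_import('/tmp/junk') FROM generate_series(1, 100000);
COMMIT;

BEGIN;
INSERT INTO mylos SELECT lo_import('/tmp/junk') FROM generate_series(1, 100000);
COMMIT;

-- ...repeat per batch: memory consumed by combo CIDs is now bounded by
-- the batch size rather than by the total number of objects loaded.
```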
The problem is gone in 9.0 and up because now we use a
pg_largeobject_metadata entry instead of a pg_largeobject row to flag
the existence of an empty large object. I don't see any very practical
backend fix for the problem in 8.x.
regards, tom lane