
Re: [HACKERS] How To free resources used by large object Relations?

From: "Maurice Gittens" <mgittens(at)gits(dot)nl>
To: "Vadim B(dot) Mikheev" <vadim(at)sable(dot)krasnoyarsk(dot)su>
Cc: <pgsql-hackers(at)postgreSQL(dot)org>
Subject: Re: [HACKERS] How To free resources used by large object Relations?
Date: 1998-02-22 11:51:55
Message-ID: 002d01bd3f88$462872c0$
Lists: pgsql-hackers
-----Original Message-----
From: Vadim B. Mikheev <vadim(at)sable(dot)krasnoyarsk(dot)su>
To: Maurice Gittens <mgittens(at)gits(dot)nl>
Cc: pgsql-hackers(at)postgreSQL(dot)org <pgsql-hackers(at)postgreSQL(dot)org>
Date: zondag 22 februari 1998 17:47
Subject: Re: [HACKERS] How To free resources used by large object Relations?

>> Somehow I have to free the relation from the cache in the following
>> situations:
>> 1. In a transaction I must free the stuff when the transaction is
>> commited/aborted.
>Backend does it, don't worry.
I don't really understand all of the code, so please bear with me.
Could it be that large objects don't use the right memory context/portals, so
that memory isn't freed automagically?
>> 2. Otherwise it must happen when lo_close is called.
>It seems that you can't remove relation from cache until
>commit/abort, currently: backend uses local cache to unlink
>files of relations created in transaction if abort...
>We could change relcache.c:RelationPurgeLocalRelation()
>to read from pg_class directly...
Is there a way to tell the cache manager to free resources?
The relations concerned are known; how to properly free them is not.
>But how many LO do you create in single xact ?
Only one (in my real application).
>Is memory allocated for cache so big ?
Not really except that the leak accumulates as long as the connection
with the backend is not closed.

I have a simple test program which goes like this:

(this is C-like pseudo code; the loop body is a sketch of what the
description below implies)

    connection = createConnection();

    for (i = 0; i < N; i++)
    {
        oid = lo_creat(connection, mode);
        fd  = lo_open(connection, oid, mode);
        lo_close(connection, fd);
    }

    closeConnection(connection);
This program will leak memory each time it goes through the for loop.
It doesn't matter whether the statements in the for loop are wrapped in a
transaction or not.
When I give each large object its own memory context (so that memory
is freed per large object) it seems to leak memory more slowly, but it
still leaks.
I've tried calling a number of the functions in relcache.c to try to free up
the memory myself, but the backend doesn't survive this (== closed connection).

It looks like there is some assumption about which memory context/portal is
used during transactions, and that large objects don't obey this convention.

Can you make these assumptions explicit? Maybe I can then make
large objects respect these rules.

Now I have the following understanding of these matters:
1. In transactions
All memory should be freed automatically at commit/abort.
How do I tell the system to do it for me?

2. In autocommit mode
All resources used by a large object should be freed at lo_close.
Can I have this delayed and done automatically in CommitTransaction?

3. Atomic functions like lo_create should not leak memory either.
At the moment, however, they do.
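The rule in item 1 is essentially the memory-context pattern: every allocation
is tied to a context, and the whole context is released in one step at
commit/abort. A minimal, self-contained sketch of that idea (illustrative C
only; the names and layout here are invented for the example and are not the
backend's actual mcxt.c code):

    #include <stdlib.h>
    #include <stdio.h>

    /* Each allocation is linked into its context; destroying the context
     * frees everything at once, the way per-transaction memory would be
     * released at commit/abort. */
    typedef struct Chunk { struct Chunk *next; } Chunk;
    typedef struct { Chunk *head; } MemoryCtx;

    void *ctx_alloc(MemoryCtx *ctx, size_t n)
    {
        Chunk *c = malloc(sizeof(Chunk) + n);
        c->next = ctx->head;        /* remember the chunk in the context */
        ctx->head = c;
        return c + 1;               /* usable memory starts past the header */
    }

    void ctx_destroy(MemoryCtx *ctx)
    {
        Chunk *c = ctx->head;
        while (c)                   /* free every allocation in one sweep */
        {
            Chunk *next = c->next;
            free(c);
            c = next;
        }
        ctx->head = NULL;
    }

    int main(void)
    {
        MemoryCtx xact = { NULL };          /* per-transaction context */
        int i;
        for (i = 0; i < 100; i++)
            ctx_alloc(&xact, 64);           /* accumulates, never freed here */
        ctx_destroy(&xact);                 /* "commit/abort": all freed */
        printf("context emptied: %d\n", xact.head == NULL);
        return 0;
    }

If the large object code allocated through such a transaction-lifetime
context instead of a longer-lived one, the accumulation across lo_* calls
would disappear at commit/abort without any explicit per-object cleanup.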

Thanks for any help,

