Re: [HACKERS] Everything leaks; How it mm suppose to work?

From: dg(at)illustra(dot)com (David Gould)
To: lockhart(at)alumni(dot)caltech(dot)edu (Thomas G(dot) Lockhart)
Cc: mgittens(at)gits(dot)nl, dz(at)cs(dot)unitn(dot)it, hackers(at)postgresql(dot)org
Subject: Re: [HACKERS] Everything leaks; How it mm suppose to work?
Date: 1998-04-09 18:34:00
Message-ID: 9804091834.AA04612@hawk.illustra.com
Lists: pgsql-hackers

Thomas G. Lockhart replies to Maurice:
> > >Does it make sense to have a 'row' context which is released just
> > >before starting with a new tuple ? The total number of frees is the
> > >same, but they are distributed over the query and unused memory should
> > >not accumulate.
> > >I have seen backends growing to 40-60MB with queries which scan a
> > >very large number of rows.
> > I think this would be appropriate.
>
> It seems that the CPU overhead on all queries would increase trying to
> deallocate/reuse memory during the query. There are lots of places in
> the backend where memory is palloc'd and then left lying around after
> use; I had assumed it was sort-of-intentional to avoid having extra
> cleanup overhead during a query.

This is exactly right. Destroying a memory context in the current
implementation is a very high overhead operation. Doing it once per row
would be a performance disaster.
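
To make the tradeoff concrete, here is an illustrative sketch (not from the
original mail, and not the backend's actual MemoryContext/AllocSet code): a
toy per-row arena where "reset" is just a pointer rewind and could run once
per tuple, while the expensive create/destroy cycle stays at query
boundaries.

/*
 * Illustrative sketch only: a toy per-row arena.
 * Resetting between tuples is cheap (rewind an offset); destroying and
 * recreating a full context per row would add real overhead to every scan.
 */
#include <stdio.h>
#include <stdlib.h>

typedef struct Arena {
    char  *buf;      /* one fixed block, for simplicity */
    size_t size;     /* total capacity */
    size_t used;     /* current allocation offset */
} Arena;

static Arena *arena_create(size_t size)
{
    Arena *a = malloc(sizeof(Arena));
    a->buf = malloc(size);
    a->size = size;
    a->used = 0;
    return a;
}

/* Cheap: rewind the offset; the block is reused for the next row. */
static void arena_reset(Arena *a)
{
    a->used = 0;
}

/* Expensive by comparison: hand the block back to the OS allocator. */
static void arena_destroy(Arena *a)
{
    free(a->buf);
    free(a);
}

static void *arena_alloc(Arena *a, size_t n)
{
    void *p;
    if (a->used + n > a->size)
        return NULL;            /* toy version: no block chaining */
    p = a->buf + a->used;
    a->used += n;
    return p;
}

int main(void)
{
    Arena *row_ctx = arena_create(64 * 1024);
    long   row;

    /* Pretend scan over many rows: per-row scratch goes into row_ctx. */
    for (row = 0; row < 1000000; row++)
    {
        char *scratch = arena_alloc(row_ctx, 256);
        snprintf(scratch, 256, "row %ld intermediate result", row);

        arena_reset(row_ctx);   /* released before the next tuple */
    }

    arena_destroy(row_ctx);     /* once per query, not once per row */
    printf("done\n");
    return 0;
}

The point of the sketch: a per-row context is only attractive if the
per-tuple release is a reset this cheap; paying the full destroy cost per
row is exactly the disaster described above.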

-dg

David Gould dg(at)illustra(dot)com 510.628.3783 or 510.305.9468
Informix Software (No, really) 300 Lakeside Drive Oakland, CA 94612
- Linux. Not because it is free. Because it is better.
