On Friday, November 18, 2011 10:11:29 PM Tom Lane wrote:
> Andres Freund <andres(at)anarazel(dot)de> writes:
> > In many scenarios memory allocation is one of the top 3 functions showing
> > up in profiles. Looking at hierarchical profiles
> > (-fno-omit-frame-pointer) at least during parsing, planning and executor
> > startup most of that is spent around the list API.
> > Many - especially in the parse-analyze phase - of those allocations can
> > be avoided because the lists are immutable and their size is known
> > upfront.
> The fundamental problem with all of those proposals is that now you have
> some lists in the system that aren't like other lists, and will result
> in dumping core if the wrong sorts of operations are applied to them.
> I don't particularly care for introducing that kind of fragility into
> the system in return for marginal speed gains. I'm not impressed by
> Asserts showing that no such thing happens in the cases you tested;
> the test coverage won't be complete, and even if it is, innocent-looking
> code changes later on could create new problems.
Yes. I dislike that myself (as noted). It seems rather fragile, although I
think at least during parsing we could simply refrain from freeing list cells
at all.
I don't think the gains are marginal though. After covering only a small
number of cases there are cases, neither uncommon nor artificial, gaining more
than 20%.
One prime example of workloads benefiting hugely is something like SELECT *
FROM pg_class WHERE oid = ...;
Essentially all queries which request few rows with a large number of columns
benefit rather measurably.
> Now, if you could do something that *doesn't* restrict what operations
> could be applied to the lists, that would be good.
If every list cell/header grew another field "allocate_only" (which I
currently added for cassert builds only), those could just be skipped when
deleting.
Some places are directly freeing list headers, but that seems to be a bad idea
anyway.
The disadvantage is that those would essentially be there till context reset
unless done via some special api. Also not that nice... On the other hand only
very few callsites free list(-cells) during parsing anyway.
So far I didn't see a measurable increase in memory usage due to the new field
with that approach.
The only way out of that seems to be to add refcounted list cells/headers :(.
I fear that would be rather complex, expensive and failure prone so I don't
like to go there.
> I've wished for a long while that we could allocate the list header and
> the first list cell in a single palloc cycle.
Yea. Although that is only a rather small portion of the problems/allocations.
> This would basically
> require getting list_delete_cell to refrain from pfree'ing a cell that
> got allocated that way, which is easy as long as you have the list
> header at hand, but what happens if the list is later concat'd to
> another? A subsequent delete operation would be referring to the other
> list header and would come to the wrong conclusion.
I don't think any such scheme is safe.
> While thinking about this just now, it occurred to me that maybe the
> issues could be dodged if the cell, not the header, were first in the
> combined palloc block. list_concat is then no longer a problem, as long
> as it doesn't try to throw away the second list's header. But I haven't
> thought long enough to be sure how well that would work.
I don't think that would work without carefully revising list usage all
around... Several places remove nodes from a list and then do list_free() on
the remainder.
For my POC memory allocator I added "intrusive" lists which have the next/prev
elements embedded in the stored element. I wonder if some of the list usage
could be replaced by such a scheme. Obviously, with one embedded list_node a
Node can only be a member of one list at a time...