Re: Some memory allocations in gin fastupdate code are a bit brain dead

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: David Rowley <david(dot)rowley(at)2ndquadrant(dot)com>
Cc: PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Some memory allocations in gin fastupdate code are a bit brain dead
Date: 2018-12-18 15:24:45
Message-ID: 10273.1545146685@sss.pgh.pa.us
Lists: pgsql-hackers

David Rowley <david(dot)rowley(at)2ndquadrant(dot)com> writes:
> I recently stumbled upon the following code in ginfast.c:
> while (collector->ntuples + nentries > collector->lentuples)
> {
>     collector->lentuples *= 2;
>     collector->tuples = (IndexTuple *) repalloc(collector->tuples,
>                                                 sizeof(IndexTuple) * collector->lentuples);
> }

I agree that's pretty stupid, but I wonder whether we should also try
harder in the initial-allocation path. Right now, if the initial
pass through this code sees, say, 3 tuples to insert, it will then work
with 3 * 2^n entries in subsequent enlargements, which is likely to
be pretty inefficient. Should we try to force the initial allocation
to be a power of 2?
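
For illustration only, here is a sketch (not the committed fix) of how the
two points could be combined: the next_power_of_2() helper is hypothetical,
while repalloc, IndexTuple, and the collector fields are the real symbols
quoted above, and the initial palloc path is omitted.

    /*
     * Hypothetical helper: smallest power of 2 >= n.  The caller must
     * ensure n <= 2^31 so the loop terminates.
     */
    static uint32
    next_power_of_2(uint32 n)
    {
        uint32      p = 1;

        while (p < n)
            p *= 2;
        return p;
    }

    ...

    if (collector->ntuples + nentries > collector->lentuples)
    {
        /*
         * Size the array to a power of 2 and repalloc just once, instead
         * of repalloc'ing on every doubling step of the loop.
         */
        collector->lentuples = next_power_of_2(collector->ntuples + nentries);
        collector->tuples = (IndexTuple *)
            repalloc(collector->tuples,
                     sizeof(IndexTuple) * collector->lentuples);
    }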

Also, do we need to worry about integer overflow?
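
Purely as a sketch of the kind of guard that question implies (assuming the
MaxAllocSize cap from utils/memutils.h is the relevant ceiling, and doing the
arithmetic in 64 bits first):

    /*
     * Hypothetical overflow guard, for illustration only: compute the
     * required element count in 64 bits and reject anything whose byte
     * size would exceed MaxAllocSize before multiplying for repalloc.
     */
    uint64      needed = (uint64) collector->ntuples + nentries;

    if (needed > MaxAllocSize / sizeof(IndexTuple))
        elog(ERROR, "too many pending-list tuples to collect");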

regards, tom lane
