Re: Composite Datums containing toasted fields are a bad idea(?)

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Noah Misch <noah(at)leadboat(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Composite Datums containing toasted fields are a bad idea(?)
Date: 2014-04-20 19:38:23
Message-ID: 6110.1398022703@sss.pgh.pa.us
Lists: pgsql-hackers

I wrote:
> The main problem with this patch, as I see it, is that it'll introduce
> extra syscache lookup overhead even when there are no toasted fields
> anywhere. I've not really tried to quantify how much, since that would
> require first agreeing on a benchmark case --- anybody have a thought
> about what that ought to be? But in at least some of these functions,
> we'll be paying a cache lookup or two per array element, which doesn't
> sound promising.

I created a couple of test cases that I think are close to worst-case for
accumArrayResult() and array_set(), which are the two functions that seem
most worth worrying about. The accumArrayResult test case is

create table manycomplex as
select row(random(),random())::complex as c
from generate_series(1,1000000);
vacuum analyze manycomplex;
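
(Both test cases assume a two-field composite type named "complex".  Its
exact definition isn't shown in this message, but something like

create type complex as (r float8, i float8);

suffices to reproduce the setup.)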

then time:

select array_dims(array_agg(c)) from manycomplex;

On HEAD, this takes about 295-300 msec on my machine in a non-cassert
build. With the patch I sent previously, the time goes to 495-500 msec.
Ouch.

The other test case is

create or replace function timearrayset(n int) returns void language plpgsql as
$$
declare
  a complex[];
begin
  a := array[row(1,2), row(3,4), row(5,6)];
  for i in 1..$1 loop
    a[2] := row(5,6)::complex;
  end loop;
end;
$$;

select timearrayset(1000000);

This goes from circa 1250 ms in HEAD to around 1680 ms.

Some poking around with oprofile says that much of the added time is
coming from added syscache and typcache lookups, as I suspected.

I did some further work on the patch to make it possible to cache
the element tuple descriptor across calls for these two functions.
With the attached patch, the first test case comes down to about 335 ms
and the second to about 1475 ms (plpgsql is still leaving some extra
cache lookups on the table). More could be done with some further API
changes, but I think this is about as much as is safe to back-patch.
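
For anyone not reading the diff: "cache the element tuple descriptor
across calls" is, in spirit, the usual fn_extra trick.  A rough sketch of
that general pattern --- the names below are illustrative, not taken from
the attached patch --- looks like this:

/* assumes postgres.h, fmgr.h, utils/typcache.h */
typedef struct ElementTupdescCache
{
    Oid         element_type;  /* composite type whose tupdesc is cached */
    TupleDesc   tupdesc;       /* long-lived copy, allocated in fn_mcxt */
} ElementTupdescCache;

static TupleDesc
get_element_tupdesc(FunctionCallInfo fcinfo, Oid element_type, int32 typmod)
{
    ElementTupdescCache *cache = fcinfo->flinfo->fn_extra;

    if (cache == NULL || cache->element_type != element_type)
    {
        MemoryContext oldcxt = MemoryContextSwitchTo(fcinfo->flinfo->fn_mcxt);

        cache = palloc(sizeof(ElementTupdescCache));
        cache->element_type = element_type;
        /* the _copy variant returns a descriptor we own outright,
         * so there is no reference count to release on later calls */
        cache->tupdesc = lookup_rowtype_tupdesc_copy(element_type, typmod);
        fcinfo->flinfo->fn_extra = cache;
        MemoryContextSwitchTo(oldcxt);
    }
    return cache->tupdesc;
}

Whether the patch plumbs the descriptor through its call sites exactly
this way is best checked in the diff; the point is only that the
per-element lookup_rowtype_tupdesc() calls disappear from the hot path.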

At least in the accumArrayResult test case, it would be possible to
bring the runtime back to very close to HEAD if we were willing to
abandon the principle that compressed fields within a tuple should
always get decompressed when the tuple goes into a larger structure.
That would allow flatten_composite_element to quit early if the
tuple doesn't have the HEAP_HASEXTERNAL infomask bit set. I'm not
sure that this would be a good tradeoff however: if we fail to remove nested
compression, we could end up paying for that with double compression.
On the other hand, it's unclear that that case would come up often,
while the overhead of disassembling the tuple is definitely going to
happen every time we assign to an array-of-composites element. (Also,
there is no question here of regression relative to previous releases,
since the existing code fails to prevent nested compression.)
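
The early exit being contemplated would amount to nothing more than an
infomask test before any per-field work, along these lines (illustrative
only, not the attached patch's code):

/* assumes postgres.h and access/htup_details.h */
static bool
composite_has_external_fields(HeapTupleHeader tuphdr)
{
    /*
     * HEAP_HASEXTERNAL is set only when some attribute is stored out of
     * line; a merely-compressed inline field does not set it, which is
     * exactly why bailing out here gives up the decompress-on-flatten
     * guarantee.
     */
    return (tuphdr->t_infomask & HEAP_HASEXTERNAL) != 0;
}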

Comments?

regards, tom lane

Attachment: detoast-composite-array-elements-2.patch (text/x-diff, 44.5 KB)
