> What would be really useful would be "best case" and "worst case"
I've put together some data from a microbenchmark of the bcTruelen
function, patched and unpatched.
As for the best case: with a long run of trailing spaces, we can go
through them at theoretically one quarter of the cost (a test benchmark
on x86 shows an actual reduction from 11 to 3 seconds with a string of
100 trailing spaces).
Worst-case behaviour occurs with smaller numbers of trailing spaces.
Here are the transition points (i.e., where the word-wise comparison
becomes faster than byte-wise) that I see in my benchmark:
- where 'align' is the alignment of the first byte to compare (i.e., at
the end of the string). This is pretty much as expected, since these
transition points are the first opportunity the new function has to do
a word compare.
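The approach above can be sketched roughly as follows. This is an
illustrative reimplementation, not the actual patch: the function name
`truelen_wordwise`, the memcpy-based word loads, and the space-filled
word constant are all my assumptions.

```c
#include <stdint.h>
#include <string.h>

/*
 * Illustrative sketch (not the actual PostgreSQL patch): trim trailing
 * spaces a word at a time once the scan reaches a word-aligned boundary.
 * Returns the length of the string with trailing spaces removed.
 */
static int
truelen_wordwise(const char *s, int len)
{
    /* One machine word filled with space bytes (0x20); the constant
     * truncates correctly on 32-bit platforms. */
    const uintptr_t spaces = (uintptr_t) 0x2020202020202020ULL;
    int i = len;

    /* Byte-wise until the tail is word-aligned (or a non-space is hit). */
    while (i > 0 && ((uintptr_t) (s + i) % sizeof(uintptr_t)) != 0)
    {
        if (s[i - 1] != ' ')
            return i;
        i--;
    }

    /* Word-wise over whole words that are entirely spaces. */
    while (i >= (int) sizeof(uintptr_t))
    {
        uintptr_t w;

        memcpy(&w, s + i - sizeof(uintptr_t), sizeof(w));
        if (w != spaces)
            break;
        i -= (int) sizeof(uintptr_t);
    }

    /* Byte-wise for whatever remains of the last partial word. */
    while (i > 0 && s[i - 1] == ' ')
        i--;

    return i;
}
```

The leading byte-wise loop is why the transition points track 'align':
until the scan reaches a word boundary, the new code pays the same
byte-wise cost as the old code, plus the extra bookkeeping.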
In the worst cases, I see a 53% cost increase on x86 (with the string
'aaa ') and a 97% increase on PowerPC ('a ').
So, it all depends on the number of padding spaces we'd expect to see
in workload data. Fortunately, we see the larger reductions on the more
expensive operations (i.e., longer strings).
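One rough way to see this: the byte-wise baseline's cost is proportional
to the padding length, so the longer, more expensive strings are exactly
where a word-wise scan has the most room to win. The harness below is a
hypothetical sketch, not the benchmark used above; `trim_bytewise` and
`time_trim` are names I've made up.

```c
#include <string.h>
#include <time.h>

/* Byte-wise baseline, mirroring the unpatched trailing-space loop. */
static int
trim_bytewise(const char *s, int len)
{
    while (len > 0 && s[len - 1] == ' ')
        len--;
    return len;
}

/*
 * Hypothetical micro-benchmark helper: time 'iters' trims of a short
 * prefix followed by 'pad' trailing spaces (pad must be <= 512).
 */
static double
time_trim(int pad, int iters)
{
    char buf[3 + 512];
    volatile int sink = 0;
    clock_t t0;
    int i;

    memset(buf, ' ', sizeof(buf));
    memcpy(buf, "aaa", 3);

    t0 = clock();
    for (i = 0; i < iters; i++)
        sink += trim_bytewise(buf, 3 + pad);
    (void) sink;

    return (double) (clock() - t0) / CLOCKS_PER_SEC;
}
```

Comparing, say, `time_trim(3, N)` against `time_trim(100, N)` shows the
per-call cost growing with the padding, which is the regime where the
word-wise version pays off.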