From: Fabien COELHO <coelho(at)cri(dot)ensmp(dot)fr>
To: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>
Cc: PostgreSQL Developers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: extend pgbench expressions with functions
> My description must have been obscure. Indeed the call tree is
> finite for *sane* expression nodes, but it makes an infinite call for
> a value of expr->etype unknown by both evalDouble and
Such an issue would be detected if the function is actually tested, and
hopefully that is the case... :-)
However, I agree that relying implicitly on the "default" case is not good
practice, so I updated the code in the attached v11 to fail explicitly on
such errors.
I also attached a small test script, which exercises most (all?) of them:

  ./pgbench -f functions.sql -t 1
> By the way, the complexity comes from separating integer and
> double. If there is no serious reason to separate them, handling
> all values as double makes things far simpler.
Yep, but no.
> Could you let me know the reason why it strictly separates integer and
> double? I don't see any problem in possible errors of floating point
> calculations for this purpose. Is there any?
Indeed it would make things simpler, but it would break large integers as
the int64 -> double -> int64 casts would result in approximations. The
integer type is the important one here because it is used for primary
keys, and you do not want a key to be approximated in any way, so the
int64 type must be fully and exactly supported.