Re: Optimizing numeric SUM() aggregate

From: Andrew Borodin <borodin(at)octonica(dot)com>
To: Andrew Borodin <amborodin(at)acm(dot)org>
Cc: Heikki Linnakangas <hlinnaka(at)iki(dot)fi>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Optimizing numeric SUM() aggregate
Date: 2016-07-27 06:33:03
Message-ID: CAJEAwVE-sDR_ZuQ8uKY2HW8L3TSsuEoO7qoKwcjyeR50BXfvEw@mail.gmail.com
Lists: pgsql-hackers

>I think we could do carry every 0x7FFFFFF / 10000 accumulation, couldn't we?

I feel that I have to elaborate a bit; perhaps my calculations are wrong.

Let's assume we have already accumulated INT_MAX worth of 9999-digits in
the previous-place accumulator. That is almost an overflow, but not quite.
Carrying that accumulator into the current one gives us a carried sum of
INT_MAX / 10000.
So in the current-place accumulator we can safely accumulate
( INT_MAX - INT_MAX / 10000 ) / 9999 additions, where 9999 is the maximum
value dropped into the current-place accumulator on each addition.
That is INT_MAX * 9999 / 99990000, or simply INT_MAX / 10000.

With an unsigned 32-bit integer that is 429496, which makes carrying about
43 times less frequent than carrying every 9999 additions.

As a bonus, we get rid of the 9999 constant in the code (:

Please correct me if I'm wrong.

Best regards, Andrey Borodin, Octonica & Ural Federal University.
