Re: SPI bug.

From: Thomas Hallgren <thhal(at)mailblocks(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Neil Conway <neilc(at)samurai(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: SPI bug.
Date: 2005-05-02 15:37:04
Message-ID: thhal-0o+FRAwk/yicy9vp4OGTMr5iXa1neg6@mailblocks.com
Lists: pgsql-hackers

Tom Lane wrote:

>Thomas Hallgren <thhal(at)mailblocks(dot)com> writes:
>
>
>>Exactly. Why should a user of the SPI API be exposed to or even
>>concerned with this at all? As an application programmer you couldn't
>>care less. You want your app to perform equally well on all platforms
>>without surprises. IMHO, PostgreSQL should make a decision whether the
>>SPI functions support 32-bit or the 64-bit sizes for result sets and the
>>API should reflect that choice. Having the maximum number of rows
>>dependent on platform ports is a bad design.
>>
>>
>
>The fact that 64-bit platforms can tackle bigger problems than 32-bit
>ones is not a bug to be worked around, and so I don't see any problem
>with the use of "long" for tuple counts.
>
I'm not concerned with the use of 32 or 64 bits; I would be equally
happy with either. What I am concerned about is that the problem that
started this "SPI bug" thread was caused by differences in how platforms
handle the int and long types. Instead of rectifying that problem once
and for all, the type was simply changed to a long.
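
To make the kind of breakage I mean concrete, here is a small,
hypothetical C sketch (not the actual SPI code): a caller that keeps
its row count in an int behaves differently depending on whether long
is 32 or 64 bits on the platform it happens to be built for.

/*
 * Hypothetical illustration only. Where long and int are the same size
 * (32-bit platforms, or 64-bit Windows/LLP64) the cast below is
 * harmless; on LP64 platforms it silently truncates the count.
 */
#include <stdio.h>
#include <limits.h>

static long fetch_row_count(void)
{
    return LONG_MAX;            /* largest count the callee can report */
}

int main(void)
{
    long count   = fetch_row_count();        /* always holds the full value */
    int  count32 = (int) fetch_row_count();  /* may truncate, depending on platform */

    printf("as long: %ld, as int: %d\n", count, count32);
    return 0;
}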

> Furthermore, we have never
>promised ABI-level compatibility across versions inside the backend,
>and we are quite unlikely to make such a promise in the foreseeable
>future.
>
I know that no promises have been made, but PostgreSQL is improved every
day, and this would be a very easy promise to make.

> (Most of the time you are lucky if you get source-level
>compatibility ;-).) So I can't get excited about avoiding platform
>dependency in this particular tiny aspect of the API.
>
>
Maybe I've misunderstood the objectives behind the SPI layer altogether,
but since it's well documented and seems to be the "public interface" of
the backend that extensions are supposed to use, I think it would be an
excellent idea to make that interface as stable and platform-independent
as possible. I can't really see any disadvantages.

The use of int, long, and long long is often a source of bugs (as it was
here), and many recommend avoiding them where possible. The int is meant
to be a datatype whose size equals the natural word size of the
processor. The long is defined as 'at least as big as int', and 'long
long' as 'at least as big as long' (and at least 64 bits). I wonder what
that makes 'long long' on a platform where int is 64 bits. 128 bits?
The interpretation also varies between compiler vendors: on 64-bit
Windows (Itanium included), a long is 32 bits, while on 64-bit Unix it's
64. It's a mess...

The 1999 revision of C (C99) declares the following types in <stdint.h>
for a good reason:

int8_t,  int16_t,  int32_t,  int64_t,
uint8_t, uint16_t, uint32_t, uint64_t.

Why not use them unless you have very specific requirements? And why not
*always* use them in a public interface like the SPI?
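
To make that concrete, here is a rough sketch of what I have in mind.
SPI_execute_fixed is of course an invented name, and I'm quoting the
existing SPI_execute signature from memory, so treat both declarations
as an illustration rather than a patch:

/* Includes only so the sketch is self-contained; the backend has its
 * own bool typedef in c.h. */
#include <stdint.h>
#include <stdbool.h>

/* Today's declaration: the count is a long, so its range is platform-dependent. */
extern int SPI_execute(const char *src, bool read_only, long tcount);

/* A platform-independent variant: the count is 64 bits everywhere. */
extern int SPI_execute_fixed(const char *src, bool read_only, int64_t tcount);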

Regards,
Thomas Hallgren
