> Hi all,
> the exact NUMERIC datatype materializes more and more. What I
> got so far are the four arithmetic base operators, all six
> comparison operators, and these functions:
> The trigonometric ones I left out for now, but since SQRT(),
> EXP() and LN() work, it wouldn't be that hard to add them
> later (these produce the same results as bc(1) does -
> briefly tested up to 400 digits after the decimal point).
> The speed of the complex functions is IMHO acceptable. For
> example, EXP() is 25% faster than bc(1) on small numbers but
> needs up to 3 times as long on very big ones (999). Who ever
> needs EXP(999) or more? The result is about 7.2e433! SQRT() is a
> bit slow - so be it. Postgres shouldn't become a substitute
> for arbitrary precision calculators.
> So I think it's time now to move the stuff into the backend.
> Therefore I first need a bunch of OIDs (about 30 C functions,
> 20 SQL functions and 10 operators for now). Should I fill up
> all the holes?
No, I don't recommend it. I recommend getting a contiguous range of
oids. Oids are confusing enough without trying to collect them
scattered all over a range of values.
The problem is that we don't have a free range of 30 left anymore, and
the needs of future development are surely going to eat up the rest.
One idea is to remove some of the rarely used conversion functions we
have defined; we don't NEED them with Thomas's conversion stuff.
Thomas says the native conversions are faster, but if we find some
that are almost never used, we could rip them out and reuse their oids.
However, my recommendation is that we have to start thinking about
increasing the maximum allowable system oid.
The remaining free ranges are:

    1 - 10
    100 - 101
    1288 - 1295
    1597 - 1599
    1608 - 1610
    1619 - 1639
The max system oid is stored in transam.h as:

    /*
     * note: we reserve the first 16384 object ids for internal use.
     * oid's less than this appear in the .bki files. the choice of
     * 16384 is completely arbitrary.
     */
    #define BootstrapObjectIdData 16384

This is 2^14.
We can increase this to 32k without any noticeable problem, as far as I
know. That will give us a nice range of available oids, and allow
renumbering if people want to clean up some of the current oid mess.
The only problem is that loading a pg_dump -o is probably going to
cause rows with oids in the 16k-32k range to duplicate those in the
system tables. Is that a problem? I am not sure. As far as I know,
there is no reason oids have to be unique, especially if they are in
different tables. contrib/findoidjoins will get messed up by this, but
I am not sure if that is a serious problem.
I can't figure out another way around it. We could expand by going very
high, near 2 billion, assuming no one is up there yet, but the code will
get very messy doing that, and I don't recommend it.
I think going to 2^15 is going to become necessary someday. The
question is, do we do it for 6.5, and if so, how will the duplicate oids
affect our users?
A new system would start assigning rows oids > 2^15, so the only
non-system oids in the 2^14-2^15 range would be those installed via
pg_dump -o or COPY WITH OIDS FROM an old installation.
pg_dump uses the max system oid to determine whether a function is a
user function, but because it gets the max oid from the template1
table oid, this should be portable across the two oid systems.
Bruce Momjian | http://www.op.net/~candle
maillist(at)candle(dot)pha(dot)pa(dot)us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026