Re: [BUGS] BUG #1523: precision column value returned from getTypeInfo()

From: Sergio Lob <Sergio_Lob(at)iwaysoftware(dot)com>
To: oliver(at)opencloud(dot)com
Cc: pgsql-jdbc(at)postgresql(dot)org, Sergio Lob <sergio_lob(at)iwaysoftware(dot)com>, Ephraim Spravtsev <Ephraim_Spravtsev(at)iwaysoftware(dot)com>
Subject: Re: [BUGS] BUG #1523: precision column value returned from getTypeInfo()
Date: 2005-03-07 15:42:49
Message-ID: 422C7679.7050906@iwaysoftware.com
Lists: pgsql-bugs pgsql-jdbc

You made some good points about what to return for precision when we are
describing data types rather than a specific column's attributes. Let me
describe what ODBC defines. There is an ODBC API call, SQLGetTypeInfo(),
that is roughly equivalent to the JDBC method getTypeInfo(). This function
returns an answer set of information about the data types supported by the
data source. One of the columns returned is "precision". The definition of
the precision column is as follows: "The maximum precision of the data type
on the data source. NULL is returned for the data types where precision is
not applicable".

My take, then, is that for something like the CHAR data type, if the
maximum supported precision is 32767, then that is the value that should be
returned. The same goes for data types like VARCHAR and DATE/DATETIME...

Here are the answers to your questions:

Q1: Is this from the ODBC spec or something specific to Microsoft? (I'm
not familiar with ODBC at all).

Answer: The ODBC standard itself was developed by Microsoft Corp., so there
is nothing specific to Microsoft here; they wrote the standard.

Q2: We can't do this in getTypeInfo() as it's describing all VARCHARs,
not a specific one. What should we return in this case?

Answer: I can interpret this question in two ways. You are either talking
about multiple native data types that map to the JDBC VARCHAR type, or you
are talking about two columns defined as varchar with different length
definitions. For interpretation #1, the answer set should have one row for
each supported native data type; if more than one native data type maps to
the JDBC VARCHAR type, each should have its own row with its own maximum
precision value. For interpretation #2, getTypeInfo() describes data type
information, not a specific column, so this case does not apply.

Q3: This is going to be about 1GB if I read it correctly (but that
varies depending on what data you put in there). Is that really a useful
value to return?

Answer: In this case, I think NULL would be appropriate.

Q4: Again, we can't do this in getTypeInfo() as we're describing all NUMERICs, not a particular one.

Answer: see Q2 answer.

Q5: Gah, those look pretty hairy, especially since JDBC has accessors
specifically for date/time/timestamp -- you're not really meant to deal
with them as text..

Answer: The precision information for TIME/DATETIME data types is precisely
what I need for my application to work properly. The key to calculating the
maximum precision is how many fractional digits are supported. For
instance, if the timestamp data type supports fractions down to
microseconds, the maximum precision is 26; if it does not support
fractional seconds at all, the maximum precision is 19. The precision value
includes separators in the character count (e.g. "yyyy-mm-dd hh:mm:ss" has
precision 19). Not hard to figure out...

Q6: I'll take a look at the length issue too. What was the actual type
of the column that was returning bad values?

DatabaseMetaData.getColumns() returns an answer set with most (if not all)
of the VARCHAR columns described as having a length of -4. For instance,
the first four columns of the answer set (columns "table_cat",
"table_schem", "table_name", and "column_name") report a column length of
-4. In other words, the output says that the actual name of the table (in
column 3) has a length of -4 bytes, so my application will attempt to copy
-4 bytes from the buffer containing the actual table name instead of
copying the correct length.
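
To show concretely where the -4 surfaces, here is a minimal sketch
(connection details are placeholders, and I am assuming the length in
question is the one reported by ResultSetMetaData.getColumnDisplaySize()
for the getColumns() answer set):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;

public class MetadataColumnLengths {
    public static void main(String[] args) throws SQLException {
        // Connection details are placeholders.
        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "password");
             ResultSet rs = con.getMetaData()
                     .getColumns(null, null, "%", "%")) {
            // Describe the answer set itself; column 3 is TABLE_NAME.
            ResultSetMetaData rsmd = rs.getMetaData();
            for (int i = 1; i <= rsmd.getColumnCount(); i++) {
                // A negative display size (e.g. -4) breaks applications
                // that allocate or copy that many bytes for the value.
                System.out.println(rsmd.getColumnName(i) + " length = "
                        + rsmd.getColumnDisplaySize(i));
            }
        }
    }
}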

Q7: What else have you had problems with? In general the metadata code
isn't heavily used, and is poorly specified in JDBC anyway, so it hasn't
had as much work done on it as the rest of the driver. The only way it's
going to get better is if those people actually using the metadata point
out the problems in detail :)

I have not yet found more metadata issues that specifically affect my
generic JDBC application. I had another metadata-related issue, but it was
fixed in build 310 (support for the PreparedStatement.getMetaData() method).
We do extensive QA testing of our JDBC interface (against many different
DBMSs), and those tests have identified several problems with Postgres (not
necessarily metadata-related) which I am currently investigating...

Regards, Sergio Lob

>Sergio Lob wrote:
>
>
>
>> The Microsoft ODBC 2.0 SDK guide and reference (Appendix D) contains
>>a pretty thorough definition of what precision means in relation to
>>various data types.
>>
>>
>
>Is this from the ODBC spec or something specific to Microsoft? (I'm not
>familiar with ODBC at all).
>
>
>
>>SQL_CHAR            The defined length of the column or parameter.
>>SQL_VARCHAR         For example, the precision of a column defined
>>                    as CHAR(10) is 10
>>
>>
>
>We can't do this in getTypeInfo() as it's describing all VARCHARs, not a
>specific one. What should we return in this case?
>
>
>
>>SQL_LONGVARCHAR     The maximum length of the column or parameter
>>
>>
>
>This is going to be about 1GB if I read it correctly (but that varies
>depending on what data you put in there). Is that really a useful value
>to return?
>
>
>
>>SQL_DECIMAL         The defined number of digits. For example, the
>>SQL_NUMERIC         precision of a column defined as NUMERIC(10,3)
>>                    is 10
>>
>>
>
>Again, we can't do this in getTypeInfo() as we're describing all
>NUMERICs, not a particular one.
>
>
>
>>SQL_BINARY          The defined length of the column or parameter.
>>SQL_VARBINARY       For example, the precision of a column defined
>>                    as BINARY(10) is 10
>>
>>SQL_LONGVARBINARY   The maximum length of the column or parameter
>>
>>
>
>Same as for varchar/longvarchar/etc above.
>
>
>
>>SQL_DATE            10 (the number of characters in yyyy-mm-dd format)
>>
>>SQL_TIME            8 (the number of characters in hh:mm:ss format)
>>
>>SQL_TIMESTAMP       The number of characters in the
>>                    "yyyy-mm-dd hh:mm:ss[.f...]" format used by the [...]
>>
>>
>
>Gah, those look pretty hairy, especially since JDBC has accessors
>specifically for date/time/timestamp -- you're not really meant to deal
>with them as text..
>
>For the other types (int/long/etc) I will take a look at returning a
>better precision value. We already do this in ResultSetMetadata, as Kris
>pointed out, so it shouldn't be too painful.
>
>
>
>>I have found problems with the answer sets of other JDBC methods that
>>return metadata information as well. For instance, the length values of
>>certain columns in the DatabaseMetaData.getColumns() method answer set
>>are reported as negative numbers (-4 ?).
>>Since this data is crucial to applications that I have written, I have
>>unfortunately had to take PostgreSQL off the list of JDBC data sources
>>that I can support through my application.
>>
>>
>
>I'll take a look at the length issue too. What was the actual type of
>the column that was returning bad values?
>
>What else have you had problems with? In general the metadata code isn't
>heavily used, and is poorly specified in JDBC anyway, so it hasn't had
>as much work done on it as the rest of the driver. The only way it's
>going to get better is if those people actually using the metadata point
>out the problems in detail :)
>
>-O
>
>
