
Re: Patch to add bytea support to JDBC

From: Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>
To: Barry Lind <barry(at)xythos(dot)com>
Cc: pgsql-patches(at)postgresql(dot)org, pgsql-jdbc(at)postgresql(dot)org
Subject: Re: Patch to add bytea support to JDBC
Date: 2001-09-10 14:23:51
Message-ID: 200109101423.f8AENqx16948@candle.pha.pa.us
Lists: pgsql-jdbc, pgsql-patches

Your patch has been added to the PostgreSQL unapplied patches list at:

	http://candle.pha.pa.us/cgi-bin/pgpatches

I will try to apply it within the next 48 hours.

> Attached is a patch to add bytea support to JDBC.
> 
> 
> This patch does the following:
> 
> - Adds binary datatype support (bytea)
> - Changes getXXXStream()/setXXXStream() methods to be spec compliant
> - Adds ability to revert to old behavior
> 
> Details:
> 
> Adds support for the binary type bytea.  The ResultSet.getBytes() and 
> PreparedStatement.setBytes() methods now work against columns of bytea 
> type.  This is a change in behavior from the previous code, which assumed 
> the column type was OID and thus a LargeObject.  The new behavior is 
> more compliant with the JDBC spec, as BLOB/CLOB are to be used for 
> LargeObjects and the getBytes()/setBytes() methods are for the database's 
> binary datatype (which is bytea in postgres).
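[Editor's note] For illustration, the conversion that a helper like PGbytea.toPGString would plausibly perform is sketched below: encoding raw bytes into PostgreSQL's textual bytea "escape" format so they can be sent via setString(). The class name and exact rules here are an assumption for illustration, not the driver's actual code.

```java
// Hypothetical sketch of encoding a byte[] into bytea escape-format text:
// printable ASCII passes through, backslash is doubled, and all other
// bytes become a three-digit octal escape (\ooo).
class ByteaEscapeSketch {
    static String toPGString(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) {
            int v = b & 0xFF;
            if (v == '\\') {
                sb.append("\\\\");                 // backslash must be doubled
            } else if (v < 32 || v > 126) {
                String octal = Integer.toOctalString(v);
                sb.append('\\');
                for (int i = octal.length(); i < 3; i++)
                    sb.append('0');                // pad to three octal digits
                sb.append(octal);
            } else {
                sb.append((char) v);               // printable ASCII as-is
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(toPGString(new byte[]{0, 92, 65})); // \000\\A
    }
}
```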
> 
> Changes the behavior of the getBinaryStream(), getAsciiStream(), 
> getCharacterStream(), getUnicodeStream() and their setXXXStream() 
> counterparts.  These methods now work against either the bytea type 
> (BinaryStream) or the text types (AsciiStream, CharacterStream, 
> UnicodeStream).  The previous behavior was that these all assumed the 
> underlying column was of type OID and thus a LargeObject.  The 
> spec/javadoc for these methods indicates that they are for the LONGVARCHAR 
> and LONGVARBINARY datatypes, which are distinct from the BLOB/CLOB 
> datatypes.  Given that the bytea and text types support up to 1GB, they 
> are the LONGVARBINARY and LONGVARCHAR datatypes in postgres.
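[Editor's note] Because the protocol offers no way to stream a value to the server, the patch's setXXXStream() methods drain the caller's stream into a buffer and fall back to setBytes()/setString(). A minimal sketch of that buffering step, with hypothetical names, is below; note that a robust version must loop, since InputStream.read() may return fewer bytes than requested even before end-of-stream, and must truncate if the stream held less data than the declared length.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;

// Hypothetical sketch: read up to `length` bytes from a stream into a
// byte[], truncating the result if the stream ends early.
class StreamToBytesSketch {
    static byte[] readUpTo(InputStream in, int length) throws IOException {
        byte[] buf = new byte[length];
        int total = 0;
        // Loop rather than issuing a single read(): read() may deliver
        // fewer bytes than asked for even when more data is coming.
        while (total < length) {
            int n = in.read(buf, total, length - total);
            if (n == -1)
                break;                      // end of stream
            total += n;
        }
        // The stream contained less data than declared: shrink the buffer.
        return (total == length) ? buf : Arrays.copyOf(buf, total);
    }

    public static void main(String[] args) throws IOException {
        byte[] r = readUpTo(new ByteArrayInputStream(new byte[]{1, 2, 3}), 5);
        System.out.println(r.length); // 3
    }
}
```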
> 
> Added support for turning off the above new functionality.  Given that 
> the changes above are not backward compatible (though they are more 
> spec compliant), I added the ability to revert to the old behavior. 
> The Connection now takes an optional parameter named 'compatible'.  If 
> the value '7.1' is passed, the driver reverts to the 7.1 behavior. 
> If the parameter is not passed, or the value '7.2' is passed, the 
> new behavior is used.  The mechanism put in place can be used in the 
> future when/if similar needs arise to change behavior.  This is 
> patterned after how Oracle does this (i.e. Oracle has a 'compatible' 
> parameter that behaves in a similar manner).
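[Editor's note] The compatibility check in the patch reduces to a lexicographic comparison of version strings (compatible.compareTo(ver) >= 0), which is sketched below with hypothetical names. Note one caveat of this scheme: plain String comparison is only correct while both components stay single-digit ("7.10" would sort before "7.2").

```java
// Hypothetical sketch of the driver's compatibility toggle: the
// "compatible" setting (from the connection properties, defaulting to
// the driver's own version) is compared lexicographically against the
// version that introduced a behavior change.
class CompatSketch {
    private final String compatible;

    CompatSketch(String compatibleSetting) {
        this.compatible = compatibleSetting;
    }

    // True if the configured level is at least `ver`, i.e. the newer
    // behavior introduced in release `ver` should be active.
    boolean haveMinimumCompatibleVersion(String ver) {
        return compatible.compareTo(ver) >= 0;
    }

    public static void main(String[] args) {
        CompatSketch legacy = new CompatSketch("7.1"); // user asked for 7.1 behavior
        System.out.println(legacy.haveMinimumCompatibleVersion("7.2")); // false
    }
}
```

In use, a caller wanting the old behavior would pass compatible=7.1 in the connection properties (or, assuming the driver's usual URL parameter syntax, something like jdbc:postgresql://host/db?compatible=7.1), and the bytea code paths above would then fall back to the LargeObject implementation.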
> 
> Misc fixes.  Cleaned up a few things I encountered along the way.
> 
> 
> Note that in testing the patch I needed to ignore whitespace differences 
> in order to get it to apply cleanly (i.e. patch -l -i byteapatch.diff).
> 
> Also this patch introduces a new file 
> (src/interfaces/jdbc/org/postgresql/util/PGbytea.java).
> 
> 
> thanks,
> --Barry
> 
> PS.  In case this patch gets held up in the mailing list approval queue 
> because it is greater than 40K in size, I have CCed Bruce so that he 
> knows this is coming before he does the beta builds.

> *** ./src/interfaces/jdbc/org/postgresql/Connection.java.orig	Sat Sep  8 23:26:04 2001
> --- ./src/interfaces/jdbc/org/postgresql/Connection.java	Sat Sep  8 23:21:48 2001
> ***************
> *** 22,36 ****
>     // This is the network stream associated with this connection
>     public PG_Stream pg_stream;
>   
> -   // This is set by org.postgresql.Statement.setMaxRows()
> -   //public int maxrows = 0;		// maximum no. of rows; 0 = unlimited
> - 
>     private String PG_HOST;
>     private int PG_PORT;
>     private String PG_USER;
>     private String PG_PASSWORD;
>     private String PG_DATABASE;
>     private boolean PG_STATUS;
>   
>     /**
>      *  The encoding to use for this connection.
> --- 22,34 ----
>     // This is the network stream associated with this connection
>     public PG_Stream pg_stream;
>   
>     private String PG_HOST;
>     private int PG_PORT;
>     private String PG_USER;
>     private String PG_PASSWORD;
>     private String PG_DATABASE;
>     private boolean PG_STATUS;
> +   private String compatible;
>   
>     /**
>      *  The encoding to use for this connection.
> ***************
> *** 123,128 ****
> --- 121,131 ----
>       PG_PORT = port;
>       PG_HOST = host;
>       PG_STATUS = CONNECTION_BAD;
> +     if(info.getProperty("compatible")==null) {
> +       compatible = d.getMajorVersion() + "." + d.getMinorVersion();
> +     } else {
> +       compatible = info.getProperty("compatible");
> +     }
>   
>       // Now make the initial connection
>       try
> ***************
> *** 966,971 ****
> --- 969,991 ----
>     public boolean haveMinimumServerVersion(String ver) throws SQLException
>     {
>         return (getDBVersionNumber().compareTo(ver) >= 0);
> +   }
> + 
> +   /**
> +    * This method returns true if the compatible level set in the connection
> +    * (which can be passed into the connection or specified in the URL)
> +    * is at least the value passed to this method.  This is used to toggle
> +    * between different functionality as it changes across different releases
> +    * of the jdbc driver code.  The values here are versions of the jdbc client
> +    * and not server versions.  For example in 7.1 get/setBytes worked on
> +    * LargeObject values, in 7.2 these methods were changed to work on bytea
> +    * values.  This change in functionality could be disabled by setting the
> +    * "compatible" level to be 7.1, in which case the driver will revert to
> +    * the 7.1 functionality.
> +    */
> +   public boolean haveMinimumCompatibleVersion(String ver) throws SQLException
> +   {
> +       return (compatible.compareTo(ver) >= 0);
>     }
>   
>   
> *** ./src/interfaces/jdbc/org/postgresql/Driver.java.in.orig	Sat Sep  8 23:09:58 2001
> --- ./src/interfaces/jdbc/org/postgresql/Driver.java.in	Sat Sep  8 23:08:04 2001
> ***************
> *** 85,96 ****
>      * database.
>      *
>      * <p>The java.util.Properties argument can be used to pass arbitrary
> !    * string tag/value pairs as connection arguments.  Normally, at least
>      * "user" and "password" properties should be included in the
> !    * properties.  In addition, the "charSet" property can be used to
> !    * set a character set encoding (e.g. "utf-8") other than the platform
> !    * default (typically Latin1).  This is necessary in particular if storing
> !    * multibyte characters in the database.  For a list of supported
>      * character encoding , see
>      * http://java.sun.com/products/jdk/1.2/docs/guide/internat/encoding.doc.html
>      * Note that you will probably want to have set up the Postgres database
> --- 85,110 ----
>      * database.
>      *
>      * <p>The java.util.Properties argument can be used to pass arbitrary
> !    * string tag/value pairs as connection arguments.  
> !    *
> !    * user - (optional) The user to connect as
> !    * password - (optional) The password for the user
> !    * charSet - (optional) The character set to be used for converting 
> !    *   to/from the database to unicode.  If multibyte is enabled on the 
> !    *   server then the character set of the database is used as the default,
> !    *   otherwise the jvm character encoding is used as the default.
> !    * compatible - This is used to toggle
> !    *   between different functionality as it changes across different releases
> !    *   of the jdbc driver code.  The values here are versions of the jdbc 
> !    *   client and not server versions.  For example in 7.1 get/setBytes 
> !    *   worked on LargeObject values, in 7.2 these methods were changed 
> !    *   to work on bytea values.  This change in functionality could 
> !    *   be disabled by setting the compatible level to be "7.1", in 
> !    *   which case the driver will revert to the 7.1 functionality.
> !    *
> !    * <p>Normally, at least
>      * "user" and "password" properties should be included in the
> !    * properties.  For a list of supported
>      * character encoding, see
>      * http://java.sun.com/products/jdk/1.2/docs/guide/internat/encoding.doc.html
>      * Note that you will probably want to have set up the Postgres database
> *** ./src/interfaces/jdbc/org/postgresql/jdbc1/Connection.java.orig	Sat Sep  8 23:55:32 2001
> --- ./src/interfaces/jdbc/org/postgresql/jdbc1/Connection.java	Sat Sep  8 23:57:32 2001
> ***************
> *** 174,179 ****
> --- 174,180 ----
>       "float8",
>       "bpchar","char","char2","char4","char8","char16",
>       "varchar","text","name","filename",
> +     "bytea",
>       "bool",
>       "date",
>       "time",
> ***************
> *** 197,202 ****
> --- 198,204 ----
>       Types.DOUBLE,
>       Types.CHAR,Types.CHAR,Types.CHAR,Types.CHAR,Types.CHAR,Types.CHAR,
>       Types.VARCHAR,Types.VARCHAR,Types.VARCHAR,Types.VARCHAR,
> +     Types.BINARY,
>       Types.BIT,
>       Types.DATE,
>       Types.TIME,
> *** ./src/interfaces/jdbc/org/postgresql/jdbc1/PreparedStatement.java.orig	Sun Sep  9 00:01:02 2001
> --- ./src/interfaces/jdbc/org/postgresql/jdbc1/PreparedStatement.java	Sat Sep  8 14:37:14 2001
> ***************
> *** 82,88 ****
>   	 * A Prepared SQL query is executed and its ResultSet is returned
>   	 *
>   	 * @return a ResultSet that contains the data produced by the
> ! 	 *	query - never null
>   	 * @exception SQLException if a database access error occurs
>   	 */
>   	public java.sql.ResultSet executeQuery() throws SQLException
> --- 82,88 ----
>            * A Prepared SQL query is executed and its ResultSet is returned
>            *
>            * @return a ResultSet that contains the data produced by the
> ! 	 *     	 *	query - never null
>            * @exception SQLException if a database access error occurs
>            */
>           public java.sql.ResultSet executeQuery() throws SQLException
> ***************
> *** 107,113 ****
>   	 * be executed.
>   	 *
>   	 * @return either the row count for INSERT, UPDATE or DELETE; or
> ! 	 * 	0 for SQL statements that return nothing.
>   	 * @exception SQLException if a database access error occurs
>   	 */
>   	public int executeUpdate() throws SQLException
> --- 107,113 ----
>            * be executed.
>            *
>            * @return either the row count for INSERT, UPDATE or DELETE; or
> ! 	 *     	 * 	0 for SQL statements that return nothing.
>            * @exception SQLException if a database access error occurs
>            */
>           public int executeUpdate() throws SQLException
> ***************
> *** 294,299 ****
> --- 294,308 ----
>      */
>     public void setBytes(int parameterIndex, byte x[]) throws SQLException
>     {
> +     if (connection.haveMinimumCompatibleVersion("7.2")) {
> +       //Version 7.2 supports the bytea datatype for byte arrays
> +       if(null == x){
> +         setNull(parameterIndex,Types.OTHER);
> +       } else {
> +         setString(parameterIndex, PGbytea.toPGString(x));
> +       }
> +     } else {
> +       //Version 7.1 and earlier support done as LargeObjects
>         LargeObjectManager lom = connection.getLargeObjectAPI();
>         int oid = lom.create();
>         LargeObject lob = lom.open(oid);
> ***************
> *** 301,306 ****
> --- 310,316 ----
>         lob.close();
>         setInt(parameterIndex,oid);
>       }
> +   }
>   
>           /**
>            * Set a parameter to a java.sql.Date value.  The driver converts this
> ***************
> *** 386,393 ****
> --- 396,424 ----
>            */
>           public void setAsciiStream(int parameterIndex, InputStream x, int length) throws SQLException
>           {
> +           if (connection.haveMinimumCompatibleVersion("7.2")) {
> +             //Version 7.2 supports AsciiStream for all PG text types (char, varchar, text)
> +             //As the spec/javadoc for this method indicate this is to be used for
> +             //large String values (i.e. LONGVARCHAR)  PG doesn't have a separate
> +             //long varchar datatype, but with toast all text datatypes are capable of
> +             //handling very large values.  Thus the implementation ends up calling
> +             //setString() since there is no current way to stream the value to the server
> +             try {
> +               InputStreamReader l_inStream = new InputStreamReader(x, "ASCII");
> +               char[] l_chars = new char[length];
> +               int l_charsRead = l_inStream.read(l_chars,0,length);
> +               setString(parameterIndex, new String(l_chars,0,l_charsRead));
> +             } catch (UnsupportedEncodingException l_uee) {
> +               throw new PSQLException("postgresql.unusual",l_uee);
> +             } catch (IOException l_ioe) {
> +               throw new PSQLException("postgresql.unusual",l_ioe);
> +             }
> +           } else {
> +             //Version 7.1 supported only LargeObjects by treating everything
> +             //as binary data
>               setBinaryStream(parameterIndex, x, length);
>             }
> +         }
>   
>           /**
>            * When a very large Unicode value is input to a LONGVARCHAR parameter,
> ***************
> *** 406,413 ****
> --- 437,465 ----
>            */
>           public void setUnicodeStream(int parameterIndex, InputStream x, int length) throws SQLException
>           {
> +           if (connection.haveMinimumCompatibleVersion("7.2")) {
> +             //Version 7.2 supports AsciiStream for all PG text types (char, varchar, text)
> +             //As the spec/javadoc for this method indicate this is to be used for
> +             //large String values (i.e. LONGVARCHAR)  PG doesn't have a separate
> +             //long varchar datatype, but with toast all text datatypes are capable of
> +             //handling very large values.  Thus the implementation ends up calling
> +             //setString() since there is no current way to stream the value to the server
> +             try {
> +               InputStreamReader l_inStream = new InputStreamReader(x, "UTF-8");
> +               char[] l_chars = new char[length];
> +               int l_charsRead = l_inStream.read(l_chars,0,length);
> +               setString(parameterIndex, new String(l_chars,0,l_charsRead));
> +             } catch (UnsupportedEncodingException l_uee) {
> +               throw new PSQLException("postgresql.unusual",l_uee);
> +             } catch (IOException l_ioe) {
> +               throw new PSQLException("postgresql.unusual",l_ioe);
> +             }
> +           } else {
> +             //Version 7.1 supported only LargeObjects by treating everything
> +             //as binary data
>               setBinaryStream(parameterIndex, x, length);
>             }
> +         }
>   
>           /**
>            * When a very large binary value is input to a LONGVARBINARY parameter,
> ***************
> *** 425,431 ****
>   	 */
>   	public void setBinaryStream(int parameterIndex, InputStream x, int length) throws SQLException
>   	{
> ! 	    throw org.postgresql.Driver.notImplemented();
>   	}
>   
>   	/**
> --- 477,530 ----
>            */
>           public void setBinaryStream(int parameterIndex, InputStream x, int length) throws SQLException
>           {
> !           if (connection.haveMinimumCompatibleVersion("7.2")) {
> !             //Version 7.2 supports BinaryStream for for the PG bytea type
> !             //As the spec/javadoc for this method indicate this is to be used for
> !             //large binary values (i.e. LONGVARBINARY)  PG doesn't have a separate
> !             //long binary datatype, but with toast the bytea datatype is capable of
> !             //handling very large values.  Thus the implementation ends up calling
> !             //setBytes() since there is no current way to stream the value to the server
> !             byte[] l_bytes = new byte[length];
> !             int l_bytesRead;
> !             try {
> !               l_bytesRead = x.read(l_bytes,0,length);
> !             } catch (IOException l_ioe) {
> !               throw new PSQLException("postgresql.unusual",l_ioe);
> !             }
> !             if (l_bytesRead == length) {
> !               setBytes(parameterIndex, l_bytes);
> !             } else {
> !               //the stream contained less data than they said
> !               byte[] l_bytes2 = new byte[l_bytesRead];
> !               System.arraycopy(l_bytes,0,l_bytes2,0,l_bytesRead);
> !               setBytes(parameterIndex, l_bytes2);
> !             }
> !           } else {
> !             //Version 7.1 only supported streams for LargeObjects
> !             //but the jdbc spec indicates that streams should be
> !             //available for LONGVARBINARY instead
> !             LargeObjectManager lom = connection.getLargeObjectAPI();
> !             int oid = lom.create();
> !             LargeObject lob = lom.open(oid);
> !             OutputStream los = lob.getOutputStream();
> !             try {
> !               // could be buffered, but then the OutputStream returned by LargeObject
> !               // is buffered internally anyhow, so there would be no performance
> !               // boost gained, if anything it would be worse!
> !               int c=x.read();
> !               int p=0;
> !               while(c>-1 && p<length) {
> !                 los.write(c);
> !                 c=x.read();
> !                 p++;
> !               }
> !               los.close();
> !             } catch(IOException se) {
> !               throw new PSQLException("postgresql.unusual",se);
> !             }
> !             // lob is closed by the stream so don't call lob.close()
> !             setInt(parameterIndex,oid);
> !           }
>           }
>   
>           /**
> ***************
> *** 460,467 ****
>   	 * @param x the object containing the input parameter value
>   	 * @param targetSqlType The SQL type to be send to the database
>   	 * @param scale For java.sql.Types.DECIMAL or java.sql.Types.NUMERIC
> ! 	 *	types this is the number of digits after the decimal.  For 
> ! 	 *	all other types this value will be ignored.
>   	 * @exception SQLException if a database access error occurs
>   	 */
>   	public void setObject(int parameterIndex, Object x, int targetSqlType, int scale) throws SQLException
> --- 559,566 ----
>            * @param x the object containing the input parameter value
>            * @param targetSqlType The SQL type to be send to the database
>            * @param scale For java.sql.Types.DECIMAL or java.sql.Types.NUMERIC
> ! 	 *     	 *	types this is the number of digits after the decimal.  For
> ! 	 *     	 *	all other types this value will be ignored.
>            * @exception SQLException if a database access error occurs
>            */
>           public void setObject(int parameterIndex, Object x, int targetSqlType, int scale) throws SQLException
> ***************
> *** 572,578 ****
>   	 * statements handled by executeQuery and executeUpdate
>   	 *
>   	 * @return true if the next result is a ResultSet; false if it is an
> ! 	 *	update count or there are no more results
>   	 * @exception SQLException if a database access error occurs
>   	 */
>   	public boolean execute() throws SQLException
> --- 671,677 ----
>            * statements handled by executeQuery and executeUpdate
>            *
>            * @return true if the next result is a ResultSet; false if it is an
> ! 	 *     	 *	update count or there are no more results
>            * @exception SQLException if a database access error occurs
>            */
>           public boolean execute() throws SQLException
> *** ./src/interfaces/jdbc/org/postgresql/jdbc1/ResultSet.java.orig	Sat Sep  8 23:58:31 2001
> --- ./src/interfaces/jdbc/org/postgresql/jdbc1/ResultSet.java	Sat Sep  8 22:47:01 2001
> ***************
> *** 374,383 ****
>     {
>       if (columnIndex < 1 || columnIndex > fields.length)
>         throw new PSQLException("postgresql.res.colrange");
> -     wasNullFlag = (this_row[columnIndex - 1] == null);
>       
>       // Handle OID's as BLOBS
> !     if(!wasNullFlag)
>         if( fields[columnIndex - 1].getOID() == 26) {
>   	LargeObjectManager lom = connection.getLargeObjectAPI();
>   	LargeObject lob = lom.open(getInt(columnIndex));
> --- 374,388 ----
>     {
>       if (columnIndex < 1 || columnIndex > fields.length)
>         throw new PSQLException("postgresql.res.colrange");
>   
> +     if (connection.haveMinimumCompatibleVersion("7.2")) {
> +       //Version 7.2 supports the bytea datatype for byte arrays
> +       return PGbytea.toBytes(getString(columnIndex));
> +     } else {
> +       //Version 7.1 and earlier supports LargeObjects for byte arrays
> +       wasNullFlag = (this_row[columnIndex - 1] == null);
>         // Handle OID's as BLOBS
> !       if(!wasNullFlag) {
>           if( fields[columnIndex - 1].getOID() == 26) {
>             LargeObjectManager lom = connection.getLargeObjectAPI();
>             LargeObject lob = lom.open(getInt(columnIndex));
> ***************
> *** 385,392 ****
>   	lob.close();
>   	return buf;
>         }
> !     
> !     return this_row[columnIndex - 1];
>     }
>     
>     /**
> --- 390,398 ----
>             lob.close();
>             return buf;
>           }
> !       }
> !     }
> !     return null;
>     }
>   
>     /**
> ***************
> *** 545,552 ****
> --- 551,577 ----
>      */
>     public InputStream getAsciiStream(int columnIndex) throws SQLException
>     {
> +     wasNullFlag = (this_row[columnIndex - 1] == null);
> +     if (wasNullFlag)
> +       return null;
> + 
> +     if (connection.haveMinimumCompatibleVersion("7.2")) {
> +       //Version 7.2 supports AsciiStream for all the PG text types
> +       //As the spec/javadoc for this method indicate this is to be used for
> +       //large text values (i.e. LONGVARCHAR)  PG doesn't have a separate
> +       //long string datatype, but with toast the text datatype is capable of
> +       //handling very large values.  Thus the implementation ends up calling
> +       //getString() since there is no current way to stream the value from the server
> +       try {
> +         return new ByteArrayInputStream(getString(columnIndex).getBytes("ASCII"));
> +       } catch (UnsupportedEncodingException l_uee) {
> +         throw new PSQLException("postgresql.unusual", l_uee);
> +       }
> +     } else {
> +       // In 7.1 Handle as BLOBS so return the LargeObject input stream
>         return getBinaryStream(columnIndex);
>       }
> +   }
>   
>     /**
>      * A column value can also be retrieved as a stream of Unicode
> ***************
> *** 562,569 ****
> --- 587,613 ----
>      */
>     public InputStream getUnicodeStream(int columnIndex) throws SQLException
>     {
> +     wasNullFlag = (this_row[columnIndex - 1] == null);
> +     if (wasNullFlag)
> +       return null;
> + 
> +     if (connection.haveMinimumCompatibleVersion("7.2")) {
> +       //Version 7.2 supports AsciiStream for all the PG text types
> +       //As the spec/javadoc for this method indicate this is to be used for
> +       //large text values (i.e. LONGVARCHAR)  PG doesn't have a separate
> +       //long string datatype, but with toast the text datatype is capable of
> +       //handling very large values.  Thus the implementation ends up calling
> +       //getString() since there is no current way to stream the value from the server
> +       try {
> +         return new ByteArrayInputStream(getString(columnIndex).getBytes("UTF-8"));
> +       } catch (UnsupportedEncodingException l_uee) {
> +         throw new PSQLException("postgresql.unusual", l_uee);
> +       }
> +     } else {
> +       // In 7.1 Handle as BLOBS so return the LargeObject input stream
>         return getBinaryStream(columnIndex);
>       }
> +   }
>   
>     /**
>      * A column value can also be retrieved as a binary strea.  This
> ***************
> *** 579,589 ****
>      */
>     public InputStream getBinaryStream(int columnIndex) throws SQLException
>     {
> !     byte b[] = getBytes(columnIndex);
>       
>       if (b != null)
>         return new ByteArrayInputStream(b);
> !     return null;		// SQL NULL
>     }
>     
>     /**
> --- 623,651 ----
>      */
>     public InputStream getBinaryStream(int columnIndex) throws SQLException
>     {
> !     wasNullFlag = (this_row[columnIndex - 1] == null);
> !     if (wasNullFlag)
> !       return null;
>   
> +     if (connection.haveMinimumCompatibleVersion("7.2")) {
> +       //Version 7.2 supports BinaryStream for all PG bytea type
> +       //As the spec/javadoc for this method indicate this is to be used for
> +       //large binary values (i.e. LONGVARBINARY)  PG doesn't have a separate
> +       //long binary datatype, but with toast the bytea datatype is capable of
> +       //handling very large values.  Thus the implementation ends up calling
> +       //getBytes() since there is no current way to stream the value from the server
> +       byte b[] = getBytes(columnIndex);
>         if (b != null)
>           return new ByteArrayInputStream(b);
> !     } else {
> !       // In 7.1 Handle as BLOBS so return the LargeObject input stream
> !       if( fields[columnIndex - 1].getOID() == 26) {
> !         LargeObjectManager lom = connection.getLargeObjectAPI();
> !         LargeObject lob = lom.open(getInt(columnIndex));
> !         return lob.getInputStream();
> !       }
> !     }
> !     return null;
>     }
>   
>     /**
> *** ./src/interfaces/jdbc/org/postgresql/jdbc2/Connection.java.orig	Sat Sep  8 23:29:16 2001
> --- ./src/interfaces/jdbc/org/postgresql/jdbc2/Connection.java	Sat Sep  8 23:33:29 2001
> ***************
> *** 291,302 ****
>       "float8",
>       "bpchar","char","char2","char4","char8","char16",
>       "varchar","text","name","filename",
>       "bool",
>       "date",
>       "time",
>       "abstime","timestamp",
> !     "_bool", "_char", "_int2", "_int4", "_text", "_oid", "_varchar", "_int8",
> !     "_float4", "_float8", "_abstime", "_date", "_time", "_timestamp", "_numeric"
>     };
>   
>     /**
> --- 291,305 ----
>       "float8",
>       "bpchar","char","char2","char4","char8","char16",
>       "varchar","text","name","filename",
> +     "bytea",
>       "bool",
>       "date",
>       "time",
>       "abstime","timestamp",
> !     "_bool", "_char", "_int2", "_int4", "_text", 
> !     "_oid", "_varchar", "_int8", "_float4", "_float8", 
> !     "_abstime", "_date", "_time", "_timestamp", "_numeric", 
> !     "_bytea"
>     };
>   
>     /**
> ***************
> *** 316,327 ****
>       Types.DOUBLE,
>       Types.CHAR,Types.CHAR,Types.CHAR,Types.CHAR,Types.CHAR,Types.CHAR,
>       Types.VARCHAR,Types.VARCHAR,Types.VARCHAR,Types.VARCHAR,
>       Types.BIT,
>       Types.DATE,
>       Types.TIME,
>       Types.TIMESTAMP,Types.TIMESTAMP,
> !     Types.ARRAY, Types.ARRAY, Types.ARRAY, Types.ARRAY, Types.ARRAY, Types.ARRAY, Types.ARRAY, Types.ARRAY,
> !     Types.ARRAY, Types.ARRAY, Types.ARRAY, Types.ARRAY, Types.ARRAY, Types.ARRAY, Types.ARRAY
>     };
>   
>   
> --- 319,333 ----
>       Types.DOUBLE,
>       Types.CHAR,Types.CHAR,Types.CHAR,Types.CHAR,Types.CHAR,Types.CHAR,
>       Types.VARCHAR,Types.VARCHAR,Types.VARCHAR,Types.VARCHAR,
> +     Types.BINARY,
>       Types.BIT,
>       Types.DATE,
>       Types.TIME,
>       Types.TIMESTAMP,Types.TIMESTAMP,
> !     Types.ARRAY, Types.ARRAY, Types.ARRAY, Types.ARRAY, Types.ARRAY, 
> !     Types.ARRAY, Types.ARRAY, Types.ARRAY, Types.ARRAY, Types.ARRAY, 
> !     Types.ARRAY, Types.ARRAY, Types.ARRAY, Types.ARRAY, Types.ARRAY,
> !     Types.ARRAY
>     };
>   
>   
> *** ./src/interfaces/jdbc/org/postgresql/jdbc2/PreparedStatement.java.orig	Sat Sep  8 23:35:02 2001
> --- ./src/interfaces/jdbc/org/postgresql/jdbc2/PreparedStatement.java	Sat Sep  8 14:21:29 2001
> ***************
> *** 91,97 ****
>   	 * A Prepared SQL query is executed and its ResultSet is returned
>   	 *
>   	 * @return a ResultSet that contains the data produced by the
> ! 	 *	query - never null
>   	 * @exception SQLException if a database access error occurs
>   	 */
>   	public java.sql.ResultSet executeQuery() throws SQLException
> --- 91,97 ----
>            * A Prepared SQL query is executed and its ResultSet is returned
>            *
>            * @return a ResultSet that contains the data produced by the
> !          *             *     	query - never null
>            * @exception SQLException if a database access error occurs
>            */
>           public java.sql.ResultSet executeQuery() throws SQLException
> ***************
> *** 105,111 ****
>   	 * be executed.
>   	 *
>   	 * @return either the row count for INSERT, UPDATE or DELETE; or
> ! 	 * 	0 for SQL statements that return nothing.
>   	 * @exception SQLException if a database access error occurs
>   	 */
>   	public int executeUpdate() throws SQLException
> --- 105,111 ----
>            * be executed.
>            *
>            * @return either the row count for INSERT, UPDATE or DELETE; or
> !          *             *     	0 for SQL statements that return nothing.
>            * @exception SQLException if a database access error occurs
>            */
>           public int executeUpdate() throws SQLException
> ***************
> *** 305,310 ****
> --- 305,319 ----
>      */
>     public void setBytes(int parameterIndex, byte x[]) throws SQLException
>     {
> +     if (connection.haveMinimumCompatibleVersion("7.2")) {
> +       //Version 7.2 supports the bytea datatype for byte arrays
> +       if(null == x){
> +         setNull(parameterIndex,Types.OTHER);
> +       } else {
> +         setString(parameterIndex, PGbytea.toPGString(x));
> +       }
> +     } else {
> +       //Version 7.1 and earlier support done as LargeObjects
>         LargeObjectManager lom = connection.getLargeObjectAPI();
>         int oid = lom.create();
>         LargeObject lob = lom.open(oid);
> ***************
> *** 312,317 ****
> --- 321,327 ----
>         lob.close();
>         setInt(parameterIndex,oid);
>       }
> +   }
>   
>           /**
>            * Set a parameter to a java.sql.Date value.  The driver converts this
> ***************
> *** 413,420 ****
> --- 423,451 ----
>            */
>           public void setAsciiStream(int parameterIndex, InputStream x, int length) throws SQLException
>           {
> +           if (connection.haveMinimumCompatibleVersion("7.2")) {
> +             //Version 7.2 supports AsciiStream for all PG text types (char, varchar, text)
> +             //As the spec/javadoc for this method indicate this is to be used for
> +             //large String values (i.e. LONGVARCHAR)  PG doesn't have a separate
> +             //long varchar datatype, but with toast all text datatypes are capable of
> +             //handling very large values.  Thus the implementation ends up calling
> +             //setString() since there is no current way to stream the value to the server
> +             try {
> +               InputStreamReader l_inStream = new InputStreamReader(x, "ASCII");
> +               char[] l_chars = new char[length];
> +               int l_charsRead = l_inStream.read(l_chars,0,length);
> +               setString(parameterIndex, new String(l_chars,0,l_charsRead));
> +             } catch (UnsupportedEncodingException l_uee) {
> +               throw new PSQLException("postgresql.unusual",l_uee);
> +             } catch (IOException l_ioe) {
> +               throw new PSQLException("postgresql.unusual",l_ioe);
> +             }
> +           } else {
> +             //Version 7.1 supported only LargeObjects by treating everything
> +             //as binary data
>               setBinaryStream(parameterIndex, x, length);
>             }
> +         }
>   
>           /**
>            * When a very large Unicode value is input to a LONGVARCHAR parameter,
> ***************
> *** 436,443 ****
> --- 467,495 ----
>            */
>           public void setUnicodeStream(int parameterIndex, InputStream x, int length) throws SQLException
>           {
> +           if (connection.haveMinimumCompatibleVersion("7.2")) {
> +             //Version 7.2 supports UnicodeStream for all PG text types (char, varchar, text)
> +             //As the spec/javadoc for this method indicate this is to be used for
> +             //large String values (i.e. LONGVARCHAR)  PG doesn't have a separate
> +             //long varchar datatype, but with toast all text datatypes are capable of
> +             //handling very large values.  Thus the implementation ends up calling
> +             //setString() since there is no current way to stream the value to the server
> +             try {
> +               InputStreamReader l_inStream = new InputStreamReader(x, "UTF-8");
> +               char[] l_chars = new char[length];
> +               int l_charsRead = l_inStream.read(l_chars,0,length);
> +               setString(parameterIndex, new String(l_chars,0,l_charsRead));
> +             } catch (UnsupportedEncodingException l_uee) {
> +               throw new PSQLException("postgresql.unusual",l_uee);
> +             } catch (IOException l_ioe) {
> +               throw new PSQLException("postgresql.unusual",l_ioe);
> +             }
> +           } else {
> +             //Version 7.1 supported only LargeObjects by treating everything
> +             //as binary data
>               setBinaryStream(parameterIndex, x, length);
>             }
> +         }
>   
>           /**
>            * When a very large binary value is input to a LONGVARBINARY parameter,
> ***************
> *** 455,460 ****
> --- 507,538 ----
>            */
>           public void setBinaryStream(int parameterIndex, InputStream x, int length) throws SQLException
>           {
> +           if (connection.haveMinimumCompatibleVersion("7.2")) {
> +             //Version 7.2 supports BinaryStream for the PG bytea type
> +             //As the spec/javadoc for this method indicate this is to be used for
> +             //large binary values (i.e. LONGVARBINARY)  PG doesn't have a separate
> +             //long binary datatype, but with toast the bytea datatype is capable of
> +             //handling very large values.  Thus the implementation ends up calling
> +             //setBytes() since there is no current way to stream the value to the server
> +             byte[] l_bytes = new byte[length];
> +             int l_bytesRead;
> +             try {
> +               l_bytesRead = x.read(l_bytes,0,length);
> +             } catch (IOException l_ioe) {
> +               throw new PSQLException("postgresql.unusual",l_ioe);
> +             }
> +             if (l_bytesRead == length) {
> +               setBytes(parameterIndex, l_bytes);
> +             } else {
> +               //the stream contained fewer bytes than the declared length
> +               byte[] l_bytes2 = new byte[l_bytesRead];
> +               System.arraycopy(l_bytes,0,l_bytes2,0,l_bytesRead);
> +               setBytes(parameterIndex, l_bytes2);
> +             }
> +           } else {
> +             //Version 7.1 only supported streams for LargeObjects
> +             //but the jdbc spec indicates that streams should be
> +             //available for LONGVARBINARY instead
>               LargeObjectManager lom = connection.getLargeObjectAPI();
>               int oid = lom.create();
>               LargeObject lob = lom.open(oid);
> ***************
> *** 472,482 ****
>               }
>               los.close();
>             } catch(IOException se) {
> !             throw new PSQLException("postgresql.prep.is",se);
>             }
>             // lob is closed by the stream so don't call lob.close()
>             setInt(parameterIndex,oid);
>   	}
>   
>   	/**
>   	 * In general, parameter values remain in force for repeated use of a
> --- 550,561 ----
>                 }
>                 los.close();
>               } catch(IOException se) {
> !               throw new PSQLException("postgresql.unusual",se);
>               }
>               // lob is closed by the stream so don't call lob.close()
>               setInt(parameterIndex,oid);
>             }
> +         }
>   
>           /**
>            * In general, parameter values remain in force for repeated use of a
> ***************
> *** 728,738 ****
>       }
>   
>       /**
> !      * Sets a Blob - basically its similar to setBinaryStream()
>        */
>       public void setBlob(int i,Blob x) throws SQLException
>       {
> !       setBinaryStream(i,x.getBinaryStream(),(int)x.length());
>       }
>   
>       /**
> --- 807,839 ----
>       }
>   
>       /**
> !      * Sets a Blob
>        */
>       public void setBlob(int i,Blob x) throws SQLException
>       {
> !             InputStream l_inStream = x.getBinaryStream();
> !             int l_length = (int) x.length();
> !             LargeObjectManager lom = connection.getLargeObjectAPI();
> !             int oid = lom.create();
> !             LargeObject lob = lom.open(oid);
> !             OutputStream los = lob.getOutputStream();
> !             try {
> !               // could be buffered, but then the OutputStream returned by LargeObject
> !               // is buffered internally anyhow, so there would be no performance
> !               // boost gained, if anything it would be worse!
> !               int c=l_inStream.read();
> !               int p=0;
> !               while(c>-1 && p<l_length) {
> !                 los.write(c);
> !                 c=l_inStream.read();
> !                 p++;
> !               }
> !               los.close();
> !             } catch(IOException se) {
> !               throw new PSQLException("postgresql.unusual",se);
> !             }
> !             // lob is closed by the stream so don't call lob.close()
> !             setInt(i,oid);
>       }
>   
>       /**
> ***************
> *** 741,746 ****
> --- 842,866 ----
>        */
>       public void setCharacterStream(int i,java.io.Reader x,int length) throws SQLException
>       {
> +           if (connection.haveMinimumCompatibleVersion("7.2")) {
> +             //Version 7.2 supports CharacterStream for the PG text types
> +             //As the spec/javadoc for this method indicate this is to be used for
> +             //large text values (i.e. LONGVARCHAR)  PG doesn't have a separate
> +             //long varchar datatype, but with toast all the text datatypes are capable of
> +             //handling very large values.  Thus the implementation ends up calling
> +             //setString() since there is no current way to stream the value to the server
> +             char[] l_chars = new char[length];
> +             int l_charsRead;
> +             try {
> +               l_charsRead = x.read(l_chars,0,length);
> +             } catch (IOException l_ioe) {
> +               throw new PSQLException("postgresql.unusual",l_ioe);
> +             }
> +             setString(i, new String(l_chars,0,l_charsRead));
> +           } else {
> +             //Version 7.1 only supported streams for LargeObjects
> +             //but the jdbc spec indicates that streams should be
> +             //available for LONGVARCHAR instead
>               LargeObjectManager lom = connection.getLargeObjectAPI();
>               int oid = lom.create();
>               LargeObject lob = lom.open(oid);
> ***************
> *** 758,775 ****
>               }
>               los.close();
>             } catch(IOException se) {
> !             throw new PSQLException("postgresql.prep.is",se);
>             }
>             // lob is closed by the stream so don't call lob.close()
>             setInt(i,oid);
>       }
>   
>       /**
>        * New in 7.1
>        */
>       public void setClob(int i,Clob x) throws SQLException
>       {
> !       setBinaryStream(i,x.getAsciiStream(),(int)x.length());
>       }
>   
>       /**
> --- 878,918 ----
>                 }
>                 los.close();
>               } catch(IOException se) {
> !               throw new PSQLException("postgresql.unusual",se);
>               }
>               // lob is closed by the stream so don't call lob.close()
>               setInt(i,oid);
>             }
> +     }
>   
>       /**
>        * New in 7.1
>        */
>       public void setClob(int i,Clob x) throws SQLException
>       {
> !             InputStream l_inStream = x.getAsciiStream();
> !             int l_length = (int) x.length();
> !             LargeObjectManager lom = connection.getLargeObjectAPI();
> !             int oid = lom.create();
> !             LargeObject lob = lom.open(oid);
> !             OutputStream los = lob.getOutputStream();
> !             try {
> !               // could be buffered, but then the OutputStream returned by LargeObject
> !               // is buffered internally anyhow, so there would be no performance
> !               // boost gained, if anything it would be worse!
> !               int c=l_inStream.read();
> !               int p=0;
> !               while(c>-1 && p<l_length) {
> !                 los.write(c);
> !                 c=l_inStream.read();
> !                 p++;
> !               }
> !               los.close();
> !             } catch(IOException se) {
> !               throw new PSQLException("postgresql.unusual",se);
> !             }
> !             // lob is closed by the stream so don't call lob.close()
> !             setInt(i,oid);
>       }
>   
>       /**
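The setXXXStream() implementations above issue a single read(buf, 0, length) call and assume it fills the buffer, but InputStream.read() and Reader.read() may legally return fewer bytes or characters than requested even before end-of-stream. A minimal self-contained sketch of a loop that drains the stream fully (the ReadFully class and readFully helper are hypothetical, not part of the patch):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ReadFully {
    // Reads up to `length` bytes, looping until the stream is exhausted
    // or the requested count is reached, and returns an array trimmed
    // to the number of bytes actually read.
    public static byte[] readFully(InputStream in, int length) throws IOException {
        byte[] buf = new byte[length];
        int total = 0;
        while (total < length) {
            int n = in.read(buf, total, length - total);
            if (n == -1)
                break; // end of stream before `length` bytes arrived
            total += n;
        }
        if (total == length)
            return buf;
        byte[] trimmed = new byte[total];
        System.arraycopy(buf, 0, trimmed, 0, total);
        return trimmed;
    }

    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream(new byte[] {1, 2, 3});
        System.out.println(readFully(in, 5).length); // prints 3
    }
}
```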
> *** ./src/interfaces/jdbc/org/postgresql/jdbc2/ResultSet.java.orig	Sat Sep  8 23:38:56 2001
> --- ./src/interfaces/jdbc/org/postgresql/jdbc2/ResultSet.java	Sat Sep  8 23:52:24 2001
> ***************
> *** 312,321 ****
>     {
>       if (columnIndex < 1 || columnIndex > fields.length)
>         throw new PSQLException("postgresql.res.colrange");
> -     wasNullFlag = (this_row[columnIndex - 1] == null);
>   
>       // Handle OID's as BLOBS
> !     if(!wasNullFlag)
>         if( fields[columnIndex - 1].getOID() == 26) {
>   	LargeObjectManager lom = connection.getLargeObjectAPI();
>   	LargeObject lob = lom.open(getInt(columnIndex));
> --- 312,326 ----
>     {
>       if (columnIndex < 1 || columnIndex > fields.length)
>         throw new PSQLException("postgresql.res.colrange");
>   
> +     if (connection.haveMinimumCompatibleVersion("7.2")) {
> +       //Version 7.2 supports the bytea datatype for byte arrays
> +       return PGbytea.toBytes(getString(columnIndex));
> +     } else {
> +       //Version 7.1 and earlier supports LargeObjects for byte arrays
> +       wasNullFlag = (this_row[columnIndex - 1] == null);
>         // Handle OID's as BLOBS
> !       if(!wasNullFlag) {
>           if( fields[columnIndex - 1].getOID() == 26) {
>             LargeObjectManager lom = connection.getLargeObjectAPI();
>             LargeObject lob = lom.open(getInt(columnIndex));
> ***************
> *** 323,330 ****
>   	lob.close();
>   	return buf;
>         }
> ! 
> !     return this_row[columnIndex - 1];
>     }
>   
>     /**
> --- 328,336 ----
>             lob.close();
>             return buf;
>           }
> !       }
> !     }
> !     return null;
>     }
>   
>     /**
> ***************
> *** 392,399 ****
> --- 398,424 ----
>      */
>     public InputStream getAsciiStream(int columnIndex) throws SQLException
>     {
> +     wasNullFlag = (this_row[columnIndex - 1] == null);
> +     if (wasNullFlag)
> +       return null;
> + 
> +     if (connection.haveMinimumCompatibleVersion("7.2")) {
> +       //Version 7.2 supports AsciiStream for all the PG text types
> +       //As the spec/javadoc for this method indicate this is to be used for
> +       //large text values (i.e. LONGVARCHAR)  PG doesn't have a separate
> +       //long string datatype, but with toast the text datatype is capable of
> +       //handling very large values.  Thus the implementation ends up calling
> +       //getString() since there is no current way to stream the value from the server
> +       try {
> +         return new ByteArrayInputStream(getString(columnIndex).getBytes("ASCII"));
> +       } catch (UnsupportedEncodingException l_uee) {
> +         throw new PSQLException("postgresql.unusual", l_uee);
> +       }
> +     } else {
> +       // In 7.1 Handle as BLOBS so return the LargeObject input stream
>         return getBinaryStream(columnIndex);
>       }
> +   }
>   
>     /**
>      * A column value can also be retrieved as a stream of Unicode
> ***************
> *** 412,419 ****
> --- 437,463 ----
>      */
>     public InputStream getUnicodeStream(int columnIndex) throws SQLException
>     {
> +     wasNullFlag = (this_row[columnIndex - 1] == null);
> +     if (wasNullFlag)
> +       return null;
> + 
> +     if (connection.haveMinimumCompatibleVersion("7.2")) {
> +       //Version 7.2 supports UnicodeStream for all the PG text types
> +       //As the spec/javadoc for this method indicate this is to be used for
> +       //large text values (i.e. LONGVARCHAR)  PG doesn't have a separate
> +       //long string datatype, but with toast the text datatype is capable of
> +       //handling very large values.  Thus the implementation ends up calling
> +       //getString() since there is no current way to stream the value from the server
> +       try {
> +         return new ByteArrayInputStream(getString(columnIndex).getBytes("UTF-8"));
> +       } catch (UnsupportedEncodingException l_uee) {
> +         throw new PSQLException("postgresql.unusual", l_uee);
> +       }
> +     } else {
> +       // In 7.1 Handle as BLOBS so return the LargeObject input stream
>         return getBinaryStream(columnIndex);
>       }
> +   }
>   
>     /**
>    * A column value can also be retrieved as a binary stream.  This
> ***************
> *** 429,448 ****
>      */
>     public InputStream getBinaryStream(int columnIndex) throws SQLException
>     {
> !     // New in 7.1 Handle OID's as BLOBS so return the input stream
> !     if(!wasNullFlag)
>         if( fields[columnIndex - 1].getOID() == 26) {
>   	LargeObjectManager lom = connection.getLargeObjectAPI();
>   	LargeObject lob = lom.open(getInt(columnIndex));
>           return lob.getInputStream();
>         }
> ! 
> !     // Not an OID so fake the stream
> !     byte b[] = getBytes(columnIndex);
> ! 
> !     if (b != null)
> !       return new ByteArrayInputStream(b);
> !     return null;		// SQL NULL
>     }
>   
>     /**
> --- 473,501 ----
>      */
>     public InputStream getBinaryStream(int columnIndex) throws SQLException
>     {
> !     wasNullFlag = (this_row[columnIndex - 1] == null);
> !     if (wasNullFlag)
> !       return null;
> ! 
> !     if (connection.haveMinimumCompatibleVersion("7.2")) {
> !       //Version 7.2 supports BinaryStream for the PG bytea type
> !       //As the spec/javadoc for this method indicate this is to be used for
> !       //large binary values (i.e. LONGVARBINARY)  PG doesn't have a separate
> !       //long binary datatype, but with toast the bytea datatype is capable of
> !       //handling very large values.  Thus the implementation ends up calling
> !       //getBytes() since there is no current way to stream the value from the server
> !       byte b[] = getBytes(columnIndex);
> !       if (b != null)
> !         return new ByteArrayInputStream(b);
> !     } else {
> !       // In 7.1 Handle as BLOBS so return the LargeObject input stream
>         if( fields[columnIndex - 1].getOID() == 26) {
>           LargeObjectManager lom = connection.getLargeObjectAPI();
>           LargeObject lob = lom.open(getInt(columnIndex));
>           return lob.getInputStream();
>         }
> !     }
> !     return null;
>     }
>   
>     /**
> ***************
> *** 731,737 ****
>   	//if index<0, count from the end of the result set, but check
>   	//to be sure that it is not beyond the first index
>   	if (index<0)
> ! 	    if (index > -rows_size)
>   		internalIndex = rows_size+index;
>   	    else {
>   		beforeFirst();
> --- 784,790 ----
>   	//if index<0, count from the end of the result set, but check
>   	//to be sure that it is not beyond the first index
>   	if (index<0)
> ! 	    if (index >= -rows_size)
>   		internalIndex = rows_size+index;
>   	    else {
>   		beforeFirst();
> ***************
> *** 794,799 ****
> --- 847,856 ----
>   
>       public java.sql.Array getArray(int i) throws SQLException
>       {
> +         wasNullFlag = (this_row[i - 1] == null);
> +         if(wasNullFlag)
> +           return null;
> + 
>           if (i < 1 || i > fields.length)
>                   throw new PSQLException("postgresql.res.colrange");
>                   return (java.sql.Array) new org.postgresql.jdbc2.Array( connection, i, fields[i-1], this );
> ***************
> *** 826,835 ****
> --- 883,907 ----
>   
>       public java.io.Reader getCharacterStream(int i) throws SQLException
>       {
> +       wasNullFlag = (this_row[i - 1] == null);
> +       if (wasNullFlag)
> +         return null;
> + 
> +       if (connection.haveMinimumCompatibleVersion("7.2")) {
> +         //Version 7.2 supports CharacterStream for all the PG text types
> +         //As the spec/javadoc for this method indicate this is to be used for
> +         //large text values (i.e. LONGVARCHAR)  PG doesn't have a separate
> +         //long string datatype, but with toast the text datatype is capable of
> +         //handling very large values.  Thus the implementation ends up calling
> +         //getString() since there is no current way to stream the value from the server
> +         return new CharArrayReader(getString(i).toCharArray());
> +       } else {
> +         // In 7.1 Handle as BLOBS so return the LargeObject input stream
>           Encoding encoding = connection.getEncoding();
>           InputStream input = getBinaryStream(i);
>           return encoding.getDecodingReader(input);
>         }
> +     }
>   
>       /**
>        * New in 7.1
> ***************
> *** 1485,1488 ****
> --- 1557,1563 ----
>                           }
>                   }
>           }
> + 
> + 
>   }
> + 
> *** ./src/interfaces/jdbc/org/postgresql/ResultSet.java.orig	Sat Sep  8 23:12:41 2001
> --- ./src/interfaces/jdbc/org/postgresql/ResultSet.java	Fri Sep  7 10:46:01 2001
> ***************
> *** 192,198 ****
>         String s = getString(col);
>   
>         // Handle SQL Null
> !       if(s==null)
>           return null;
>   
>         // Handle Money
> --- 192,199 ----
>         String s = getString(col);
>   
>         // Handle SQL Null
> !       wasNullFlag = (this_row[col - 1] == null);
> !       if(wasNullFlag)
>           return null;
>   
>         // Handle Money

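Both files above gate the new bytea/text behavior on connection.haveMinimumCompatibleVersion("7.2"). The driver's actual implementation of that check is not shown in this patch; as an illustration only, a component-by-component numeric comparison avoids the pitfall of plain string comparison (where "7.10" would sort before "7.2"). The VersionGate class below is a hypothetical standalone sketch:

```java
public class VersionGate {
    // Compares dotted version strings numerically, component by
    // component, so "7.10" correctly sorts after "7.2". A missing
    // component is treated as 0 (i.e. "7" == "7.0").
    public static boolean haveMinimumVersion(String server, String required) {
        String[] s = server.split("\\.");
        String[] r = required.split("\\.");
        int n = Math.max(s.length, r.length);
        for (int i = 0; i < n; i++) {
            int sv = i < s.length ? Integer.parseInt(s[i]) : 0;
            int rv = i < r.length ? Integer.parseInt(r[i]) : 0;
            if (sv != rv)
                return sv > rv;
        }
        return true; // equal versions satisfy the minimum
    }

    public static void main(String[] args) {
        System.out.println(haveMinimumVersion("7.2", "7.2"));  // true
        System.out.println(haveMinimumVersion("7.1", "7.2"));  // false
        System.out.println(haveMinimumVersion("7.10", "7.2")); // true
    }
}
```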
> package org.postgresql.util;
> 
> import java.sql.*;
> 
> /**
>  * Converts to and from the postgresql bytea datatype used by the backend.
>  *
>  * $Id: Encoding.java,v 1.1 2001/07/21 18:52:11 momjian Exp $
>  */
> 
> public class PGbytea {
> 
>         /**
>          * Converts a PG bytea string (i.e. the text representation
>          * of the bytea data type) into a java byte[]
>          */
>         public static byte[] toBytes(String s) throws SQLException {
>           if(s==null)
>             return null;
>           int slength = s.length();
>           byte[] buf = new byte[slength];
>           int bufpos = 0;
>           int thebyte;
>           char nextchar;
>           char secondchar;
>           for (int i = 0; i < slength; i++) {
>             nextchar = s.charAt(i);
>             if (nextchar == '\\') {
>               secondchar = s.charAt(++i);
>               if (secondchar == '\\') {
>                 //escaped \
>                 buf[bufpos++] = (byte)'\\';
>               } else {
>                 thebyte = (secondchar-48)*64 + (s.charAt(++i)-48)*8 + (s.charAt(++i)-48);
>                 if (thebyte > 127)
>                   thebyte -= 256;
>                 buf[bufpos++] = (byte)thebyte;
>               }
>             } else {
>               buf[bufpos++] = (byte)nextchar;
>             }
>           }
>           byte[] l_return = new byte[bufpos];
>           System.arraycopy(buf,0,l_return,0,bufpos);
>           return l_return;
>         }
> 
>         /**
>          * Converts a java byte[] into a PG bytea string (i.e. the text
>          * representation of the bytea data type)
>          */
>         public static String toPGString(byte[] p_buf) throws SQLException
>         {
>           if(p_buf==null)
>             return null;
>           StringBuffer l_strbuf = new StringBuffer();
>           for (int i = 0; i < p_buf.length; i++) {
>             int l_int = (int)p_buf[i];
>             if (l_int < 0) {
>               l_int = 256 + l_int;
>             }
>             //we escape the same non-printable characters as the backend
>             //we must escape all 8bit characters otherwise when convering
>             //from java unicode to the db character set we may end up with
>             //question marks if the character set is SQL_ASCII
>             if (l_int < 040 || l_int > 0176) {
>             //escape character with the form \000, but need two \\ because of
>               //the parser
>               l_strbuf.append("\\");
>               l_strbuf.append((char)(((l_int >> 6) & 0x3)+48));
>               l_strbuf.append((char)(((l_int >> 3) & 0x7)+48));
>               l_strbuf.append((char)((l_int & 0x07)+48));
>             } else if (p_buf[i] == (byte)'\\') {
>               //escape the backslash character as \\, but need four \\\\ because
>               //of the parser
>               l_strbuf.append("\\\\");
>             } else {
>               //other characters are left alone
>               l_strbuf.append((char)p_buf[i]);
>             }
>           }
>           return l_strbuf.toString();
>         }
> 
> 
> }
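The toPGString() escaping above can be exercised in isolation. The ByteaEscapeDemo class below is a hypothetical, self-contained sketch of the same per-byte rule (a non-printable byte becomes a backslash plus three octal digits, a literal backslash is doubled, printable bytes pass through); it is illustrative only and not part of the patch:

```java
public class ByteaEscapeDemo {
    // Escapes one byte following the bytea text-format rule used in
    // PGbytea.toPGString(): bytes outside the printable range
    // 040..0176 become \ooo (three octal digits), a backslash
    // becomes \\, and all other bytes pass through unchanged.
    public static String escapeByte(byte b) {
        int v = b & 0xFF; // treat the byte as unsigned
        if (v < 040 || v > 0176) {
            return "\\" + (char)(((v >> 6) & 0x3) + '0')
                        + (char)(((v >> 3) & 0x7) + '0')
                        + (char)((v & 0x7) + '0');
        } else if (b == (byte)'\\') {
            return "\\\\";
        }
        return String.valueOf((char) v);
    }

    public static void main(String[] args) {
        System.out.println(escapeByte((byte) 0));    // prints \000
        System.out.println(escapeByte((byte) 'A'));  // prints A
        System.out.println(escapeByte((byte) '\\')); // prints \\
    }
}
```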

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman(at)candle(dot)pha(dot)pa(dot)us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
