From: Heikki Linnakangas <heikki(at)enterprisedb(dot)com>
To: Kees Kling <ckling(at)planet(dot)nl>
Cc: pgsql-jdbc(at)postgresql(dot)org
Subject: Re: storing large arrays of floats
Date: 2007-11-02 20:56:01
Message-ID: 472B8EE1.5090307@enterprisedb.com
Lists: pgsql-jdbc
Kees Kling wrote:
> I have to store an array of 50,000 floats in a Postgres database. At
> first I tried to store it as float[], but it takes too much time for
> JDBC to convert the data so that it can be stored in the database.
> Next I tried to store it as a byte[] array in a BYTEA column. This worked
> OK, but when I inspect the data on the server side, the data size has
> increased by a factor of 1.5 to 2 and I see a lot of escape (\0)
> characters. What I mean to do is store the bunch of data and retrieve
> subselections of the data with the help of PL/Perl functions. Now I
> can't restore the data on the server side. What is the clue? Do I have
> to store the data with a function other than
> "preparedstatement.setBytes", or must I use another column type?
Sounds like you need to rethink your schema. Instead of using an array,
consider using a child table.
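For context on the size inflation the poster describes: setBytes stores the bytes exactly; the apparent 1.5-to-2x growth comes from the bytea escape output format, which renders non-printable bytes (such as zero bytes) as backslash escapes when the column is inspected, not from the stored payload itself. A float[] packs losslessly into exactly 4 bytes per element. A minimal client-side sketch of that round trip (class and method names are illustrative, not from the thread):

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class FloatPacking {

    // Pack a float[] into a byte[] at exactly 4 bytes per element
    // (big-endian, ByteBuffer's default byte order).
    public static byte[] pack(float[] values) {
        ByteBuffer buf = ByteBuffer.allocate(values.length * 4);
        for (float v : values) {
            buf.putFloat(v);
        }
        return buf.array();
    }

    // Reverse of pack(): reconstruct the float[] from the raw bytes,
    // e.g. bytes returned by ResultSet.getBytes on a BYTEA column.
    public static float[] unpack(byte[] bytes) {
        ByteBuffer buf = ByteBuffer.wrap(bytes);
        float[] out = new float[bytes.length / 4];
        for (int i = 0; i < out.length; i++) {
            out[i] = buf.getFloat();
        }
        return out;
    }

    public static void main(String[] args) {
        float[] data = new float[50000];
        for (int i = 0; i < data.length; i++) {
            data[i] = i * 0.5f;
        }
        byte[] packed = pack(data);
        System.out.println(packed.length);                    // 200000
        System.out.println(Arrays.equals(data, unpack(packed))); // true
    }
}
```

The byte[] produced by pack() is what would be handed to PreparedStatement.setBytes; the stored column holds these exact bytes, regardless of how the escape format displays them.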
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
Previous Message: Kees Kling | 2007-11-02 20:05:37 | storing large arrays of floats
Next Message: Kevin Neufeld | 2007-11-05 18:46:33 | Parsed Query Trees