Re: Big array speed issues

From: "Merlin Moncure" <mmoncure(at)gmail(dot)com>
To: "Merkel Marcel (CR/AEM4)" <Marcel(dot)Merkel(at)de(dot)bosch(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Big array speed issues
Date: 2006-06-21 19:49:04
Message-ID: b42b73150606211249w34e7b750t3cd1b7682c39497e@mail.gmail.com
Lists: pgsql-performance

> Not yet. I would first like to know what the time-consuming part is and
> what a workaround would be. If you are sure that individual columns for
> every entry of the array solve the issue, I will joyfully implement it.
> The downside of this approach is that the array dimensions are not always
> the same in my scenario. But I have a workaround in mind for this issue.

The first thing I would try would be to completely normalize the file, aka:

create table data
(
    id    int,        -- identifies which array the row belongs to
    t     timestamp,  -- sample time
    map_x int,        -- array position (x)
    map_y int,        -- array position (y)
    value float       -- one value per array cell
);

and go with the denormalized approach only if this doesn't work for some reason.
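
For example, a minimal sketch of how you might index and query that layout
(the index and the literal id/timestamp values below are just assumptions
about your access pattern, not something you have to copy):

create index data_id_t_idx on data (id, t);

-- reassemble one "array" for a given id and sample time
select map_x, map_y, value
from data
where id = 1 and t = '2006-06-21 12:00'
order by map_x, map_y;

With the data broken out like this you can index and filter on exactly the
parts you need instead of pulling a whole big array into the client each time.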

merlin
