
Re: Big array speed issues

From: "Merlin Moncure" <mmoncure(at)gmail(dot)com>
To: "Merkel Marcel (CR/AEM4)" <Marcel(dot)Merkel(at)de(dot)bosch(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Big array speed issues
Date: 2006-06-21 19:49:04
Message-ID: b42b73150606211249w34e7b750t3cd1b7682c39497e@mail.gmail.com
Lists: pgsql-performance
> Not yet. I would first like to know what the time-consuming part is and
> what a workaround would be. If you are sure that individual columns for
> every entry of the array solve the issue, I will joyfully implement it.
> The downside of this approach is that the array dimensions are not always
> the same in my scenario. But I have a workaround in mind for this issue.

The first thing I would try would be to completely normalize the file, i.e.:

create table data
(
  id int,
  t timestamp,
  map_x int,
  map_y int,
  value float
);

and go with the denormalized approach only if that doesn't work for some reason.
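As a rough sketch, access on the normalized table could then be indexed and
queried like this (the index and the lookup values below are only
illustrative; adjust them to your actual access pattern):

-- hypothetical index for lookups by id and timestamp
create index data_id_t_idx on data (id, t);

-- fetch one "array" worth of values as rows
select map_x, map_y, value
from data
where id = 42
  and t = '2006-06-21 12:00:00';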

merlin
