Re: Compare rows

From: Shridhar Daithankar <shridhar_daithankar(at)persistent(dot)co(dot)in>
To: Greg Spiegelberg <gspiegelberg(at)cranel(dot)com>
Cc: PgSQL Performance ML <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Compare rows
Date: 2003-10-08 16:37:45
Message-ID: 3F843D59.3060204@persistent.co.in
Lists: pgsql-performance

Greg Spiegelberg wrote:

> The data represents metrics at a point in time on a system for
> network, disk, memory, bus, controller, and so-on. Rx, Tx, errors,
> speed, and whatever else can be gathered.
>
> We arrived at this one 642 column table after testing the whole
> process from data gathering, methods of temporarily storing then
> loading to the database. Initially, 37+ tables were in use but
> the one big-un has saved us over 3.4 minutes.

I am sure you changed the design because those 3.4 minutes were significant to you.

But I suggest you go back to the 37-table design and see where the bottleneck is.
You can probably tune a join across 37 tables much better than you can optimize
a difference between two 642-column rows.
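
Just as an illustration -- the table and column names below are my invention,
not from your schema -- a delta between two samples of a narrow per-subsystem
table is a simple self-join:

  SELECT cur.system_id,
         cur.rx_bytes - prev.rx_bytes AS rx_delta,
         cur.tx_bytes - prev.tx_bytes AS tx_delta,
         cur.errors   - prev.errors   AS err_delta
  FROM   network_stats cur
  JOIN   network_stats prev
    ON   prev.system_id   = cur.system_id
   AND   prev.sample_time = cur.sample_time - interval '5 minutes';

The planner has a much easier time with a handful of narrow columns like this
than with a comparison of two 642-column rows.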

Besides, such a large number of columns will cost heavily in terms of
fragmentation across pages. The wasted space, and the I/O it causes, could be
a significant issue for a large number of rows.
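
You can measure how much this actually costs you. (The functions below come
from later PostgreSQL releases than you are probably running, and the table
name is only a placeholder, but the idea holds:)

  -- total on-disk footprint of the table, including TOAST and indexes
  SELECT pg_size_pretty(pg_total_relation_size('metrics'));

  -- average stored width of a whole row
  SELECT avg(pg_column_size(m.*)) FROM metrics m;

If the average row width is a large fraction of the 8K page, you are fitting
very few rows per page and paying for it in I/O on every scan.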

A 642-column table is a bad design, both theoretically and from the point of
view of PostgreSQL's implementation. You did it because of a speed problem.
Now if we can resolve those speed problems, perhaps you could go back to the
other design.
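
One sketch of how the loading speed might be recovered (all names below are
placeholders, not from your setup): keep a wide staging table purely for a
fast COPY, then fan the data out into the normalized tables in one
transaction.

  BEGIN;
  COPY staging_metrics FROM '/tmp/metrics.dat';
  INSERT INTO network_stats (system_id, sample_time, rx_bytes, tx_bytes, errors)
    SELECT system_id, sample_time, net_rx, net_tx, net_errors
      FROM staging_metrics;
  -- ...one INSERT ... SELECT per subsystem table...
  TRUNCATE staging_metrics;  -- on releases where TRUNCATE cannot run inside
                             -- a transaction block, use DELETE instead
  COMMIT;

One COPY plus a few INSERT ... SELECTs inside a single transaction is usually
far cheaper than 37 separate client-side loads.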

Is it feasible for you right now, or are you too committed to the big table?

And of course, after that it is a routine PostgreSQL tuning exercise.. :-)

Shridhar
