Re: Table UPDATE is too slow

From: "Matt Clark" <matt(at)ymogen(dot)net>
To: "'Ron St-Pierre'" <rstpierre(at)syscor(dot)com>, "'Steinar H(dot) Gunderson'" <sgunderson(at)bigfoot(dot)com>, <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Table UPDATE is too slow
Date: 2004-08-31 18:59:55
Message-ID: 001601c48f8c$b4738700$8300a8c0@solent

> >That looks like poor database normalization, really. Are you sure you
> >don't want to split this into multiple tables instead of having 62
> >columns?
> >
> No, it is properly normalized. The data in this table is stock
> fundamentals, stuff like 52 week high, ex-dividend date, etc, etc.

Hmm, the two examples you gave are actually ripe for breaking out into
another table. It's not quite 'normalisation', but if you have data that
changes very rarely, why not group it into a separate table? You could have
the highly volatile data in one table, the semi-volatile stuff in another,
and the pretty static stuff in a third. Looked at another way: if you have
sets of fields that tend to change together, group them into the same
table. That way you radically reduce the number of indexes touched by each
update.
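
For illustration, something like this (the table and column names here are
invented, obviously, not from your actual schema):

  CREATE TABLE stock_static (
      ticker       text PRIMARY KEY,
      company_name text,
      exchange     text           -- essentially never changes
  );

  CREATE TABLE stock_semi_volatile (
      ticker      text PRIMARY KEY REFERENCES stock_static,
      high_52wk   numeric,
      ex_div_date date            -- changes quarterly-ish
  );

  CREATE TABLE stock_volatile (
      ticker     text PRIMARY KEY REFERENCES stock_static,
      last_price numeric,
      volume     bigint           -- changes on every update run
  );

  -- A price update now touches only stock_volatile and its indexes;
  -- the indexes on the other two tables are left alone.
  UPDATE stock_volatile
     SET last_price = 12.34, volume = 1000000
   WHERE ticker = 'ABC';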

But as someone else pointed out, you should at the very least wrap your
updates in a big transaction.
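I.e. rather than committing each row separately, do the whole batch in one
transaction, roughly like this (the UPDATEs are just placeholders):

  BEGIN;
  UPDATE stock_volatile SET last_price = 12.34 WHERE ticker = 'ABC';
  UPDATE stock_volatile SET last_price = 56.78 WHERE ticker = 'XYZ';
  -- ... and so on for the rest of the batch ...
  COMMIT;

That way you pay for one commit (and one WAL flush) per batch instead of
one per row.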

M
