
Re: Table UPDATE is too slow

From: "Matt Clark" <matt(at)ymogen(dot)net>
To: "'Ron St-Pierre'" <rstpierre(at)syscor(dot)com>,"'Steinar H(dot) Gunderson'" <sgunderson(at)bigfoot(dot)com>,<pgsql-performance(at)postgresql(dot)org>
Subject: Re: Table UPDATE is too slow
Date: 2004-08-31 18:59:55
Message-ID: 001601c48f8c$b4738700$8300a8c0@solent
Lists: pgsql-general, pgsql-performance
> > That looks like poor database normalization, really. Are you sure you
> > don't want to split this into multiple tables instead of having 62
> > columns?
> >
> No, it is properly normalized. The data in this table is stock 
> fundamentals, stuff like 52 week high, ex-dividend date, etc, etc.

Hmm, the two examples you gave there are actually ripe for breaking out into
another table.  It's not quite 'normalisation', but if you have data that
changes very rarely, why not group it into a separate table?  You could have
the highly volatile data in one table, the semi-volatile stuff in another,
and the pretty static stuff in a third.  Looked at another way, if you have
sets of fields that tend to change together, group them into tables
together.  That way you will radically reduce the number of indexes that are
affected by each update.
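A rough sketch of that split (the table and column names here are invented for illustration, not from the original poster's schema): one table per volatility class, all keyed on the same identifier.

```sql
-- Hypothetical sketch: split the 62-column table by how often fields change.
CREATE TABLE stock_static (       -- essentially never changes
    ticker    text PRIMARY KEY,
    exchange  text,
    sector    text
);

CREATE TABLE stock_semivolatile ( -- changes occasionally
    ticker      text PRIMARY KEY REFERENCES stock_static,
    ex_dividend date,
    high_52wk   numeric
);

CREATE TABLE stock_volatile (     -- updated constantly
    ticker     text PRIMARY KEY REFERENCES stock_static,
    last_price numeric,
    volume     bigint
);
-- Frequent updates now touch only stock_volatile and its indexes;
-- the indexes on the other two tables are left alone.
```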

But as someone else pointed out, you should at the very least wrap your
updates in a big transaction.
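For example (table and column names invented), batching the row updates between a single BEGIN/COMMIT pair means one commit instead of one per statement:

```sql
-- Without an explicit transaction, each UPDATE commits (and syncs) on its own.
BEGIN;
UPDATE stock_volatile SET last_price = 123.45 WHERE ticker = 'ABC';
UPDATE stock_volatile SET last_price = 67.89  WHERE ticker = 'XYZ';
-- ... the rest of the batch ...
COMMIT;
```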

M



