Best practice to get performance

From: Fredric Fredricson <Fredric(dot)Fredricson(at)bonetmail(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Best practice to get performance
Date: 2010-11-18 22:56:12
Message-ID: 4CE5AF0C.6090705@bonetmail.com
Lists: pgsql-general

Hi,
I have designed a handful of databases but am absolutely no SQL expert.
Nor have I had any formal database training, and I have never worked
with someone who had. What I know about SQL I have read in the
documentation, found with Google, and learned from my numerous mistakes.

My question is somewhat related to the "unlogged tables" proposal
that is being discussed in another thread.

The background is that I am designing a data storage that, unlike all
other data storage, has some performance requirements (yes, that was a
joke ;-).
What I have done to handle this is to create "lookup" tables that cache
preprocessed information. The simplest case is a row count, but they
also hold results of selects with joins and GROUP BY clauses. These
tables are updated either on demand (on first use), by triggers, or
periodically.
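
To make it concrete, the row-count case looks roughly like the sketch
below. This is a simplified example, not my actual schema; the table,
column and function names ("orders", "orders_rowcount", etc.) are
invented for illustration.

-- Cache table holding a single precomputed row count
CREATE TABLE orders_rowcount (
    n bigint NOT NULL
);

-- Seed it from the real table once
INSERT INTO orders_rowcount SELECT count(*) FROM orders;

-- Keep it current from a trigger on the base table
CREATE OR REPLACE FUNCTION orders_rowcount_trig() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE orders_rowcount SET n = n + 1;
    ELSIF TG_OP = 'DELETE' THEN
        UPDATE orders_rowcount SET n = n - 1;
    END IF;
    RETURN NULL;  -- AFTER trigger, return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER orders_rowcount_maint
    AFTER INSERT OR DELETE ON orders
    FOR EACH ROW EXECUTE PROCEDURE orders_rowcount_trig();

The more complex caches (joins, aggregates) are refreshed on demand or
on a schedule instead of row by row.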

I assumed this was fairly standard practice, and when I read about
unlogged tables these cache tables were the first use that came to my
mind. Since the lookup tables exist only for performance and contain
redundant data, losing their contents at a restart is no real problem.
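
In other words, if the proposal goes in, I imagine a cache table like
the one above could simply be declared unlogged (assuming the
CREATE UNLOGGED TABLE syntax from that thread):

CREATE UNLOGGED TABLE orders_rowcount (
    n bigint NOT NULL
);

and rebuilt from the base tables after a crash.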

What puzzles me, though, is that this use is never mentioned in the
discussions, at least as far as I can see. Am I doing something
"strange"? Is this something you should not have to do if you have a
"proper" database design?

Regards
/Fredric
