Re: Optimizations

From: Ogden <lists(at)darkstatic(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Optimizations
Date: 2010-03-05 14:09:52
Message-ID: 26852C9E-8489-4FDD-B9AB-FA5642BBC4AD@darkstatic.com
Lists: pgsql-general

On Mar 5, 2010, at 2:26 AM, Craig Ringer wrote:

> Ogden wrote:
>> We run a student scoring system with PostgreSQL as a backend. After the results for each student are entered into the system, we display many reports for them. We haven't had a problem with efficiency or speed, but it has come up that perhaps storing the rolled-up scores of each student may be better than calculating their score on the fly. I have always coded the SQL to calculate on the fly and have not seen any drawback to doing so. For a test with over 100 questions and with 950 students having taken it, it calculates all their relevant score information in less than half a second. Would there be any obvious benefit to caching the results?
>
> Caching the results would mean storing the same information in two
> places (individual scores, and aggregates calculated from them). That's
> room for error if they're permitted to get out of sync in any way for
> any reason. For that reason, and because it's complexity you don't need,
> I'd avoid it unless I had a compelling reason to do otherwise.
>
> On the other hand if you expect the number of students you have to
> report on to grow vastly then it's worth considering.
>
> If you do go ahead with it, first restructure all queries that use that
> information so they go via a view that calculates that data on the fly.
>
> Then look at replacing that view with a table that's automatically
> updated by triggers when the data source is updated (say, a student has
> a new score recorded).

Craig,

Thank you for the response and insight.

While the idea sounds good, I know storing the results will vastly increase the size of the database (the table holding the raw results is over 5 GB in one case), and calculating the results from it takes no more than a second, even for a huge data set.
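
For my own reference, I read the view-then-trigger suggestion as something along these lines. The student_answers and student_score_summary names and columns below are placeholders I made up to illustrate, not our actual schema:

-- Step 1: the on-the-fly calculation we do today, wrapped in a view.
CREATE VIEW student_scores AS
SELECT test_id,
       student_id,
       sum(points_awarded) AS total_score
FROM   student_answers
GROUP  BY test_id, student_id;

-- Step 2: a cached summary table, kept in sync by a trigger on the
-- source table so the two copies cannot silently drift apart.
CREATE TABLE student_score_summary (
    test_id     integer NOT NULL,
    student_id  integer NOT NULL,
    total_score numeric NOT NULL DEFAULT 0,
    PRIMARY KEY (test_id, student_id)
);

CREATE OR REPLACE FUNCTION refresh_score_summary() RETURNS trigger AS $$
BEGIN
    -- Recompute the one affected student's total from the detail rows.
    UPDATE student_score_summary s
       SET total_score = (SELECT coalesce(sum(a.points_awarded), 0)
                            FROM student_answers a
                           WHERE a.test_id = NEW.test_id
                             AND a.student_id = NEW.student_id)
     WHERE s.test_id = NEW.test_id
       AND s.student_id = NEW.student_id;

    IF NOT FOUND THEN
        INSERT INTO student_score_summary (test_id, student_id, total_score)
        SELECT NEW.test_id, NEW.student_id, coalesce(sum(a.points_awarded), 0)
          FROM student_answers a
         WHERE a.test_id = NEW.test_id
           AND a.student_id = NEW.student_id;
    END IF;

    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- DELETEs would need the same treatment using OLD instead of NEW.
CREATE TRIGGER student_answers_refresh_summary
AFTER INSERT OR UPDATE ON student_answers
FOR EACH ROW EXECUTE PROCEDURE refresh_score_summary();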

Would looking up precomputed results in a huge summary table actually be any faster than calculating them, or about the same? I'll have to run some tests on my end, but I am very impressed by the speed with which PostgreSQL executes aggregate functions.
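
When I run those tests, the plan is to compare roughly the following in psql (again with the made-up names from above, and test_id = 42 as an example value):

\timing on

-- On-the-fly aggregate, as the reports run today:
SELECT student_id, sum(points_awarded) AS total_score
  FROM student_answers
 WHERE test_id = 42
 GROUP BY student_id;

-- Lookup against the trigger-maintained summary table:
SELECT student_id, total_score
  FROM student_score_summary
 WHERE test_id = 42;

-- EXPLAIN ANALYZE on each query should show where the time actually goes.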

Do you suggest looking at this option only when we see the reporting slow down? At that point, do you suggest we go back to the drawing board?

Thank you

Ogden
