On Wed, Apr 25, 2012 at 12:52 PM, Venki Ramachandran wrote:
> Now I have to run the same pgplsql on all possible combinations of
> employees and with 542 employees that is about say 300,000 unique pairs.
> So (300000 * 40)/(1000 * 60 * 60) = 3.33 hours and I have to rank them and
> show it on a screen. No user wants to wait for 3 hours, they can probably
> wait for 10 minutes (even that is too much for a UI application). How do I
> solve this scaling problem? Can I have multiple parallel sessions, each
> session having multiple processes that do a pair each at 40 ms, and then
> collate the results? Does Postgres or pgplsql have any parallel computing
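PL/pgSQL itself offers no parallelism; each backend session runs single-threaded. But the fan-out can be done from the client side by opening N connections and giving each a disjoint slice of the pair list, e.g. by taking the pair id modulo N. A minimal sketch, assuming hypothetical `employees` and `pair_scores` tables and a `compare_employees(a, b)` function wrapping your existing 40 ms comparison logic:

```sql
-- Materialize the work list once; pair_id gives us a handle to partition on.
-- 542 employees -> C(542, 2) = 146,611 unordered pairs.
CREATE TABLE employee_pairs AS
SELECT row_number() OVER () AS pair_id,
       a.emp_id             AS emp_a,
       b.emp_id             AS emp_b
FROM   employees a
JOIN   employees b ON a.emp_id < b.emp_id;

-- Each of 8 client sessions runs its own slice concurrently.
-- Session k (k = 0 .. 7) substitutes its own value for the 0 below:
INSERT INTO pair_scores (emp_a, emp_b, score)
SELECT emp_a, emp_b, compare_employees(emp_a, emp_b)
FROM   employee_pairs
WHERE  pair_id % 8 = 0;

-- Collating the results is then a single ranked query:
SELECT emp_a, emp_b, score,
       rank() OVER (ORDER BY score DESC) AS rnk
FROM   pair_scores
ORDER  BY rnk;
```

At 40 ms per pair, 146,611 unordered pairs is roughly 1.6 hours of total work, so 8 sessions brings the wall-clock time to around 12 minutes, assuming the server has the cores and I/O headroom to run the slices truly in parallel.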
How frequently does the data change? Hourly, daily, monthly?
How granular are the time frames in the typical query? Seconds, minutes,
hours, days, weeks?
I'm thinking that if you can prepare the data ahead of time, as it changes, via
a trigger or client-side code, then your problem will go away pretty quickly.
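A sketch of that precompute-as-it-changes idea, again assuming hypothetical `employees` and `pair_scores` tables and a `compare_employees(a, b)` function holding the existing comparison logic: a trigger re-scores only the 541 pairs involving the changed employee (about 22 seconds of work at 40 ms each), so the ranking screen only ever reads precomputed rows.

```sql
CREATE OR REPLACE FUNCTION rescore_employee() RETURNS trigger AS $$
BEGIN
    -- Drop the stale scores for the changed employee only.
    DELETE FROM pair_scores
    WHERE  emp_a = NEW.emp_id OR emp_b = NEW.emp_id;

    -- Recompute just the 541 pairs that include this employee,
    -- storing each pair with the lower id first.
    INSERT INTO pair_scores (emp_a, emp_b, score)
    SELECT LEAST(e.emp_id, NEW.emp_id),
           GREATEST(e.emp_id, NEW.emp_id),
           compare_employees(e.emp_id, NEW.emp_id)
    FROM   employees e
    WHERE  e.emp_id <> NEW.emp_id;

    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER employee_rescore
AFTER INSERT OR UPDATE ON employees
FOR EACH ROW EXECUTE PROCEDURE rescore_employee();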
(Posted to pgsql-performance in the thread "Parallel Scaling of a pgplsql problem".)