Well, it's been a fun 48 hours here and no mistake. It appears that
a view I created now takes too long to process on the current
setup (it's not my fault ;). Many screaming people and clients running
about and cursing.
So, I decided that the fix was to logically separate the historical
data from the live data (yes, it was 'requested' that the database be
built that way, but I am in the process of changing it ;).
Now, I have no problem with designing triggers that fire inserts
into the new (non-historical) tables, to keep them fresh as it were, and
then building the view on those tables instead. This way, I don't have
7-10 million old records to process during the view's runtime.
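The trigger approach described above might look something like the following. This is only a sketch in modern PL/pgSQL syntax; the table names ("events" for the big historical table, "events_live" for the slim copy) and the "created" timestamp column are invented for illustration and would need to match the real schema.

```sql
-- Hypothetical names: "events" is the full historical table,
-- "events_live" is the slim copy the view will read from.
CREATE TABLE events_live AS
    SELECT * FROM events
    WHERE created > now() - interval '30 days';

-- Trigger function: mirror every newly inserted row into the live table.
CREATE FUNCTION copy_to_live() RETURNS trigger AS $$
BEGIN
    INSERT INTO events_live VALUES (NEW.*);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER events_copy_live
    AFTER INSERT ON events
    FOR EACH ROW EXECUTE PROCEDURE copy_to_live();

-- Rebuild the view on the small table so queries never touch
-- the millions of historical rows.
CREATE VIEW recent_events AS
    SELECT * FROM events_live;
```

Old rows would still need to be aged out of events_live periodically (a nightly DELETE, say), otherwise it grows into a second historical table.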
That will hopefully make things smoother (that, and I also noticed that
the 'programmer' who wrote the Perl code that deals with the SELECT returns
doesn't use cursors but Perl structures to do it *ugh*!). The question
therefore is: what is the best/quickest way to achieve table-level
replication?
As I said above, I am thinking of having copies of all the tables,
making them 'non-historical', and then building the view on them instead.
Will this work across database instances?!
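On the cursor aside above: instead of slurping an entire SELECT result into client-side Perl structures, the client can declare a server-side cursor and pull rows in batches. A minimal sketch in plain SQL (the cursor name and the "events_live"/"created" identifiers are illustrative, not from the original post):

```sql
-- Cursors only live inside a transaction block.
BEGIN;

DECLARE big_cur CURSOR FOR
    SELECT * FROM events_live ORDER BY created;

-- Pull rows a batch at a time; the client processes each batch
-- and repeats the FETCH until no rows come back.
FETCH 1000 FROM big_cur;
FETCH 1000 FROM big_cur;

CLOSE big_cur;
COMMIT;
```

From Perl, the same pattern is the usual workaround for DBD drivers that buffer whole result sets: issue the DECLARE with $dbh->do(), then loop on a prepared "FETCH 1000 FROM big_cur" statement, keeping memory use bounded no matter how large the result set is.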
Thanks for your thoughts and comments on this. I am ~so~ annoyed
at the previous database designer that you can hear my teeth grinding.
(Oh, one last question, totally unrelated: is it possible to add in
the HTML output from a SELECT?! Just a passing thought for more speed
if I can do that ;)
Many thanks and deepest regards,