I've got a (very simple) postgresql database that records logging
information from a group of distributed applications.
It's already beginning to get somewhat large (and the projected
lifetime of the application set is 25 years...). Some
sort of rollover is going to be needed to archive old log information.
I can see two easy approaches (feel free to suggest better ones!):
(a) rename the table as an 'archive' log table and then recreate the
'active' log table.
(b) extract the old log information into an archive table,
removing it from the original
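To make approach (a) concrete, here's a minimal sketch using Python's sqlite3 module as a stand-in for PostgreSQL (the `log` table and its `ts`/`msg` columns are made-up names for illustration). The rename is a quick catalog operation, so no row data is copied:

```python
import sqlite3

# Approach (a): rename the live log table, then recreate it empty.
# sqlite3 stands in here for PostgreSQL; the table and column
# names ("log", "ts", "msg") are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (ts TEXT, msg TEXT)")
conn.execute("INSERT INTO log VALUES ('2000-05-01', 'old entry')")

# One schema operation: the existing rows never move.
conn.execute("ALTER TABLE log RENAME TO log_archive")
conn.execute("CREATE TABLE log (ts TEXT, msg TEXT)")

archived = conn.execute("SELECT COUNT(*) FROM log_archive").fetchone()[0]
active = conn.execute("SELECT COUNT(*) FROM log").fetchone()[0]
print(archived, active)  # 1 0
```

The catch is the window between the rename and the recreate: any insert arriving in that gap fails (or lands in the wrong table), so the two statements would need to run inside one transaction.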
In both cases, the archive table will later be unloaded (first
compressed, then written to tape) to conserve disk space.
Any feelings on which is a better way to go? (a) should be
nice and fast (right?), but (b) has the advantage of allowing partial
extractions - so only log information more than a week or month old
would be archived each time, plus there should be no problem with
insertions happening during the rollover process (right?).
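Approach (b) can likewise be sketched as a copy-then-delete pair wrapped in a single transaction, again using sqlite3 as a stand-in for PostgreSQL (the cutoff value and table names are hypothetical). One PostgreSQL-specific caveat to keep in mind: a DELETE leaves dead tuples behind, so the space isn't reclaimed until a VACUUM runs.

```python
import sqlite3

# Approach (b): copy rows older than a cutoff into an archive
# table, then delete them from the live table, inside one
# transaction so both succeed or fail together.
# sqlite3 stands in for PostgreSQL; names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (ts TEXT, msg TEXT)")
conn.execute("CREATE TABLE log_archive (ts TEXT, msg TEXT)")
conn.executemany("INSERT INTO log VALUES (?, ?)",
                 [("2000-04-01", "old"), ("2000-05-25", "recent")])

cutoff = "2000-05-01"
with conn:  # both statements commit (or roll back) together
    conn.execute(
        "INSERT INTO log_archive SELECT * FROM log WHERE ts < ?", (cutoff,))
    conn.execute("DELETE FROM log WHERE ts < ?", (cutoff,))

archived = conn.execute("SELECT COUNT(*) FROM log_archive").fetchone()[0]
remaining = conn.execute("SELECT COUNT(*) FROM log").fetchone()[0]
print(archived, remaining)  # 1 1
```

Because only rows below the cutoff are touched, concurrent inserts of new (recent) log entries shouldn't conflict with the rollover.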
I like (b), personally, but would like to know if anyone
sees any "gotchas", especially w.r.t. postgresql as the database.
Steve Wampler- SOLIS Project, National Solar Observatory