Arrrghhh, it was actually 10 gigs (not that it really makes any
difference); I should have waited for the file to unzip before posting!!!
Ben Webber wrote:
> Sorry, meant 2 gigs, not 10.
> An interesting suggestion, but the problem for us with storing the
> logfiles in a table is that a single day's log file is about 10 gigs
> uncompressed, so an unacceptable amount of excess data would accumulate
> in the database. It would be feasible, however, to write a script that
> imports the archived logfile into a new temporary database on a
> different server, searches it with SQL, and drops the database when
> finished.
> Thanks for the suggestion though.
> Alvaro Herrera wrote:
>> Ben Webber wrote:
>>> I wrote a shell script to find the duration and the related statement
>>> in the log file and place them one after the other if the duration is
>>> over a specified time like this:-
>>> 2008-10-31 02:00:49 GMT  [mp_live] LOG: statement: CLUSTER;
>>> 2008-10-31 02:04:42 GMT  [mp_live] LOG: duration: 232783.684 ms
>> I wonder if you'd benefit from doing CSV logs and then storing them into
>> a table. Querying using SQL is probably going to be easier (and more
>> robust -- it'd work even with embedded newlines etc).
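For reference, the CSV-log suggestion maps onto the csvlog machinery added in PostgreSQL 8.3; a rough sketch follows (the file path is hypothetical, and the full `postgres_log` table DDL is given in the PostgreSQL logging documentation):

```
# postgresql.conf -- emit CSV-format logs (PostgreSQL 8.3+)
log_destination = 'csvlog'
logging_collector = on

-- then, after creating the postgres_log table from the docs' DDL,
-- load one day's CSV file and query it:
COPY postgres_log FROM '/var/log/pgsql/postgresql-2008-10-31.csv' WITH csv;
SELECT log_time, message FROM postgres_log WHERE error_severity = 'LOG';
```

This also fits the temporary-database idea above: load the file into a scratch database on another server, query it, and drop the database when done.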
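The threshold filter Ben describes could be sketched with awk along these lines (a guess at the approach, not his actual script; the threshold value and the embedded sample log are made up for illustration):

```shell
#!/bin/sh
# Hypothetical sketch: pair each "LOG: statement:" line with the following
# "LOG: duration:" line and print both when the duration exceeds a
# threshold in milliseconds. Log format assumed from the quoted sample.
THRESHOLD_MS=60000

# Sample input built from the two lines quoted above, plus a fast query.
LOG=$(mktemp)
cat <<'EOF' > "$LOG"
2008-10-31 02:00:49 GMT  [mp_live] LOG: statement: CLUSTER;
2008-10-31 02:04:42 GMT  [mp_live] LOG: duration: 232783.684 ms
2008-10-31 02:05:00 GMT  [mp_live] LOG: statement: SELECT 1;
2008-10-31 02:05:00 GMT  [mp_live] LOG: duration: 0.412 ms
EOF

awk -v min="$THRESHOLD_MS" '
  /LOG: statement:/ { stmt = $0; next }
  /LOG: duration:/ {
    ms = $0
    sub(/.*duration: /, "", ms)   # leaves e.g. "232783.684 ms"
    if (ms + 0 > min) { print stmt; print $0 }
  }
' "$LOG"

rm -f "$LOG"
```

Only the CLUSTER pair is printed, since 232783.684 ms exceeds the 60-second threshold while 0.412 ms does not.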