Re: Performance advice

From: "Michael Mattox" <michael(dot)mattox(at)verideon(dot)com>
To: "Richard Huxton" <dev(at)archonet(dot)com>, <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Performance advice
Date: 2003-06-24 12:16:09
Message-ID: CJEBLDCHAADCLAGIGCOOCEJICKAA.michael.mattox@verideon.com
Lists: pgsql-performance

> Don't log your monitoring info directly into the database, log
> straight to one
> or more text-files and sync them every few seconds. Rotate the
> files once a
> minute (or whatever seems suitable). Then have a separate process
> that reads
> "old" files and processes them into the database.
>
> The big advantage - you can take the database down for a short
> period and the
> monitoring goes on. Useful for those small maintenance tasks.
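The quoted suggestion could look something like this minimal sketch (the class and file-naming scheme are my own illustration, not anything from the thread): results are appended to a text file that rotates once a minute, and a separate loader process would later read the completed files into the database.

```java
// Hypothetical sketch of the suggested approach: append monitor results to a
// rotating text file; a separate process would batch-load "old" files into
// the database, so the database can go down briefly without losing data.
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class FileLogger {
    private static final long ROTATE_MS = 60_000; // rotate once a minute
    private final Path dir;
    private PrintWriter out;
    private long openedAt;

    public FileLogger(Path dir) throws IOException {
        this.dir = dir;
        rotate();
    }

    private void rotate() throws IOException {
        if (out != null) out.close(); // closing flushes any buffered lines
        openedAt = System.currentTimeMillis();
        Path file = dir.resolve("monitor-" + openedAt + ".log");
        out = new PrintWriter(Files.newBufferedWriter(file, StandardCharsets.UTF_8));
    }

    public synchronized void log(int monitorId, String status) throws IOException {
        if (System.currentTimeMillis() - openedAt > ROTATE_MS) rotate();
        out.println(monitorId + "\t" + status);
        out.flush(); // "sync every few seconds" could batch this instead
    }
}
```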

This is a good idea, but it would take a bit of redesign to make it work. Here's
my algorithm now:

- Every 10 seconds I get a list of monitors whose nextdate <= current
time
- I put the id numbers of the monitors into a queue
- A thread from a thread pool (32 active threads) retrieves the monitor from
the database by its id, updates the nextdate timestamp, executes the
monitor, and stores the status in the database

So I have two transactions: one to update the monitor's nextdate and another
to update its status. Now that I've written that out, I see a possibility to
streamline the last step: I can wait until I update the status to also update
the nextdate. That would cut the number of transactions in half. The only
problem is I have to be sure not to add a monitor to the queue while it's
currently executing. This shouldn't be hard; I have a hashtable containing
all the active monitors.
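The streamlined version might look like this sketch (table and column names are guesses on my part): one UPDATE writes both columns, and a concurrent set keeps a monitor from being re-queued while it is still running.

```java
// Sketch of the combined flow: a single statement updates both status and
// nextdate after execution, and a set of in-flight ids guards the queue.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.util.Queue;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class MonitorRunner {
    private final Set<Integer> active = ConcurrentHashMap.newKeySet();

    // Only queue a monitor that is not already executing.
    public void maybeQueue(int monitorId, Queue<Integer> queue) {
        if (active.add(monitorId)) queue.add(monitorId);
    }

    public void run(int monitorId, Connection conn, String status, Timestamp next)
            throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "UPDATE monitor SET status = ?, nextdate = ? WHERE id = ?")) {
            ps.setString(1, status);
            ps.setTimestamp(2, next);
            ps.setInt(3, monitorId);
            ps.executeUpdate(); // one transaction instead of two
        } finally {
            active.remove(monitorId); // eligible for queueing again
        }
    }
}
```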

Thanks for the suggestion, I'm definitely going to give this some more
thought.

Michael
