Joshua D. Drake wrote:
> On Sat, 27 Feb 2010 00:43:48 +0000, Greg Stark <gsstark(at)mit(dot)edu> wrote:
>> I want my ability to run large batch queries without any performance
>> or reliability impact on the primary server.
> I can use any number of other technologies for high availability.
Remove "must be an instant-on failover at the same time" from the
requirements and you don't even need 9.0 to handle that; this has been a
straightforward problem to solve since 8.2. It's the combination of HA
and queries that makes things hard to do.
If you just want batch queries on another system without being concerned
about HA at the same time, the first option is to just fork the base
backup and WAL segment delivery to another server and run queries there.
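Fanning out WAL delivery like that can be done from the primary's
archive_command. A minimal sketch, assuming hypothetical hostnames
(ha-standby, reporting-standby) and rsync as the copy tool; any reliable
transfer mechanism works the same way:

```shell
#!/bin/sh
# /usr/local/bin/archive_wal.sh -- hypothetical script that ships each
# completed WAL segment to both standbys. Exits non-zero if either copy
# fails, so PostgreSQL will keep the segment and retry.
p="$1"   # %p: full path of the segment file
f="$2"   # %f: segment file name
rsync -a "$p" ha-standby:/wal_archive/"$f" &&
rsync -a "$p" reporting-standby:/wal_archive/"$f"
```

Then in postgresql.conf on the primary:

    archive_command = '/usr/local/bin/archive_wal.sh %p %f'

Each standby restores from its own /wal_archive directory, so the
reporting server's load never touches the HA pair.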
Some simple filesystem snapshot techniques will also suffice to handle
it all on the same standby. Stop warm standby recovery, snapshot,
trigger the server, run your batch job; once finished, rollback to the
snapshot, grab the latest segment files, and resume standby catchup.
Even the lame Linux LVM snapshot features can handle that job; one of my
coworkers has the whole thing scripted, this setup is so common.
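The cycle above can be sketched as a script. This is an illustrative
outline only, assuming an LVM volume vg0/pgdata mounted at
/var/lib/pgsql, a pg_standby-style trigger file, and paths that will
differ on any real system:

```shell
#!/bin/sh
# Sketch of the stop/snapshot/trigger/rollback cycle described above.
# All names here (vg0, pgdata, trigger path) are placeholders.
PGDATA=/var/lib/pgsql/data
TRIGGER=/tmp/pgsql.trigger

# 1. Stop warm standby recovery, then snapshot the data volume.
pg_ctl -D "$PGDATA" stop -m fast
lvcreate --snapshot --size 10G --name pgdata_snap /dev/vg0/pgdata

# 2. Trigger the server out of recovery and run the batch job.
touch "$TRIGGER"
pg_ctl -D "$PGDATA" start
psql -f batch_job.sql

# 3. Roll back to the snapshot (origin must be unmounted for the merge).
pg_ctl -D "$PGDATA" stop -m fast
umount /var/lib/pgsql
lvconvert --merge /dev/vg0/pgdata_snap
mount /dev/vg0/pgdata /var/lib/pgsql

# 4. Resume standby catchup; restore_command pulls the newest segments.
rm -f "$TRIGGER"
pg_ctl -D "$PGDATA" start
```

The snapshot holds only the blocks changed during the batch run, which
is why even LVM's copy-on-write overhead is tolerable here.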
And if you have to go live because there's a failover, you're back in
the same "cold standby" situation that a large max_standby_delay puts
you in, so it's not even very different from what you're going to get in
9.0 if this is your priority mix. The new version just lowers the
operational complexity involved.
Greg Smith 2ndQuadrant US Baltimore, MD
PostgreSQL Training, Services and Support