Re: 100% failover + replication solution

From: "Shoaib Mir" <shoaibmir(at)gmail(dot)com>
To: "Moiz Kothari" <moizpostgres(at)gmail(dot)com>
Cc: "Ben Suffolk" <ben(at)vanilla(dot)net>, pgsql-admin(at)postgresql(dot)org
Subject: Re: 100% failover + replication solution
Date: 2006-10-30 13:33:00
Message-ID: bf54be870610300533l47a115aexd2a47f7c0a6da897@mail.gmail.com
Lists: pgsql-admin

Moiz,

Have you tried PGPool? It comes with a built-in load balancer as well.

For PITR in an HA scenario, I don't remember where I read it, but someone did
it like this:

- Make a base backup of the primary server, say five times a day (this depends
on the transactions happening on the db server)
- Automate these base backups in a way that they are always made on
the secondary server
- Keep archiving enabled, with your archives being saved directly to a
shared disk
- Now have the recovery.conf placed in the $PGDATA of the secondary server
- When you want to switch servers, just start the postmaster with the recovery
file in place, and that way it will come up to date with the primary server
- Once you have switched to the secondary, mark it primary and the old primary
as secondary, and keep doing the same in a loop
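The recovery.conf mentioned above can be minimal; a sketch (the shared-disk
path here is a placeholder, and restore_command must point at wherever your
archive_command writes the segments):

```
# recovery.conf in the secondary's $PGDATA
# copies archived WAL segments back from the shared disk during recovery
restore_command = 'cp /mnt/shared/wal_archive/%f %p'
```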

You can automate this easily with a few scripts.
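As a rough sketch, the backup-shipping step could be cron'd like this
(hostnames, paths, and the psql/rsync invocations are placeholder assumptions,
not a tested setup; the script only prints the commands it would run):

```shell
#!/bin/sh
# Dry-run sketch of the "base backup onto the secondary" step.
# All hosts and paths are placeholders; the echoed lines show the shape
# of what a real cron job would execute against an 8.x-era server.

PGDATA=${PGDATA:-/var/lib/pgsql/data}
STANDBY=${STANDBY:-standby.example.com}

# 1. Mark the start of a base backup so WAL bookkeeping stays consistent
echo "psql -U postgres -c \"SELECT pg_start_backup('base');\""

# 2. Ship the data directory straight onto the secondary server
echo "rsync -a --delete $PGDATA/ $STANDBY:$PGDATA/"

# 3. Finish the backup; the closing WAL segment goes to the shared archive
echo "psql -U postgres -c \"SELECT pg_stop_backup();\""
```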

There is a new feature in 8.2 that lets you set archive_timeout so that after
a specific amount of time a new WAL segment is archived. This helps on
low-transaction systems, where a WAL segment would otherwise not fill up
quickly enough to be copied to the archive folder.
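The relevant postgresql.conf settings for that could look something like this
(the shared-disk path is a placeholder; archive_timeout is the 8.2 parameter,
given in seconds):

```
# postgresql.conf on the primary
archive_command = 'cp %p /mnt/shared/wal_archive/%f'  # archive WAL to shared disk
archive_timeout = 300   # (8.2+) force a segment switch every 5 minutes
```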

Hope this helps in your case....

Thank you,
----------
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 10/30/06, Moiz Kothari <moizpostgres(at)gmail(dot)com> wrote:
>
> Shoaib,
>
> I agree that PGCluster might be a better option; I don't want to go with
> Slony because of its primary key constraints. But PGCluster is a good option;
> the only concerns are:
>
> 1) It might slow down the process a bit, as confirmation happens only after
> the transaction gets committed to all the nodes.
> 2) It is difficult to convince people, as it is an external project; if
> support for it stops, or future versions of Postgres do not work with it,
> that could be a problem.
>
> Can you elaborate more on the way PITR for HA is used for the primary and
> secondary servers? Maybe you can light a bulb in me for me to go ahead with
> the approach. I like the idea of using WAL logs because it is Postgres
> internal, and secondly it would be the fastest way of keeping databases in
> sync without slowing down the other servers.
>
> Awaiting your reply.
>
> Regards,
> Moiz Kothari
>
> On 10/30/06, Shoaib Mir <shoaibmir(at)gmail(dot)com> wrote:
> >
> > Hi Moiz,
> >
> > If I had to choose for your case, where you want to direct your selects
> > to the slave node and inserts/updates to the master, I would have opted
> > for Slony or PGCluster.
> >
> > Using PITR for HA can be a good option if you want to switch between
> > primary and secondary servers. You can store the archive files on a
> > shared disk, place a recovery file within $PGDATA, and automate the
> > process so that recovery runs on each of the primary and secondary, for
> > example 5 times a day, as it all depends on the number of transactions
> > happening on the db server. I have seen a few users doing this for the
> > routine VACUUM FULL process as a maintenance activity.
> >
> > Thanks,
> > ---------
> > Shoaib Mir
> > EnterpriseDB (www.enterprisedb.com)
> >
> > On 10/30/06, Moiz Kothari <moizpostgres(at)gmail(dot)com> wrote:
> > >
> > > Shoaib,
> > >
> > > It sure does. I saw PGCluster, but there are 3 reasons for wanting a
> > > Postgres-specific solution.
> > >
> > > 1) If PGCluster stops further development, it would be a lot more
> > > hassle when upgrading to a different version of Postgres.
> > > 2) A Postgres-specific solution would help a lot going ahead in the
> > > future.
> > > 3) Also, the architecture of PGCluster might make things slower, as it
> > > updates the complete cluster before confirming the request.
> > >
> > > There are lots of them available in the market, but I think a WAL-based
> > > solution should be available; if not, then the thought process should be
> > > there going ahead. I am expecting a solution built out of WAL logs. Let
> > > me know if you have any thoughts about it.
> > >
> > > Regards,
> > > Moiz Kothari
> > >
> > > On 10/30/06, Shoaib Mir <shoaibmir(at)gmail(dot)com> wrote:
> > > >
> > > > There is this project, which is not actually released yet, but it is
> > > > something like what you want to achieve :)
> > > >
> > > > http://pgfoundry.org/projects/pgpitrha
> > > >
> > > > Regards,
> > > > -------
> > > > Shoaib Mir
> > > > EnterpriseDB (www.enterprisedb.com)
> > > >
> > > > On 10/30/06, Ben Suffolk <ben(at)vanilla(dot)net> wrote:
> > > >
> > > > > > Guys,
> > > > > >
> > > > > > I have been thinking about this and wanted to see if it can be
> > > > > > achieved. I wanted to make a 100% failover solution for my
> > > > > > postgres databases. The first thing that comes to my mind is
> > > > > > doing it using WAL logs. Am attaching the diagram, for which I
> > > > > > will write more here.
> > > > >
> > > > > While it's not the solution you were looking for, have you seen
> > > > > PGCluster:
> > > > >
> > > > > http://pgcluster.projects.postgresql.org/index.html
> > > > >
> > > > > I have not tried it, but I was looking the other week at various
> > > > > failover-type solutions and came across it. It seems to be able to
> > > > > do what you want.
> > > > >
> > > > > Ben
> > > > >
> > > >
> > > >
> > >
> >
>
