
Re: pg_dump in stand alone backend

From: Daniel Martini <dmartini(at)uni-hohenheim(dot)de>
To: Ulrich Wisser <ulrich(dot)wisser(at)relevanttraffic(dot)se>
Cc: Gaetano Mendola <mendola(at)bigfoot(dot)com>,pgsql-general(at)postgresql(dot)org
Subject: Re: pg_dump in stand alone backend
Date: 2004-08-24 10:30:55
Message-ID: 1093343455.412b18df3c8bb@webmail.uni-hohenheim.de
Lists: pgsql-general
Hi, 
 
Citing Ulrich Wisser <ulrich(dot)wisser(at)relevanttraffic(dot)se>: 
> on my database server I have a lot of scripts running. Some of them are
> not under my control and some of these don't really behave nicely. So when
> my "maintenance period" starts they will still be running for hours and
> take exclusive locks on 100,000+ rows in various tables. Which means
> neither "vacuum full" nor "reindex" can get locks on these tables. Which
> stalls these calls and delays them to a time when my well-behaved but
> heavy duty scripts start running again. Now these scripts will be
> delayed by "vacuum" or "reindex", and when I get to my desk the next
> morning the whole system is in overload.
>
> I need to disconnect any other users and do "vacuum full verbose
> analyze", "reindex database" and reindex all tables. And for these I will
> stop the postmaster and run a standalone backend.
 
How about stopping the postmaster, running vacuum and reindex in the
standalone backend, and then starting the postmaster with a different
pg_hba.conf? Optionally, you could override a few postgresql.conf options
from the command line (e.g. max_connections,
superuser_reserved_connections) to limit access to the server to the
user doing the dump, and then do the dump. When it is finished, you
could restart the postmaster with the proper runtime environment in
place. Would this work?
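The sequence above might look roughly like the following sketch. It is
untested; the data directory, database name ("mydb"), superuser name and
backup path are all assumptions you would adapt to your installation.

```shell
# Sketch of the restricted-access maintenance window described above.
# Assumptions: $PGDATA points at the cluster, database is "mydb",
# dump taken as the "postgres" superuser.
PGDATA=${PGDATA:-/var/lib/pgsql/data}

pg_ctl stop -D "$PGDATA" -m fast

# Keep the regular pg_hba.conf aside and install one that only
# admits the local superuser.
cp "$PGDATA/pg_hba.conf" "$PGDATA/pg_hba.conf.regular"
cat > "$PGDATA/pg_hba.conf" <<'EOF'
# maintenance rules: local superuser only
local   all   postgres   trust
EOF

# Start with a tiny connection limit, mostly reserved for superusers.
pg_ctl start -D "$PGDATA" \
  -o "-c max_connections=4 -c superuser_reserved_connections=3"

pg_dump -U postgres mydb > /backup/mydb.sql

# Put the normal configuration back and restart.
pg_ctl stop -D "$PGDATA" -m fast
mv "$PGDATA/pg_hba.conf.regular" "$PGDATA/pg_hba.conf"
pg_ctl start -D "$PGDATA"
```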
 
> I figured that doing a nightly backup would be a good idea and that
> running it in standalone mode would speed up the process drastically.
 
It seems you can't tell without trying whether it is really drastically
faster than just starting the postmaster with restricted access and then
doing the dump.
Another option would be to look at the source to determine what pg_dump
actually does, write an SQL script that comes as close as possible to
that, feed it to the standalone backend, and postprocess the output back
into SQL that could be loaded by psql.
Probably a lot of work, but perhaps doable if you only care about the
actual data and not about triggers, functions, access permissions etc.
 
Regards, 
Daniel 

