
Re: Losing data from Postgres

From: Paul Breen <pbreen(at)computerpark(dot)co(dot)uk>
To: "pgsql-admin(at)postgreSQL(dot)org" <pgsql-admin(at)postgreSQL(dot)org>
Subject: Re: Losing data from Postgres
Date: 2000-11-16 17:38:09
Message-ID: Pine.LNX.3.96.1001116173721.6198H-100000@cpark37.computerpark.co.uk
Lists: pgsql-admin
Hello Jean-Marc,

Yeah, we get the feeling that it may be a vacuum+index related problem,
though we're not sure.  As I said, we've gone back to vacuuming only
twice a day and the problem (we hope) has gone away.  It still leaves us
feeling uneasy, though: when we fix a problem we like to understand why!

Basically we are going to monitor it for the next few weeks and if there
is no recurrence of the data loss, we will - grudgingly - consider it no
longer a problem.  I'd still like to know what the Postgres backend
messages in the log mean, especially the one about "xid table corrupted".

Anyway, thanks to everyone for their help & support, it is greatly
appreciated.  If we have any breakthroughs on this thorny subject we will
mail the list with our findings - cheers.
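For the record, the drop/vacuum/rebuild workaround Jean-Marc suggests below can be sketched roughly as follows.  This is illustrative only: sqlite3 stands in for a Postgres connection (we can't assume a live server here), and the table and index names are made up.

```python
import sqlite3

# Stand-in database with an indexed table ("orders" and
# "orders_custid_idx" are hypothetical names for illustration).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, custid INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, i % 10) for i in range(100)])
conn.commit()
conn.execute("CREATE INDEX orders_custid_idx ON orders (custid)")
conn.commit()

# The workaround: drop the index first, then vacuum, then rebuild,
# so the freshly built index is guaranteed to cover all surviving rows.
conn.execute("DROP INDEX orders_custid_idx")
conn.commit()                      # VACUUM cannot run inside a transaction
conn.execute("VACUUM")
conn.execute("CREATE INDEX orders_custid_idx ON orders (custid)")
conn.commit()

# Index-backed lookups now see every matching row again.
print(conn.execute(
    "SELECT COUNT(*) FROM orders WHERE custid = 3").fetchone()[0])  # -> 10
```

On Postgres itself the equivalent would be plain DROP INDEX / VACUUM / CREATE INDEX statements issued in that order through psql or the client library.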

Paul M. Breen, Software Engineer - Computer Park Ltd.

Tel:   (01536) 417155
Email: pbreen(at)computerpark(dot)co(dot)uk

On Wed, 15 Nov 2000, Jean-Marc Pigeon wrote:

> Hello Paul Breen
> > 
> > Hello everyone,
> > 
> > Can anyone help us?
> > 
> > We are using Postgres in a hotspare configuration, that is, we have 2
> > separate boxes both running identical versions of Postgres and every time
> > we insert|update|delete from the database we write to both boxes (at the
> > application level).  All communications to the databases are in
> > transaction blocks and if we cannot commit to both databases then we
> > rollback. 
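The application-level dual-write scheme described above (apply each statement to both boxes inside a transaction; commit only if both succeed, otherwise roll both back) can be sketched like this.  This is a minimal sketch, not their actual code: sqlite3 stands in for two separate Postgres connections, and the function and table names are made up.

```python
import sqlite3

def dual_write(conns, sql, params=()):
    """Apply one statement to every connection; commit only if all
    succeed, otherwise roll every connection back (hotspare sketch)."""
    try:
        for conn in conns:
            conn.execute(sql, params)
    except Exception:
        for conn in conns:
            conn.rollback()
        raise
    for conn in conns:
        conn.commit()

# Two independent "boxes" holding identical schemas.
a = sqlite3.connect(":memory:")
b = sqlite3.connect(":memory:")
for c in (a, b):
    c.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")
    c.commit()

dual_write([a, b], "INSERT INTO t (id, v) VALUES (?, ?)", (1, "x"))
print([c.execute("SELECT v FROM t WHERE id=1").fetchone()[0]
       for c in (a, b)])  # -> ['x', 'x']
```

Note this gives atomicity only as far as the application is disciplined: if one commit succeeds and the other then fails (crash, network), the boxes can still diverge, which is presumably why any corruption on one box is so worrying in this setup.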
> [...]
> > Originally we were vacuuming twice a day but because some of the reports
> > we produce regularly were taking too long as the database grew, we added
> > multiple indexes onto the key tables and began vacuuming every hour.  It's
> > only after doing this that we noticed the data loss - don't know if this
> > is coincidental or not.  Yesterday we went back to vacuuming only twice a
> > day. 
> 
> 	We found something similar in our application.
> 	It seems to be a vacuum+index problem: the index does
> 	not refer to ALL the data after the vacuum!
> 	
> 	If I am right, drop the index, create the index again,
> 	and your data should be found again...
> 
> 	On our side now, before doing a vacuum we drop the index,
> 	do the vacuum, then rebuild the index. The overall time is
> 	the same as doing a 'simple' vacuum.
> 
> 	Hoping that helps...
> 	
> 
> See you soon
> ==========================================================================
> Jean-Marc Pigeon		      Internet:   Jean-Marc(dot)Pigeon(at)safe(dot)ca
> SAFE Inc.		    	Phone: (514) 493-4280  Fax: (514) 493-1946
>        REGULUS,  a real time accounting/billing package for ISP
>            REGULUS' Home base <"http://www.regulus.safe.ca">
> ==========================================================================
> 


