FW: More than 1 pg_database Entry for Database - Thread #2

From: Samuel Stearns <SStearns(at)internode(dot)com(dot)au>
To: "pgsql-admin(at)postgresql(dot)org" <pgsql-admin(at)postgresql(dot)org>
Subject: FW: More than 1 pg_database Entry for Database - Thread #2
Date: 2010-04-16 01:33:11
Message-ID: 68B59BEDCD36854AADBDF17E91B2937AB89DE7F7@EXCHMAIL.staff.internode.com.au
Lists: pgsql-admin

Hi,

Does anyone have any ideas on how I can avoid getting into this duplicate database scenario again? Any advice would be greatly appreciated.

Thanks,

Sam

________________________________
From: pgsql-admin-owner(at)postgresql(dot)org [mailto:pgsql-admin-owner(at)postgresql(dot)org] On Behalf Of Samuel Stearns
Sent: Wednesday, 14 April 2010 10:11 AM
To: pgsql-admin(at)postgresql(dot)org
Subject: [ADMIN] More than 1 pg_database Entry for Database - Thread #2

Howdy,

I am running version 8.3.3 and encountered a problem a couple of weeks back, with pg_dump failing with the following error:

pg_dump: query returned more than one (2) pg_database entry for database "sqlsnbs"
pg_dumpall: pg_dump failed on database "sqlsnbs", exiting

I followed your thread here:

http://www.mail-archive.com/pgsql-admin(at)postgresql(dot)org/msg25978.html

in an attempt to resolve the problem. In the end, just as that thread suggested, I dumped the old Postgres cluster and restored it into a freshly initialized one.

I did not capture the queries I was running while diagnosing the problem, but I can summarize. I ran the following query:

select xmin, cmin, xmax, cmax, oid, ctid, datname from pg_database;

which returned two rows for the duplicate database, with the same oid but different xmax values: one xmax was 0 and the other was roughly 3.7 billion. In the thread above, Tom Lane noted that such a large xmax may indicate a problem with xid wraparound, and that the stale row was never vacuumed away due to broken vacuuming procedures. We were running with autovacuum turned on, and I verified that it was working by querying pg_stat_all_tables.
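
For reference, since I did not keep the exact queries, these are the sort of checks I mean (a sketch against the standard system catalogs; the ordering is just for readability):

-- When was each table last vacuumed, manually or by autovacuum?
select schemaname, relname, last_vacuum, last_autovacuum
from pg_stat_all_tables;

-- How far along is each database toward xid wraparound?
select datname, age(datfrozenxid)
from pg_database
order by age(datfrozenxid) desc;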

I was wondering if you could please answer the following questions to help me mitigate this in the future:

1. Should I be running a scheduled vacuum analyze in addition to autovacuum (something like the sketch below)?
2. Should I be running a scheduled vacuum full?
3. Does 8.4 address this problem?
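
For context on questions 1 and 2, the kind of scheduled job I have in mind would just run a manual vacuum from cron via psql, along these lines (a sketch; the schedule and database name are placeholders):

-- Run nightly, e.g.: psql -c 'VACUUM ANALYZE;' sqlsnbs
-- Reclaims dead rows and refreshes planner statistics database-wide.
VACUUM ANALYZE;

-- VACUUM FULL rewrites tables under an exclusive lock; it is usually
-- unnecessary when routine vacuuming is keeping up.
-- VACUUM FULL;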

Thank you,

Sam
