
Re: URGENT! pg_dump doesn't work!

From: "Nigel J. Andrews" <nandrews@investsystems.co.uk>
To: Wim <wdh@belbone.be>
Cc: pgsql-novice <pgsql-novice@postgresql.org>, pgsql-general@postgresql.org
Subject: Re: URGENT! pg_dump doesn't work!
Date: 2002-07-22 13:00:47
Lists: pgsql-general, pgsql-novice
On Mon, 22 Jul 2002, Wim wrote:

> Nigel J. Andrews wrote:
> >On Mon, 22 Jul 2002, Wim wrote:
> >
> >>Hello guys,
> >>
> >>I have a problem with my postgres 7.2.1 database.
> >>I can't perform a pg_dump on my database...
> >>The message I get back is:
> >>
> >>pg_dump: query to obtain list of tables failed: server closed the 
> >>connection unexpectedly
> >>        This probably means the server terminated abnormally
> >>        before or while processing the request.
> >>pg_dump failed on belbonedb_v2, exiting
> >>
> >>When I connect to the database and do:
> >>
> >>belbonedb_v2=# \dt networks
> >>
> >>I get:
> >>
> >>ERROR:  AllocSetFree: cannot find block containing chunk 4aee70
> >>
> >>
> >>Can I fix this error?
> >>
> >
> >
> >Is this perhaps another of those hardware errors that seem to be turning up at
> >the moment?
> >
> >So Wim, did you have improper shutdowns? Are you confident in your memory and
> >hard disk(s)?
> >
> >
> The database is never killed with the -9 and I have no problems with my 
> hard disks or memory...
> Is it a bug that can be fixed? I can create a DB with the same tables 
> and do a 'copy from/to' to transfer the data.
> 'Cause it is a large DB (tables with more than 1 million rows), I would 
> do this if I have no other option left...
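For what it's worth, the copy from/to route described above would look roughly
like this for one table (the table, database, and file names here are
illustrative, not the actual schema):

```shell
# Illustrative names and paths only -- adjust for your schema and installation.
# Dump each table to a flat file from the damaged database:
psql belbonedb_v2 -c "COPY networks TO '/tmp/networks.copy'"

# Recreate the schema in a fresh database, then load the data back:
createdb belbonedb_new
psql belbonedb_new -f schema.sql
psql belbonedb_new -c "COPY networks FROM '/tmp/networks.copy'"
```

Note that COPY with a file path runs on the server side and needs appropriate
permissions; psql's \copy does the same thing from the client side.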

When you say you can copy from the tables, have you actually tried this and
succeeded?
Have you checked the server log to see that it is giving the same message as
you see in psql?

What about the value given in the error message (4aee70): is it always the
same? Does it look like a reasonable address within a program's data space on
your system?

Having looked at the code, it seems that somewhere something is trying to free
a memory chunk bigger than the chunk limit (ALLOC_CHUNK_LIMIT, 8KB I believe
from the comments) that has either already been freed or was never allocated.
It therefore sounds a little like a pointer is being trashed somewhere. If you
could obtain a stack trace from the backend it might be useful. Look in the
directories under your data directory for core files. You may need to enable
core file dumping with something like ulimit -c unlimited before starting your
server. Alternatively, start psql, use gdb to attach to the backend process
serving it, and obtain the back trace when it faults.
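A rough sequence for getting that back trace (the paths and the backend PID
below are placeholders, not values from this installation):

```shell
# Placeholders throughout -- substitute your own data directory and PID.
# Allow core dumps, then restart the server from the same shell:
ulimit -c unlimited
pg_ctl -D /usr/local/pgsql/data restart

# After reproducing the crash, look for core files under the data directory:
find /usr/local/pgsql/data -name 'core*'
gdb /usr/local/pgsql/bin/postgres /path/to/core   # then run: bt

# Or attach to the live backend serving your psql session before it crashes:
gdb -p <backend_pid>   # then: continue, trigger the error in psql, and bt
```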

Nigel J. Andrews

Logictree Systems Limited
Computer Consultants
