Re: query failing with out of memory error message.

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Joe Maldonado" <jmaldonado(at)webehosting(dot)biz>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: query failing with out of memory error message.
Date: 2004-06-30 02:50:56
Message-ID: 2844.1088563856@sss.pgh.pa.us
Lists: pgsql-general

"Joe Maldonado" <jmaldonado(at)webehosting(dot)biz> writes:
> I have a seemingly corrupt row in a table and wanted to look at its
> contents. When I try to query it I get the following...

> db=# select * from some_table offset 411069 limit 1;
> ERROR: invalid memory alloc request size 4294967293

> but when I select individual fields within the record I get data.

That's odd ... I'd certainly expect at least one of the individual fields
to show that same failure when selected on its own.

> Is there a way to read this row from the datafile to examine it closer?

Select "ctid" from the troublesome row to determine its block and item
number, then dump out that block with pg_filedump. If there is data
corruption it'll usually be possible to see it in the pg_filedump dump.
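Roughly, something like this (the ctid result, database OID, and paths are
made up for illustration; the -R/-i/-f flags are the usual pg_filedump
options for dumping a single block with item details, but check the help
output of the pg_filedump version you have):

    -- ctid is (block number, item number) of the row's physical location
    db=# select ctid from some_table offset 411069 limit 1;
      ctid
    ----------
     (2742,5)        -- hypothetical result

    -- find the relation's file under the data directory
    db=# select relfilenode from pg_class where relname = 'some_table';
    db=# select oid from pg_database where datname = current_database();

    $ pg_filedump -i -f -R 2742 $PGDATA/base/<dboid>/<relfilenode>

A mangled length word or garbage in the tuple header usually stands out
pretty clearly in the formatted dump.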

Another line of attack is to attach to the backend process with gdb and
set a breakpoint at errfinish (or elog if a pre-7.4 backend), and then
get a stack trace back from the error report. This will help narrow
down exactly where the bogus allocation request is coming from.
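A sketch of that gdb session, assuming you can identify the backend's PID
(pg_backend_pid() if your version has it, otherwise pick it out of ps) and
that the postgres executable was built with debugging symbols:

    $ gdb /path/to/postgres <backend-pid>
    (gdb) break errfinish        # use "break elog" on a pre-7.4 backend
    (gdb) continue
    # re-run the failing query in that session; when the breakpoint fires:
    (gdb) bt                     # backtrace showing where the bad alloc request originates
    (gdb) detach
    (gdb) quit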

regards, tom lane
