Re: Vacuumdb Fails: Huge Tuple

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: APseudoUtopia <apseudoutopia(at)gmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org, Oleg Bartunov <oleg(at)sai(dot)msu(dot)su>, Teodor Sigaev <teodor(at)sigaev(dot)ru>
Subject: Re: Vacuumdb Fails: Huge Tuple
Date: 2009-10-01 21:02:52
Message-ID: 4177.1254430972@sss.pgh.pa.us
Lists: pgsql-general

APseudoUtopia <apseudoutopia(at)gmail(dot)com> writes:
>> Here's what happened:
>>
>> $ vacuumdb --all --full --analyze --no-password
>> vacuumdb: vacuuming database "postgres"
>> vacuumdb: vacuuming database "web_main"
>> vacuumdb: vacuuming of database "web_main" failed: ERROR: huge tuple

> PostgreSQL 8.4.0 on i386-portbld-freebsd7.2, compiled by GCC cc (GCC)
> 4.2.1 20070719 [FreeBSD], 32-bit

This is evidently coming out of ginHeapTupleFastCollect because it's
formed a GIN tuple that is too large (either too long a word, or too
many postings, or both). I'd say that this represents a serious
degradation in usability from pre-8.4 releases: before, you would have
gotten the error upon attempting to insert the table row that triggers
the problem. Now, with the "fast insert" stuff, you don't find out
until VACUUM fails, and you have no idea where the bad data is. Not cool.
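
For anyone hitting this in the meantime, a rough way to hunt for the
offending rows -- a sketch only, assuming a hypothetical table "documents"
with a tsvector column "tsv" -- is to look for rows with huge lexeme counts
and for abnormally long lexemes, since either can push a GIN entry past the
size limit:

    -- Rows with the most lexemes (ctid identifies the physical row):
    SELECT ctid, length(tsv) AS lexeme_count
    FROM documents
    ORDER BY length(tsv) DESC
    LIMIT 20;

    -- Longest individual lexemes across the column; ts_stat scans the
    -- whole table, so this can be slow on a large relation.
    SELECT word, length(word) AS word_len
    FROM ts_stat('SELECT tsv FROM documents')
    ORDER BY length(word) DESC
    LIMIT 20;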

Oleg, Teodor, what can we do about this? Can we split an oversize
tuple into multiple entries? Can we apply suitable size checks
before instead of after the fast-insert queue?
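
A possible stopgap, sketched here with a hypothetical index name, is to
disable the pending-list path on the affected index so the size check fires
at INSERT time again (entries already sitting in the pending list still have
to be flushed by VACUUM first):

    -- Turn off the GIN "fast insert" pending list for this index.
    ALTER INDEX documents_tsv_idx SET (fastupdate = off);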

regards, tom lane
