
vacuum analyze fails: "Tuple is too big"

From: Palle Girgensohn <girgen(at)partitur(dot)se>
To: pgsql-bugs(at)hub(dot)org
Subject: vacuum analyze fails: "Tuple is too big"
Date: 1999-10-28 19:56:50
Message-ID: 3818AA82.872FE92B@partitur.se
Lists: pgsql-bugs
============================================================================
                        POSTGRESQL BUG REPORT TEMPLATE
============================================================================


Your name		: Palle Girgensohn
Your email address	: girgen(at)partitur(dot)se


System Configuration
---------------------
  Architecture (example: Intel Pentium)  	:   Intel Pentium

  Operating System (example: Linux 2.0.26 ELF) 	:   FreeBSD 3.3 Release

  PostgreSQL version (example: PostgreSQL-6.5.2):   PostgreSQL-6.5.2 

  Compiler used (example:  gcc 2.8.0)		:   gcc-2.7.2.3


Please enter a FULL description of your problem:
------------------------------------------------
To get around the fact that postgres limits the tuple size to 8192
bytes, I use a "bigtext" table that splits large text chunks into
smaller parts and stores them in separate tuples:

Table    = bigtext
+------------------------------+-------------------------------+-------+
|              Field           |           Type                | Length|
+------------------------------+-------------------------------+-------+
| id                           | int4                          |     4 |
| seqid                        | int4                          |     4 |
| txt                          | text                          |   var |
+------------------------------+-------------------------------+-------+
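The splitting scheme described above can be sketched like this (a
hypothetical Python helper, not part of the original report; the chunk
size and function names are assumptions — the report only suggests
staying below roughly 6500 bytes per part):

```python
# Split a large text into chunks that fit comfortably under the
# 8192-byte tuple limit, yielding (id, seqid, txt) rows for bigtext.
# CHUNK_SIZE is an assumption based on the ~6500-byte observation above.
CHUNK_SIZE = 6500

def split_bigtext(doc_id, text, chunk_size=CHUNK_SIZE):
    """Yield (id, seqid, txt) tuples for insertion into bigtext."""
    for seqid, start in enumerate(range(0, len(text), chunk_size)):
        yield (doc_id, seqid, text[start:start + chunk_size])

def join_bigtext(rows):
    """Reassemble the original text from (id, seqid, txt) rows,
    ordering the parts by seqid."""
    return "".join(txt for _, _, txt in sorted(rows, key=lambda r: r[1]))
```

Selecting the parts back ordered by seqid and concatenating them
recovers the original text.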

When checking, none of the tuples is too big (the column is named txt,
per the schema above):

foobar=> select id,seqid,length(txt) as len from bigtext order by len desc;
  id|seqid| len
----+-----+----
9590|    0|6929
9593|    0|6899
9567|    0|6894
9567|    1|6892
9595|    0|6754
9592|    1|3003
9590|    1|2939
9593|    1|2693
9595|    1|1667
9567|    2| 293
(10 rows)

but still, vacuum analyze fails:

foobar=> vacuum analyze; 
ERROR:  Tuple is too big: size 13700

normal vacuum is OK:

foobar=> vacuum;
VACUUM

When removing all tuples above size 6500:

foobar=> delete from bigtext where length(txt) > 6500;
DELETE 5
foobar=> vacuum analyze;
VACUUM

So it works now... but I thought the limit was 8192 bytes? It seems
more like 6500. How come?

I know this 8192 tuple size limit can be raised when compiling, but this
is considered very risky, right?

Please describe a way to repeat the problem.   Please try to provide a
concise reproducible example, if at all possible: 
----------------------------------------------------------------------
CREATE TABLE "bigtext" (
	"id" int4,
	"seqid" int4,
	"txt" text);

Populate it with a few rows whose txt values are about 6800 bytes each.

vacuum analyze
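One way to generate the population step (a sketch, not from the
original report; the id values and filler content are arbitrary — only
the ~6800-byte lengths matter):

```python
# Generate INSERT statements for the reproduction: a few rows whose
# txt values are exactly `size` bytes, as described above.
def repro_inserts(n_rows=5, size=6800):
    stmts = []
    for i in range(n_rows):
        # Pad/truncate the filler so the value is exactly `size` bytes.
        txt = ("row %d " % i + "x" * size)[:size]
        stmts.append(
            "INSERT INTO bigtext (id, seqid, txt) VALUES (%d, 0, '%s');"
            % (9000 + i, txt)
        )
    return stmts

if __name__ == "__main__":
    print("\n".join(repro_inserts()))
```

Feed the output to psql, then run `vacuum analyze`; per the report, on
6.5.2 this should produce the "Tuple is too big" error.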
