From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: jd(at)commandprompt(dot)com
Cc: Craig Ringer <craig(at)postnewspapers(dot)com(dot)au>, akp geek <akpgeek(at)gmail(dot)com>, pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: index row requires 10040 bytes, maximum size is 8191
Date: 2010-11-13 03:15:01
Message-ID: 8326.1289618101@sss.pgh.pa.us
Lists: pgsql-general
"Joshua D. Drake" <jd(at)commandprompt(dot)com> writes:
> On Sat, 2010-11-13 at 09:48 +0800, Craig Ringer wrote:
>> Thoughts, folks? Does this matter in practice, since anything you'd want
>> to index will in practice be small enough or a candidate for full-text
>> indexing?
> I have run into this problem maybe 3 times in my whole career, precisely
> because if you are dealing with text that big, you move to full text
> search.
Yeah, the real question here is exactly what do you think a btree index
on a large text column will get you? It seems fairly unlikely that
either simple equality or simple range checks are very useful for such
data. I guess there's some use case for uniqueness checks, which we've
seen people approximate by unique-indexing the md5 hash of the column
value.
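
For illustration, a minimal sketch of that md5-hash approach (the table and column names here are hypothetical; the point is that the unique index covers the hash, not the raw text, so its entries stay small no matter how long the documents get):

    CREATE TABLE documents (
        id   serial PRIMARY KEY,
        body text NOT NULL
    );

    -- Enforce uniqueness on the hash of the text rather than on the text
    -- itself, keeping each index entry well under the btree size limit.
    CREATE UNIQUE INDEX documents_body_md5_key ON documents (md5(body));

    -- Equality lookups must then be phrased against the same expression
    -- so the planner can use the index:
    SELECT * FROM documents WHERE md5(body) = md5('some long value');
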
BTW, the 8K limit applies after possible in-line compression, so the
actual data value causing the failure was likely considerably longer
than 10K.
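
(One way to see how much in-line compression is buying you on an existing column -- again assuming a hypothetical "documents" table with a text column "body" -- is to compare the logical length with the stored datum size:

    SELECT octet_length(body)   AS uncompressed_bytes,
           pg_column_size(body) AS stored_bytes
    FROM   documents
    ORDER  BY stored_bytes DESC
    LIMIT  10;
)
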
regards, tom lane