B-tree index row size limit

From: Florian Weimer <fw(at)deneb(dot)enyo(dot)de>
To: PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: B-tree index row size limit
Date: 2016-10-09 11:39:57
Message-ID: 87bmythhte.fsf@mid.deneb.enyo.de
Lists: pgsql-hackers

The index row size limit reared its ugly head again.

My current use of PostgreSQL is to load structured data from sources I
don't control, in order to support a wide range of queries whose
precise nature is not yet known to me. (Is this called a data
warehouse?)

Anyway, what happens from time to time is that data which loaded
successfully in the past suddenly fails to load because it now
contains a very long string. I know how to work around this, but it's
still annoying when it happens, and the workarounds can make it much,
much harder to write efficient queries.

What would it take to eliminate the B-tree index row size limit (or
rather, increase it to several hundred megabytes)? I don't care about
performance for index-based lookups for overlong columns, I just want
to be able to load arbitrary data and index it.
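For reference, a sketch of how the limit bites and of the usual workaround. The limit is roughly a third of a page (around 2.7 kB with the default 8 kB block size), and table and column names below are made up for illustration; note that highly compressible test strings may slip under the limit because the compressed datum is what gets indexed:

```sql
-- Hypothetical table loaded from an external source.
CREATE TABLE raw_import (payload text);
CREATE INDEX ON raw_import (payload);

-- Build an incompressible value well over the limit (200 * 32 bytes).
INSERT INTO raw_import
  SELECT string_agg(md5(i::text), '') FROM generate_series(1, 200) AS i;
-- Fails with an "index row size ... exceeds maximum ..." error; the
-- server's HINT suggests indexing a hash of the value instead.

-- Workaround: index a fixed-size digest rather than the value itself.
CREATE INDEX ON raw_import (md5(payload));

-- Equality lookups must then be rewritten to match the expression:
SELECT * FROM raw_import WHERE md5(payload) = md5('needle');
```

This restores the ability to load and look up arbitrary rows, but as the post notes, queries that relied on ordered scans or range predicates over the original column can no longer use the index, which is exactly the kind of efficiency cost the workaround imposes.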
