Re: B-tree index row size limit

From: Heikki Linnakangas <hlinnaka(at)iki(dot)fi>
To: Florian Weimer <fw(at)deneb(dot)enyo(dot)de>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: B-tree index row size limit
Date: 2016-10-10 08:28:54
Message-ID: be500ee8-5056-5d80-3daa-da42ea718bed@iki.fi
Lists: pgsql-hackers

On 10/09/2016 02:39 PM, Florian Weimer wrote:
> What would it take to eliminate the B-tree index row size limit (or
> rather, increase it to several hundred megabytes)? I don't care about
> performance for index-based lookups for overlong columns, I just want
> to be able to load arbitrary data and index it.

A few ideas:

* Add support for "truncate" B-tree support functions. Long values
wouldn't be stored whole; they would be cut at a suitable length. This
would complicate things when two values differ only in the
truncated-away portion: you'd still need to be able to order them
correctly in the index, perhaps by fetching the actual values from the heap.

* Use TOAST for index datums. That would involve adding a whole new
toast table for the index, along with an index on that toast table.

* Have something like TOAST, implemented within the B-tree AM. When a
large datum is stored, chop it into chunks that are stored in special
"toast" pages in the index.

* Add smarts to the planner to support using an expression index even
if the predicate doesn't contain the expression verbatim. For example,
if you have an index on SUBSTR(column, 1, 100) and a predicate "column =
'foo'", you could use the index, if the planner just knew enough about
SUBSTR to realize that. (A manual version of this workaround is sketched
below, after the list.)

* Don't do it. Use a hash index instead. If all goes well, hash indexes
will be WAL-logged in PostgreSQL 10. (There's an example of this below, too.)
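
For what it's worth, here's a minimal sketch of doing the
expression-index workaround by hand today. The table, column and index
names are made up:

  -- Index only a prefix of the column, so each index entry stays well
  -- under the B-tree size limit.
  CREATE TABLE docs (body text);
  CREATE INDEX docs_body_prefix_idx ON docs (substr(body, 1, 100));

  -- The planner only considers the index when the query repeats the
  -- indexed expression verbatim, so the full comparison has to be
  -- written alongside it:
  SELECT *
  FROM docs
  WHERE substr(body, 1, 100) = substr('foo', 1, 100)
    AND body = 'foo';

The planner smarts above would amount to deriving that substr() clause
from "body = 'foo'" automatically, so the query wouldn't have to spell
it out.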
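
And the hash-index route, with the same made-up table, would be simply:

  -- A hash index stores only the hash code of the indexed value, not
  -- the value itself, so the column's length is not an issue; it
  -- supports equality lookups only.
  CREATE INDEX docs_body_hash_idx ON docs USING hash (body);

  SELECT * FROM docs WHERE body = 'foo';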

- Heikki
