Re: Support allocating memory for large strings

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Nathan Bossart <nathandbossart(at)gmail(dot)com>
Cc: Maxim Zibitsker <max(dot)zibitsker(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Support allocating memory for large strings
Date: 2025-11-10 21:37:10
Message-ID: 1983209.1762810630@sss.pgh.pa.us
Lists: pgsql-hackers

Nathan Bossart <nathandbossart(at)gmail(dot)com> writes:
> FWIW something I am hearing about more often these days, and what I believe
> Maxim's patch is actually after, is the 1GB limit on row size. Even if
> each field doesn't exceed 1GB (which is what artifacts.md seems to
> demonstrate), heap_form_tuple() and friends can fail to construct the whole
> tuple. This doesn't seem to be covered in the existing documentation about
> limits [0].
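The arithmetic behind that failure mode can be sketched as follows (a non-authoritative illustration: 0x3FFFFFFF is PostgreSQL's MaxAllocSize from src/include/utils/memutils.h, and the sketch ignores tuple header and alignment overhead):

```python
# Ordinary palloc() requests are capped at MaxAllocSize
# (0x3FFFFFFF bytes, i.e. 1 GB minus one byte).
MAX_ALLOC_SIZE = 0x3FFFFFFF

# Three fields of ~400 MB each: individually each is well
# under the per-field limit...
field_sizes = [400 * 1024 * 1024] * 3
assert all(size <= MAX_ALLOC_SIZE for size in field_sizes)

# ...but forming the row requires one allocation for the whole
# tuple, and the combined width exceeds the same 1 GB cap.
tuple_size = sum(field_sizes)
print(tuple_size > MAX_ALLOC_SIZE)  # True: row formation fails
```

So even with every individual value within bounds, the single palloc for the assembled tuple is what trips the limit.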

Yeah. I think our hopes of relaxing the 1GB limit on individual
field values are about zero, but maybe there is some chance of
allowing tuples that are wider than that. The notion that it's
a one-line fix is still ludicrous though :-(

One big problem with a scheme like that is "what happens when
I try to make a bigger-than-1GB tuple into a composite datum?".

Another issue is what happens when a wider-than-1GB tuple needs
to be sent to or from clients. I think there are assumptions
in the wire protocol about message lengths fitting in an int,
for example. Even if the protocol were okay with it, I wouldn't
count on client libraries not to fall over.
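A rough sketch of why the protocol itself is a problem (assumption for illustration: the v3 frontend/backend protocol frames each message as a one-byte type code followed by a signed Int32 length that counts itself plus the payload):

```python
import struct

def frame_message(msg_type: bytes, payload: bytes) -> bytes:
    # v3 wire format: type byte, then a signed Int32 length
    # covering the length field itself plus the payload.
    length = len(payload) + 4
    return msg_type + struct.pack('!i', length) + payload

# An ordinary DataRow-sized payload frames without trouble.
frame_message(b'D', b'x' * 100)

# But a payload approaching 2 GB cannot be described by a
# signed Int32 length field at all:
try:
    struct.pack('!i', 2**31)
except struct.error:
    print("length does not fit in a signed Int32")
```

Even granting a raised server-side limit, every client library that parses that Int32 would have to cope as well, which is the part Tom is doubting above.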

On the whole, it's a nasty can of worms, and I stand by the
opinion that the cost-benefit ratio of removing the limit is
pretty awful.

regards, tom lane
