Re: pg_column_toast_chunk_id: a function to get a chunk ID of a TOASTed value

From: Nathan Bossart <nathandbossart(at)gmail(dot)com>
To: Yugo NAGATA <nagata(at)sraoss(dot)co(dot)jp>
Cc: jian he <jian(dot)universality(at)gmail(dot)com>, Sergei Kornilov <sk(at)zsrv(dot)org>, Nikita Malakhov <hukutoc(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: pg_column_toast_chunk_id: a function to get a chunk ID of a TOASTed value
Date: 2024-03-08 22:17:58
Message-ID: 20240308221758.GB1060542@nathanxps13
Lists: pgsql-hackers

On Fri, Mar 08, 2024 at 03:31:55PM +0900, Yugo NAGATA wrote:
> On Thu, 7 Mar 2024 16:56:17 -0600
> Nathan Bossart <nathandbossart(at)gmail(dot)com> wrote:
>> to me. Do you think it's worth adding something like a
>> pg_column_toast_num_chunks() function that returns the number of chunks for
>> the TOASTed value, too?
>
> If we want to know the number of chunks for a given chunk_id,
> we can get it with the following query.
>
> postgres=# SELECT id, (SELECT count(*) FROM pg_toast.pg_toast_16384 WHERE chunk_id = id)
> FROM (SELECT pg_column_toast_chunk_id(v) AS id FROM t);

Good point. Overall, I think this patch is in decent shape, so I'll aim to
commit it sometime next week.

> +{ oid => '8393', descr => 'chunk ID of on-disk TOASTed value',
> + proname => 'pg_column_toast_chunk_id', provolatile => 's', prorettype => 'oid',
> + proargtypes => 'any', prosrc => 'pg_column_toast_chunk_id' },

Note to self: this change requires a catversion bump.
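
For what it's worth, given that declaration ("any" argument, oid result,
stable), the expected behavior is roughly the following; just a sketch,
assuming a table t with a TOASTed text column v:

    SELECT pg_column_toast_chunk_id('inline value'::text);  -- NULL: value not stored out of line
    SELECT pg_column_toast_chunk_id(v) FROM t;               -- OID of the value in t's TOAST table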

> +INSERT INTO test_chunk_id(v1,v2)
> + VALUES (repeat('x', 1), repeat('x', 2048));

Is this guaranteed to be TOASTed for all possible page sizes?
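
If not, one way to sidestep BLCKSZ entirely might be to force external,
uncompressed storage and use a value larger than the largest supported
page size (32kB).  Just a sketch, untested on nondefault block sizes:

    -- force out-of-line, uncompressed storage for v2
    ALTER TABLE test_chunk_id ALTER COLUMN v2 SET STORAGE EXTERNAL;
    -- 64kB exceeds the maximum BLCKSZ of 32kB, so this should always be TOASTed
    INSERT INTO test_chunk_id(v1, v2)
        VALUES (repeat('x', 1), repeat('x', 64 * 1024));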

--
Nathan Bossart
Amazon Web Services: https://aws.amazon.com
