| From: | Michael Paquier <michael(at)paquier(dot)xyz> |
|---|---|
| To: | Sami Imseih <samimseih(at)gmail(dot)com> |
| Cc: | PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org> |
| Subject: | Re: [Proposal] Adding callback support for custom statistics kinds |
| Date: | 2025-11-10 23:21:26 |
| Message-ID: | aRJzdl_cSc3XHX85@paquier.xyz |
| Lists: | pgsql-hackers |
On Mon, Nov 10, 2025 at 01:56:23PM -0600, Sami Imseih wrote:
> I started reworking the patch, but then I realized that I don't like this
> approach of using the same callback to support serializing NameData and
> serializing extra data. In the existing "to_serialized_name" callback,
> NameData is serialized instead of the hash key, meaning that
> "from_serialized_name" must be called before we create an entry. The
> callback translates the NameData to an objid, as is the case with replication
> slots, and the key is then used to create the entry.
Thanks for looking at that.
> However, in the case of serializing extra data, we want to have already
> created the entry by the time we call the callback. For example populating
> non-key fields of an entry with a dsa_pointer after reading some serialized
> data into dsa.
>
> If we do want to support a single callback, we would need extra metadata in
> the Kind registration to let the extension tell us what the callback is used
> for and to either trigger the callback before or after entry creation. I am
> not very thrilled about doing something like this, as I see 2 very different
> use-cases here.
Ah, I see your point. By keeping two callbacks, with one dedicated to
translating a key to/from a different field (NameData currently, but
it could be something else with a different size), we would for
example be able to keep the checks for duplicated entries very simple
when reading the file. Agreed that it would be good to keep the key
lookups as stable as we can.
So, what you are suggesting is a second callback, invoked once we have
called read_chunk() and write_chunk() for a PGSTAT_FILE_ENTRY_HASH or
a PGSTAT_FILE_ENTRY_NAME, letting a stats kind write the data it wants
to the main file and/or one or more extra files? I'd be fine with
that, yes, and that should work with the PGSS case in mind.
--
Michael