From: jian he <jian(dot)universality(at)gmail(dot)com>
To: Vik Fearing <vik(at)postgresfriends(dot)org>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: array_random
Date: 2025-07-08 07:48:07
Message-ID: CACJufxHUeB9L3OB7p9L1Cqnv8-nqcpi8yUPWECcds+Lk1J8EpA@mail.gmail.com
Lists: pgsql-hackers
On Sat, Jul 5, 2025 at 3:32 PM Vik Fearing <vik(at)postgresfriends(dot)org> wrote:
>
> On 30/06/2025 17:04, jian he wrote:
>
> The reasons for adding array_random are:
> 1. It is more flexible than array_fill: it can fill an array with
> random values, or with a constant value (when min and max are the same).
> 2. Building a multi-dimensional PL/pgSQL equivalent of array_random
> is neither efficient nor easy.
>
>
> I am not against this at all, but what is the actual use case?
>
> --
It seems non-trivial to wrap all the generated random values into a specific
multi-dimensional array (more than 2 dimensions).
For example, say we generated 24 random values and wanted to arrange them into
a 3-dimensional array with shape [4, 3, 2].
With array_random we can simply write:
SELECT array_random(1, 6, array[4, 3, 2]);
Of course, we can do it in PL/pgSQL, but the C function would be more
convenient; see the rough sketch below.
Does this make sense?
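
For illustration, here is a minimal PL/pgSQL sketch of the 3-dimensional case.
The function name array_random_3d and its argument layout are made up for this
example and are not part of the patch; it pre-sizes the array with array_fill
and then assigns each element a random integer between lb and ub:

create or replace function array_random_3d(lb int, ub int,
                                           d1 int, d2 int, d3 int)
returns int[] as $$
declare
    result int[];
begin
    -- pre-size the array so subscripted assignment stays within bounds
    result := array_fill(0, array[d1, d2, d3]);
    for i in 1..d1 loop
        for j in 1..d2 loop
            for k in 1..d3 loop
                -- random integer in [lb, ub]
                result[i][j][k] := lb + floor(random() * (ub - lb + 1))::int;
            end loop;
        end loop;
    end loop;
    return result;
end;
$$ language plpgsql;

-- equivalent to the example above:
SELECT array_random_3d(1, 6, 4, 3, 2);

The nested loops are hard-coded to three dimensions, which is exactly why a
general C implementation that takes the dimensions as an array argument is
more convenient.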