| From: | Michael Paquier <michael(at)paquier(dot)xyz> |
|---|---|
| To: | Jacob Champion <jacob(dot)champion(at)enterprisedb(dot)com> |
| Cc: | Peter Eisentraut <peter(at)eisentraut(dot)org>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, Daniel Gustafsson <daniel(at)yesql(dot)se>, Dagfinn Ilmari Mannsåker <ilmari(at)ilmari(dot)org>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org> |
| Subject: | Re: Support getrandom() for pg_strong_random() source |
| Date: | 2025-07-30 23:55:48 |
| Message-ID: | aIqxBLt4nYtb16Jf@paquier.xyz |
| Lists: | pgsql-hackers |
On Wed, Jul 30, 2025 at 02:03:53PM -0700, Jacob Champion wrote:
> On Wed, Jul 30, 2025 at 12:58 PM Peter Eisentraut <peter(at)eisentraut(dot)org> wrote:
> > I imagine a "get entropy" operation could be very slow or even blocking,
> > whereas a random number generator might just have to do some arithmetic
> > starting from the previous seed state.
>
> Agreed -- it could absolutely be slower, but if it's not slower in
> practice in a user's environment, is there a problem with using it as
> the basis for pg_strong_random()? That doesn't seem "wrong" to me; it
> just seems like a tradeoff that would take investigation.
Yeah, we need to be careful here. A blocking or less efficient
operation would be bad for UUID generation, especially in INSERT-only
workloads, and there are a lot of those these days that also want to
keep the data they gather unique across multiple nodes. I'm wondering
whether UUID generation could become a bottleneck if we are not
careful, showing up high in profiles.
--
Michael