Re: Should we optimize the `ORDER BY random() LIMIT x` case?

From: Nico Williams <nico(at)cryptonector(dot)com>
To: Vik Fearing <vik(at)postgresfriends(dot)org>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Aleksander Alekseev <aleksander(at)timescale(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>, Andrei Lepikhov <lepihov(at)gmail(dot)com>, wenhui qiu <qiuwenhuifx(at)gmail(dot)com>
Subject: Re: Should we optimize the `ORDER BY random() LIMIT x` case?
Date: 2025-05-16 21:53:29
Message-ID: aCez2Uz/yx2DTwPv@ubby
Lists: pgsql-hackers

On Fri, May 16, 2025 at 11:10:49PM +0200, Vik Fearing wrote:
> Isn't this a job for <fetch first clause>?
>
> Example:
>
> SELECT ...
> FROM ... JOIN ...
> FETCH SAMPLE FIRST 10 ROWS ONLY
>
> Then the nodeLimit could do some sort of reservoir sampling.

The query might return fewer than N rows; reservoir sampling handles
that case fine. What reservoir sampling requires is this bit of state:
the count of input rows seen so far.
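
For illustration, here is roughly the loop that implies, as a PL/pgSQL
sketch of Algorithm R (some_table, its bigint id column, and the
fixed-size sample are stand-ins, not a proposal):

    CREATE FUNCTION reservoir_sample(k int)
    RETURNS SETOF bigint
    LANGUAGE plpgsql AS $$
    DECLARE
        reservoir bigint[] := '{}';
        n bigint := 0;   -- count of input rows seen so far: the state in question
        j int;
        rec record;
    BEGIN
        FOR rec IN SELECT id FROM some_table LOOP
            n := n + 1;
            IF n <= k THEN
                reservoir := reservoir || rec.id;    -- still filling the reservoir
            ELSE
                j := 1 + floor(random() * n)::int;   -- uniform in 1..n
                IF j <= k THEN
                    reservoir[j] := rec.id;          -- replace with probability k/n
                END IF;
            END IF;
        END LOOP;
        RETURN QUERY SELECT unnest(reservoir);
    END;
    $$;

If fewer than k rows arrive you simply get them all, which is why the
running counter, not a row count known in advance, is what matters.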

The only way I know of to keep such state in a SQL query is with a
RECURSIVE CTE, but unfortunately that would require unbounded CTE size,
and it would require a way to fetch the next input row on each
iteration.
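
Just to sketch what that would look like (again assuming a some_table
with a bigint id and a sample size of 10; purely illustrative):

    WITH RECURSIVE numbered AS (
        SELECT row_number() OVER () AS rn, id
        FROM some_table
    ),
    state(rn, reservoir) AS (
        SELECT 0::bigint, ARRAY[]::bigint[]
        UNION ALL
        SELECT n.rn,
               CASE
                   WHEN n.rn <= 10 THEN s.reservoir || n.id  -- still filling
                   WHEN r.j <= 10 THEN                       -- replace slot j, prob 10/rn
                       s.reservoir[1:r.j - 1] || n.id || s.reservoir[r.j + 1:10]
                   ELSE s.reservoir                          -- keep reservoir as-is
               END
        FROM state s
        JOIN numbered n ON n.rn = s.rn + 1
        CROSS JOIN LATERAL (SELECT 1 + floor(random() * n.rn)::int AS j) r
    )
    SELECT unnest(reservoir)
    FROM (SELECT reservoir FROM state ORDER BY rn DESC LIMIT 1) AS final;

Note both problems right there: state accumulates one (rn, reservoir)
row per input row, and every iteration has to go find the row with
rn = s.rn + 1 in the materialized input.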

Nico
