| From: | Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com> |
|---|---|
| To: | Colin 't Hart <colinthart(at)gmail(dot)com> |
| Cc: | PostgreSQL General <pgsql-general(at)lists(dot)postgresql(dot)org> |
| Subject: | Re: pgBadger and postgres_fdw |
| Date: | 2026-01-21 16:59:17 |
| Message-ID: | 1013bd5d-5356-497b-ae06-c02dc53caf92@aklaver.com |
| Lists: | pgsql-general |
On 1/21/26 08:12, Colin 't Hart wrote:
> 6. The 19 slowest queries in a 4 hour period are between 2 and 37
> minutes, with an average of over 10 minutes; they are all `fetch 100
> from c2`.
>
> The slowness itself isn't my question here; it was caused by having too
> few cores in the new environment, while the application was still
> assuming the higher core count and generating too many concurrent processes.
>
> My question is how to identify which connections / queries from
> postgres_fdw are generating the `fetch 100 from c2` queries, which, in
> turn, may quite possibly lead to a feature request for having these
> named uniquely.
My guess is no.
See:
https://github.com/postgres/postgres/blob/master/contrib/postgres_fdw/postgres_fdw.c
Starting at line ~5212:

    fetch_size = 100;

and ending at line ~5234:

    /* Construct command to fetch rows from remote. */
    snprintf(fetch_sql, sizeof(fetch_sql), "FETCH %d FROM c%u",
             fetch_size, cursor_number);

So c2 is just a cursor number that postgres_fdw generates internally; it carries no information about the originating query.
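To make that concrete, the command sequence postgres_fdw runs on the remote side looks roughly like this (the SELECT is a hypothetical stand-in for whatever query the foreign scan ships over):

    DECLARE c2 CURSOR FOR SELECT ...;  -- shipped query text appears only here
    FETCH 100 FROM c2;                 -- repeated until the result set is drained
    CLOSE c2;

So the query text is attached to the DECLARE, not to the FETCH statements; one way to tie them together would be to log the backend PID (%p in log_line_prefix) and match a slow FETCH back to the DECLARE earlier in the same session.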
>
> Thanks,
>
> Colin
>
--
Adrian Klaver
adrian(dot)klaver(at)aklaver(dot)com