| From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
|---|---|
| To: | Greg Hennessy <greg(dot)hennessy(at)gmail(dot)com> |
| Cc: | "Weck, Luis" <luis(dot)weck(at)pismo(dot)io>, "pgsql-general(at)lists(dot)postgresql(dot)org" <pgsql-general(at)lists(dot)postgresql(dot)org> |
| Subject: | Re: optimizing number of workers |
| Date: | 2025-07-14 18:54:42 |
| Message-ID: | 710283.1752519282@sss.pgh.pa.us |
| Lists: | pgsql-general |
Greg Hennessy <greg(dot)hennessy(at)gmail(dot)com> writes:
>> Postgres has chosen to use only a small fraction of the CPUs I have on
>> my machine. Given that the query returns an answer in about 8 seconds, it
>> may be that PostgreSQL has allocated the proper number of workers. But if
>> I wanted to try tweaking some config parameters to see whether using more
>> workers would give me an answer faster, I don't see any obvious knobs
>> to turn. Are there parameters I can adjust to see if I can increase
>> throughput? Would adjusting parallel_setup_cost or parallel_tuple_cost
>> be likely to help?
See the bit about

     * Select the number of workers based on the log of the size of
     * the relation.  This probably needs to be a good deal more
     * sophisticated, but we need something here for now.  Note that

in compute_parallel_worker(). You can move things at the margins by
changing min_parallel_table_scan_size, but that logarithmic behavior
will constrain the number of workers pretty quickly. You'd have to
change that code to assign a whole bunch of workers to one scan.
(No, I don't know why it's done like that. There might be related
discussion in our archives, but finding it could be difficult.)
regards, tom lane
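[Editor's note: the logarithmic behavior described above can be sketched as follows. This is a simplified Python model of the loop in compute_parallel_worker(), not the actual C code; it assumes the default min_parallel_table_scan_size of 1024 blocks (8MB with 8kB pages), and ignores the cap imposed by max_parallel_workers_per_gather and the separate index-page calculation.]

```python
def parallel_workers_estimate(heap_pages, min_parallel_table_scan_size=1024):
    """Simplified model of the worker-count heuristic: starting from the
    scan-size threshold, each tripling of the table size adds one worker,
    so the count grows with log base 3 of the relation size."""
    threshold = max(min_parallel_table_scan_size, 1)
    workers = 1
    while heap_pages >= threshold * 3:
        workers += 1
        threshold *= 3
    return workers

# A table must triple in size to earn each additional worker, which is why
# even very large tables end up with only a handful of workers by default.
for pages in (1024, 3072, 100_000, 1_000_000):
    print(pages, parallel_workers_estimate(pages))
```

Lowering min_parallel_table_scan_size shifts the whole curve, but because each extra worker requires another tripling, it only "moves things at the margins" as noted above.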