From: James Coleman <jtc331(at)gmail(dot)com>
To: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Cc: David Pirotte <dpirotte(at)gmail(dot)com>
Subject: Stampede of the JIT compilers
Date: 2023-06-23 14:27:57
Message-ID: CAAaqYe-g-Q0Mm5H9QLcu8cHeMwok+HaxS4-UC9Oj3bK3a5jPvg@mail.gmail.com

Hello,

We recently brought a new database cluster online, and while ramping up
traffic to it we encountered a situation where a misplanned query
(running ANALYZE helped with this, but I think the issue is still
relevant) was compiled with JIT. Soon a large number of backends were
running that same shape of query, each of them JIT-compiling it
independently. Since each JIT compilation took ~2s, this starved the
server of resources.

There are a couple of issues here. I'm sure it's been discussed before,
and it's not the main point of this thread, but I can't help noting
that the default jit_above_cost of 100000 seems absurdly low. On good
hardware like ours, even well-planned queries with costs well above
that threshold complete faster than the JIT compilation itself takes.
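
(As a stopgap the threshold can of course be raised cluster-wide; the
value here is purely illustrative, not a recommendation:

ALTER SYSTEM SET jit_above_cost = 1000000;  -- default is 100000
SELECT pg_reload_conf();

But that only papers over the default, it doesn't address the stampede
itself.)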

But on to the actual topic of the thread: has anyone ever considered
implementing a GUC/feature like "max_concurrent_jit_compilations" to
cap the number of backends that may be JIT-compiling a query at any
given point, so that we prevent an optimization from running amok and
consuming all of a server's resources?
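
To make the idea concrete, here is a rough sketch of the mechanism I
have in mind, written against the backend's shared-memory and atomics
APIs; the GUC, struct, and function names are all hypothetical:

#include "postgres.h"

#include "port/atomics.h"
#include "storage/shmem.h"

/* hypothetical GUC; 0 means no limit */
int         max_concurrent_jit_compilations = 0;

typedef struct JitGateState
{
    pg_atomic_uint32 active_compilations;
} JitGateState;

static JitGateState *JitGate = NULL;

/* to be called from the shared-memory initialization hook */
void
JitGateShmemInit(void)
{
    bool        found;

    JitGate = ShmemInitStruct("JIT compilation gate",
                              sizeof(JitGateState), &found);
    if (!found)
        pg_atomic_init_u32(&JitGate->active_compilations, 0);
}

/*
 * Try to reserve a JIT compilation slot.  Returns false if the cap has
 * been reached; the caller would then simply execute the query without
 * JIT.  Every true return must be paired with JitGateRelease().
 */
bool
JitGateAcquire(void)
{
    uint32      old;

    if (max_concurrent_jit_compilations <= 0)
        return true;            /* feature disabled */

    old = pg_atomic_fetch_add_u32(&JitGate->active_compilations, 1);
    if (old >= (uint32) max_concurrent_jit_compilations)
    {
        /* over the cap: back out and skip JIT for this query */
        pg_atomic_fetch_sub_u32(&JitGate->active_compilations, 1);
        return false;
    }
    return true;
}

void
JitGateRelease(void)
{
    if (max_concurrent_jit_compilations > 0)
        pg_atomic_fetch_sub_u32(&JitGate->active_compilations, 1);
}

The important design choice is that a backend over the cap falls back
to interpreted execution rather than waiting for a slot, so under load
the feature degrades to "no JIT" instead of adding a new queue to
contend on.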

Regards,
James Coleman
