BUG #19078: Segfaults in tts_minimal_store_tuple() following pg_upgrade

From: PG Bug reporting form <noreply(at)postgresql(dot)org>
To: pgsql-bugs(at)lists(dot)postgresql(dot)org
Cc: yuri(at)yrz(dot)am
Subject: BUG #19078: Segfaults in tts_minimal_store_tuple() following pg_upgrade
Date: 2025-10-09 08:35:24
Message-ID: 19078-dfd62f840a2c0766@postgresql.org

The following bug has been logged on the website:

Bug reference: 19078
Logged by: Yuri Zamyatin
Email address: yuri(at)yrz(dot)am
PostgreSQL version: 18.0
Operating system: Debian 13.1
Description:

Hello. We are encountering segfaults in tts_minimal_store_tuple() after the
upgrade. The stack trace is at the end of this message.

Postgresql: PostgreSQL 18.0 (Debian 18.0-1.pgdg13+3) on x86_64-pc-linux-gnu, compiled by gcc (Debian 14.2.0-19) 14.2.0, 64-bit
Kernel: Linux 6.12.48+deb13-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.12.48-1 (2025-09-20) x86_64 GNU/Linux
OS: Debian 13.1 from deb.debian.org trixie, trixie-updates, trixie-security (latest)

The PostgreSQL client backend crashes with a segfault (signal 11),
intermittently, when executing SELECT or UPDATE queries, under the following
circumstances:

- A specific set of queries runs into the segfault. Notably, they all perform
a lookup on a partitioned table with pruning (100+ partitions).
- It occurs across multiple machines (same OS and PostgreSQL version) that
handle many connections and went through pg_upgrade.
- The interval between segfaults varies from dozens of minutes to days,
depending on the size/load/configuration of the cluster.
- It happens randomly; most of the time these queries finish successfully, so
we are unable to reproduce the error consistently.
- Some of the problematic queries run on a fixed schedule, which means each
run is more likely to fail in larger clusters.

The issue appeared after migrating from pg17 (latest in pgdg) to pg18 (pgdg)
via pg_upgradecluster --method link.
Shortly before that, the OS was upgraded from Debian 12 to Debian 13, with the
corresponding change of pgdg apt sources; the PostgreSQL 17 cluster was shut
down during this time.
Right after the cluster upgrade we updated all extensions, ran vacuumdb
--analyze-in-stages, and reindexed all text-based indexes, as expected after
the glibc/collation change.
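
For reference, a sketch of how such indexes can be enumerated (not the exact
query we ran):

    -- Candidates for REINDEX after the OS/glibc collation change:
    -- any index that contains a text-family column.
    SELECT DISTINCT i.indexrelid::regclass AS index_name
    FROM pg_index i
    JOIN pg_attribute a ON a.attrelid = i.indexrelid
    JOIN pg_type t      ON t.oid = a.atttypid
    WHERE t.typname IN ('text', 'varchar', 'bpchar');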

Segmentation faults appeared after 1-5 days.

Trying to find a workaround, we did the following:

- Disabled huge pages
- Reduced checkpoint_timeout from 60min to 5min, reduced max_wal_size
- Disabled jit
- Set io_method to sync (io_uring was much slower under our workload)
- Ran REINDEX SYSTEM in each database
- Reindexed all databases
- Ran pg_repack on tables (with their children) mentioned in the problematic
queries
- Ran pg_amcheck on each database with default parameters, no corruption was
found
- Disabled enable_hashagg for some queries (just now; see the sketch below)
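
The enable_hashagg workaround is applied per session, along these lines (a
sketch; the exact mechanism in our jobs differs):

    -- Steer the planner away from HashAggregate for one query only;
    -- it falls back to sorted aggregation (GroupAggregate).
    SET enable_hashagg = off;
    -- ... run the problematic query here ...
    RESET enable_hashagg;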

Segmentation faults still happen on the same tables, but less frequently.
On the cluster with 100+ concurrent connections, 225GB shared_buffers,
max_connections = 2000, and 256 CPUs, the number of crashes decreased from
about 30 to 8 per day.

The interval between segfaults may be related to checkpoint_timeout.
Previously that server crashed roughly every 60 minutes; now there are series
of crashes with 5-10 minute gaps between them.
We could not reproduce the crash by invoking CHECKPOINT manually.

Of the queries that crash the database when relaunched, the one below has the
simplest plan. Segfaults from it happen rarely, though, and we don't have a
core dump for it yet.
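
To make the plan easier to read, here is a schematic reconstruction of the
statement (approximate: names are taken from the plan, and the SET list is
not visible in it, so the one below is invented):

    -- Approximate shape only, reconstructed from the EXPLAIN output below;
    -- tcv_scene_datas is the partitioned side (100+ partitions).
    UPDATE tcv_scenes cs
    SET    state_id = 4   -- illustrative value; the real SET list is unknown
    FROM   tcv_scene_datas cd
    WHERE  cd.cv_scene_id = cs.id
      AND ((cs.state_id = 7
            AND cs.date_cr < now() - interval '24:00:00'
            AND cs.date_state_mo > now() - interval '00:15:00'
            AND cd.stitcher_result LIKE '%download%')
        OR (cs.state_id = 3
            AND cs.date_state_mo < now() - interval '00:05:00'));

The plan: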

> Update on tcv_scenes cs  (cost=1760.81..404518.63 rows=343 width=36) (actual time=2358.546..2386.729 rows=1.00 loops=1)
>   Buffers: shared hit=19823 read=138644 dirtied=20
>   ->  Nested Loop  (cost=1760.81..404518.63 rows=343 width=36) (actual time=2344.746..2372.927 rows=1.00 loops=1)
>         Buffers: shared hit=12241 read=138641 dirtied=1
>         ->  Bitmap Heap Scan on tcv_scenes cs  (cost=1760.39..209679.79 rows=346 width=38) (actual time=2344.280..2372.423 rows=1.00 loops=1)
>               Recheck Cond: ((state_id = 7) OR (state_id = 3))
>               Filter: (((state_id = 7) AND (date_cr < (now() - '24:00:00'::interval)) AND (date_state_mo > (now() - '00:15:00'::interval))) OR ((state_id = 3) AND (date_state_mo < (now() - '00:05:00'::interval))))
>               Rows Removed by Filter: 221134
>               Heap Blocks: exact=150638
>               Buffers: shared hit=12237 read=138641 dirtied=1
>               ->  BitmapOr  (cost=1760.39..1760.39 rows=210544 width=0) (actual time=43.601..43.603 rows=0.00 loops=1)
>                     Buffers: shared hit=218 read=22
>                     ->  Bitmap Index Scan on icv_scenes__state  (cost=0.00..1755.70 rows=210151 width=0) (actual time=34.418..34.419 rows=221112.00 loops=1)
>                           Index Cond: (state_id = 7)
>                           Index Searches: 1
>                           Buffers: shared hit=194
>                     ->  Bitmap Index Scan on icv_scenes__state  (cost=0.00..4.51 rows=393 width=0) (actual time=9.181..9.181 rows=30759.00 loops=1)
>                           Index Cond: (state_id = 3)
>                           Index Searches: 1
>                           Buffers: shared hit=24 read=22
>         ->  Append  (cost=0.42..560.73 rows=239 width=658) (actual time=0.094..0.128 rows=1.00 loops=1)
>               Buffers: shared hit=4
>               ->  Index Scan using tcv_scene_datas_0_pkey on tcv_scene_datas_0 cd_1  (cost=0.42..2.32 rows=1 width=50) (never executed)
>                     Index Cond: (cv_scene_id = cs.id)
>                     Filter: (((cs.state_id = 7) AND (cs.date_cr < (now() - '24:00:00'::interval)) AND (cs.date_state_mo > (now() - '00:15:00'::interval)) AND ((stitcher_result)::text ~~ '%download%'::text)) OR ((cs.state_id = 3) AND (cs.date_state_mo < (now() - '00:05:00'::interval))))
>                     Index Searches: 0
>               ...<100+ partitions>...
>               ->  Index Scan using tcv_scene_datas_118500000_pkey on tcv_scene_datas_118500000 cd_238  (cost=0.42..2.33 rows=1 width=1072) (actual time=0.079..0.080 rows=1.00 loops=1)
>                     Index Cond: (cv_scene_id = cs.id)
>                     Filter: (((cs.state_id = 7) AND (cs.date_cr < (now() - '24:00:00'::interval)) AND (cs.date_state_mo > (now() - '00:15:00'::interval)) AND ((stitcher_result)::text ~~ '%download%'::text)) OR ((cs.state_id = 3) AND (cs.date_state_mo < (now() - '00:05:00'::interval))))
>                     Index Searches: 1
>                     Buffers: shared hit=4
>               ->  Seq Scan on tcv_scene_datas_119000000 cd_239  (cost=0.00..0.00 rows=1 width=50) (never executed)
>                     Filter: ((cv_scene_id = cs.id) AND (((cs.state_id = 7) AND (cs.date_cr < (now() - '24:00:00'::interval)) AND (cs.date_state_mo > (now() - '00:15:00'::interval)) AND ((stitcher_result)::text ~~ '%download%'::text)) OR ((cs.state_id = 3) AND (cs.date_state_mo < (now() - '00:05:00'::interval)))))
> Planning:
>   Buffers: shared hit=15775
> Planning Time: 57.800 ms
> Trigger for constraint tcv_scenes_new_state_id_fkey: time=0.965 calls=1
> Execution Time: 2395.941 ms
> (982 rows)

Segfaults occur more frequently from queries with complex plans (many levels
of aggregation, subqueries, and window functions); the stack trace at the end
of this message is from one such query. We could not find a simple
reproduction for it.
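
For illustration (invented table and column names, not the failing query),
the crashing frames correspond to a window aggregate spooling rows from a
hash-aggregated subquery, roughly this shape:

    -- WindowAgg pulling tuples from a HashAggregate input, matching
    -- frames #2-#7 of the core dump below.
    SELECT shop_id,
           sum(bad_cnt) OVER (ORDER BY report_id) AS running_bad
    FROM (
        SELECT report_id, shop_id, max(bad_cnt) AS bad_cnt
        FROM reports
        GROUP BY report_id, shop_id   -- may be planned as HashAggregate
    ) t;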

Overridden in postgresql.conf for that cluster:

> effective_cache_size = 560GB
> shared_buffers = 225GB
> temp_buffers = 128MB
> work_mem = 2GB
> maintenance_work_mem = 512MB
> vacuum_buffer_usage_limit = 128MB
> max_connections = 2000
> max_parallel_workers_per_gather = 8
> max_parallel_workers = 16
> max_parallel_maintenance_workers = 8
> max_locks_per_transaction = 128
> huge_pages = off
> io_method = sync
> file_copy_method = clone
> effective_io_concurrency = 512
> random_page_cost = 1.0
> temp_file_limit = 100GB
> wal_level = minimal
> max_wal_senders = 0
> wal_buffers = 128MB
> default_statistics_target = 1000
> checkpoint_timeout = 5min
> min_wal_size = 3GB
> max_wal_size = 3GB

Server log:

> 2025-10-08 10:36:24 UTC LOG: 00000: client backend (PID 2380761) was terminated by signal 11: Segmentation fault
> 2025-10-08 10:36:24 UTC DETAIL: Failed process was running: <query>
> 2025-10-08 10:36:24 UTC LOCATION: LogChildExit, postmaster.c:2853
> 2025-10-08 10:36:24 UTC LOG: 00000: terminating any other active server processes

Dmesg:

> [126364.743906] postgres[2380761]: segfault at 1b ip 0000555fe855f1c1 sp 00007ffe304155a0 error 4 in postgres[3531c1,555fe82f0000+5f3000] likely on CPU 122 (core 58, socket 1)
> [126364.743931] Code: c9 31 d2 4c 89 63 48 66 89 4b 34 89 c1 49 83 ec 08 66 89 43 04 83 c9 04 66 89 53 06 c7 43 30 ff ff ff ff c7 43 68 00 00 00 00 <41> 8b 74 24 08 45 84 ed 0f 45 c1 4c 89 63 60 8d 56 08 66 89 43 04

Core (note that the faulting address 0x1b in dmesg matches mtup=0x1b in frame
#0, i.e. an invalid MinimalTuple pointer was dereferenced):

> #0  tts_minimal_store_tuple (slot=0x55601c765bb0, mtup=0x1b, shouldFree=false) at ./build/../src/backend/executor/execTuples.c:697
>         mslot = 0x55601c765bb0
>         mslot = <optimized out>
> #1  ExecStoreMinimalTuple (mtup=0x1b, slot=slot(at)entry=0x55601c765bb0, shouldFree=shouldFree(at)entry=false) at ./build/../src/backend/executor/execTuples.c:1648
>         __func__ = "ExecStoreMinimalTuple"
>         __errno_location = <optimized out>
> #2  0x0000555fe8566ec2 in agg_retrieve_hash_table_in_memory (aggstate=aggstate(at)entry=0x55601c7567d0) at ./build/../src/include/executor/executor.h:176
>         hashslot = 0x55601c765bb0
>         hashtable = 0x55601c182ac8
>         i = <optimized out>
>         econtext = 0x55601c756f00
>         peragg = 0x55601c765198
>         pergroup = <optimized out>
>         entry = 0x55601c182e48
>         firstSlot = 0x55601c763e48
>         result = <optimized out>
>         perhash = 0x55601c764e50
> #3  0x0000555fe8567ac8 in agg_retrieve_hash_table (aggstate=<optimized out>) at ./build/../src/backend/executor/nodeAgg.c:2841
>         result = 0x0
>         result = <optimized out>
> #4  ExecAgg (pstate=0x55601c7567d0) at ./build/../src/backend/executor/nodeAgg.c:2261
>         node = 0x55601c7567d0
>         result = 0x0
> #5  0x0000555fe858c959 in ExecProcNode (node=0x55601c7567d0) at ./build/../src/include/executor/executor.h:315
>         No locals.
> #6  spool_tuples (winstate=winstate(at)entry=0x55601c7561b8, pos=pos(at)entry=57) at ./build/../src/backend/executor/nodeWindowAgg.c:1326
>         node = 0x55601eb9add8
>         outerPlan = 0x55601c7567d0
>         outerslot = <optimized out>
>         oldcontext = 0x55601b1a5fb0
> #7  0x0000555fe858cb20 in window_gettupleslot (winobj=winobj(at)entry=0x55601c76b028, pos=57, slot=slot(at)entry=0x55601c766a20) at ./build/../src/backend/executor/nodeWindowAgg.c:3145
>         winstate = 0x55601c7561b8
>         oldcontext = <optimized out>
>         __func__ = "window_gettupleslot"
> #8  0x0000555fe858ec94 in eval_windowaggregates (winstate=0x55601c7561b8) at ./build/../src/backend/executor/nodeWindowAgg.c:936
> ret = <optimized out>
> aggregatedupto_nonrestarted = 0
> econtext = 0x55601c7566c8
> agg_row_slot = <optimized out>
> peraggstate = <optimized out>
> numaggs = <optimized out>
> wfuncno = <optimized out>
> numaggs_restart = <optimized out>
> i = <optimized out>
> oldContext = <optimized out>
> agg_winobj = 0x55601c76b028
> temp_slot = 0x55601c766b28
> peraggstate = <optimized out>
> wfuncno = <optimized out>
> numaggs = <optimized out>
> numaggs_restart = <optimized out>
> i = <optimized out>
> aggregatedupto_nonrestarted = <optimized out>
> oldContext = <optimized out>
> econtext = <optimized out>
> agg_winobj = <optimized out>
> agg_row_slot = <optimized out>
> temp_slot = <optimized out>
> __func__ = "eval_windowaggregates"
> next_tuple = <optimized out>
> __errno_location = <optimized out>
> __errno_location = <optimized out>
> ok = <optimized out>
> ret = <optimized out>
> result = <optimized out>
> isnull = <optimized out>
> #9 ExecWindowAgg (pstate=0x55601c7561b8) at
./build/../src/backend/executor/nodeWindowAgg.c:2300
> winstate = 0x55601c7561b8
> slot = <optimized out>
> econtext = <optimized out>
> i = <optimized out>
> numfuncs = <optimized out>
> __func__ = "ExecWindowAgg"
> #10 0x0000555fe856476c in ExecProcNode (node=0x55601c7561b8) at
./build/../src/include/executor/executor.h:315
> No locals.
> #11 fetch_input_tuple (aggstate=aggstate(at)entry=0x55601c755a88) at
./build/../src/backend/executor/nodeAgg.c:563
> slot = <optimized out>
> #12 0x0000555fe8567ca9 in agg_retrieve_direct (aggstate=0x55601c755a88) at
./build/../src/backend/executor/nodeAgg.c:2450
> econtext = 0x55601c7560b0
> firstSlot = 0x55601c76b070
> numGroupingSets = 1
> node = 0x55601eb98630
> tmpcontext = <optimized out>
> peragg = 0x55601c76c218
> outerslot = <optimized out>
> nextSetSize = <optimized out>
> pergroups = 0x55601c76d628
> result = <optimized out>
> hasGroupingSets = false
> currentSet = <optimized out>
> numReset = 1
> i = <optimized out>
> node = <optimized out>
> econtext = <optimized out>
> tmpcontext = <optimized out>
> peragg = <optimized out>
> pergroups = <optimized out>
> outerslot = <optimized out>
> firstSlot = <optimized out>
> result = <optimized out>
> hasGroupingSets = <optimized out>
> numGroupingSets = <optimized out>
> currentSet = <optimized out>
> nextSetSize = <optimized out>
> numReset = <optimized out>
> i = <optimized out>
> #13 ExecAgg (pstate=0x55601c755a88) at
./build/../src/backend/executor/nodeAgg.c:2265
> node = 0x55601c755a88
> result = 0x0
> #14 0x0000555fe85877aa in ExecProcNode (node=<optimized out>) at
./build/../src/include/executor/executor.h:315
> No locals.
> #15 ExecScanSubPlan (node=0x55601f0e9610, econtext=0x55601ef91550,
isNull=0x55601ef8f395) at ./build/../src/backend/executor/nodeSubplan.c:275
> subplan = <optimized out>
> oldcontext = 0x55601f0e9610
> slot = <optimized out>
> astate = 0x0
> planstate = <optimized out>
> subLinkType = EXPR_SUBLINK
> result = 0
> found = false
> l = <optimized out>
> subplan = <optimized out>
> planstate = <optimized out>
> subLinkType = <optimized out>
> oldcontext = <optimized out>
> slot = <optimized out>
> result = <optimized out>
> found = <optimized out>
> l = <optimized out>
> astate = <optimized out>
> __func__ = "ExecScanSubPlan"
> l__state = <optimized out>
> paramid = <optimized out>
> tdesc = <error reading variable tdesc (Cannot access memory at
address 0x0)>
> rowresult = <optimized out>
> rownull = <optimized out>
> col = <optimized out>
> plst = <optimized out>
> __errno_location = <optimized out>
> __errno_location = <optimized out>
> plst__state = <optimized out>
> paramid = <optimized out>
> prmdata = <optimized out>
> dvalue = <optimized out>
> disnull = <optimized out>
> __errno_location = <optimized out>
> plst__state = <optimized out>
> paramid = <optimized out>
> prmdata = <optimized out>
> l__state = <optimized out>
> paramid = <optimized out>
> prmdata = <optimized out>
> #16 ExecSubPlan (node=node(at)entry=0x55601ef91550,
econtext=econtext(at)entry=0x55601ef58798, isNull=0x55601ef8f395) at
./build/../src/backend/executor/nodeSubplan.c:89
> subplan = <optimized out>
> estate = 0x55601b1a60a8
> dir = ForwardScanDirection
> retval = <optimized out>
> __func__ = "ExecSubPlan"
> #17 0x0000555fe854d169 in ExecEvalSubPlan (state=<optimized out>,
op=<optimized out>, econtext=0x55601ef58798) at
./build/../src/backend/executor/execExprInterp.c:5316
> sstate = 0x55601ef91550
> sstate = <optimized out>
> #18 ExecInterpExpr (state=0x55601ef8f390, econtext=0x55601ef58798,
isnull=<optimized out>) at
./build/../src/backend/executor/execExprInterp.c:2001
> op = <optimized out>
> resultslot = 0x55601ef8f180
> innerslot = <optimized out>
> outerslot = <optimized out>
> scanslot = <optimized out>
> oldslot = <optimized out>
> newslot = <optimized out>
> dispatch_table = {0x555fe854d9ce <ExecInterpExpr+4366>,
0x555fe854d9a3 <ExecInterpExpr+4323>, 0x555fe854d986 <ExecInterpExpr+4294>,
> 0x555fe854d969 <ExecInterpExpr+4265>, 0x555fe854d94c
<ExecInterpExpr+4236>, 0x555fe854d92f <ExecInterpExpr+4207>, 0x555fe854d90f
<ExecInterpExpr+4175>,
> 0x555fe854d8e0 <ExecInterpExpr+4128>, 0x555fe854d8b1
<ExecInterpExpr+4081>, 0x555fe854d882 <ExecInterpExpr+4034>, 0x555fe854d853
<ExecInterpExpr+3987>,
> 0x555fe854d821 <ExecInterpExpr+3937>, 0x555fe854d805
<ExecInterpExpr+3909>, 0x555fe854d7e9 <ExecInterpExpr+3881>, 0x555fe854d7cd
<ExecInterpExpr+3853>,
> 0x555fe854db2b <ExecInterpExpr+4715>, 0x555fe854db0c
<ExecInterpExpr+4684>, 0x555fe854daf4 <ExecInterpExpr+4660>, 0x555fe854dabf
<ExecInterpExpr+4607>,
> 0x555fe854da8a <ExecInterpExpr+4554>, 0x555fe854da55
<ExecInterpExpr+4501>, 0x555fe854da20 <ExecInterpExpr+4448>, 0x555fe854d9e8
<ExecInterpExpr+4392>,
> 0x555fe854dbca <ExecInterpExpr+4874>, 0x555fe854db91
<ExecInterpExpr+4817>, 0x555fe854db72 <ExecInterpExpr+4786>, 0x555fe854db47
<ExecInterpExpr+4743>,
> 0x555fe854d749 <ExecInterpExpr+3721>, 0x555fe854d729
<ExecInterpExpr+3689>, 0x555fe854d702 <ExecInterpExpr+3650>, 0x555fe854d6b0
<ExecInterpExpr+3568>,
> 0x555fe854de15 <ExecInterpExpr+5461>, 0x555fe854c985
<ExecInterpExpr+197>, 0x555fe854c990 <ExecInterpExpr+208>, 0x555fe854dddb
<ExecInterpExpr+5403>,
> 0x555fe854c94c <ExecInterpExpr+140>, 0x555fe854c957
<ExecInterpExpr+151>, 0x555fe854ddac <ExecInterpExpr+5356>, 0x555fe854dd92
<ExecInterpExpr+5330>,
> 0x555fe854dd5a <ExecInterpExpr+5274>, 0x555fe854dd47
<ExecInterpExpr+5255>, 0x555fe854de96 <ExecInterpExpr+5590>, 0x555fe854de76
<ExecInterpExpr+5558>,
> 0x555fe854de4c <ExecInterpExpr+5516>, 0x555fe854de2d
<ExecInterpExpr+5485>, 0x555fe854decd <ExecInterpExpr+5645>, 0x555fe854deb6
<ExecInterpExpr+5622>,
> 0x555fe854d7b9 <ExecInterpExpr+3833>, 0x555fe854d790
<ExecInterpExpr+3792>, 0x555fe854dcff <ExecInterpExpr+5183>, 0x555fe854dcd6
<ExecInterpExpr+5142>,
> 0x555fe854dcad <ExecInterpExpr+5101>, 0x555fe854dc71
<ExecInterpExpr+5041>, 0x555fe854dc59 <ExecInterpExpr+5017>, 0x555fe854dc43
<ExecInterpExpr+4995>,
> 0x555fe854dc17 <ExecInterpExpr+4951>, 0x555fe854dbf2
<ExecInterpExpr+4914>, 0x555fe854df3b <ExecInterpExpr+5755>, 0x555fe854dd28
<ExecInterpExpr+5224>,
> 0x555fe854def2 <ExecInterpExpr+5682>, 0x555fe854d69b
<ExecInterpExpr+3547>, 0x555fe854d663 <ExecInterpExpr+3491>, 0x555fe854d62b
<ExecInterpExpr+3435>,
> 0x555fe854d5a0 <ExecInterpExpr+3296>, 0x555fe854d58b
<ExecInterpExpr+3275>, 0x555fe833c321 <ExecInterpExpr.cold>, 0x555fe854d47f
<ExecInterpExpr+3007>,
> 0x555fe854d44b <ExecInterpExpr+2955>, 0x555fe854d436
<ExecInterpExpr+2934>, 0x555fe854d494 <ExecInterpExpr+3028>, 0x555fe854d404
<ExecInterpExpr+2884>,
> 0x555fe854d3cd <ExecInterpExpr+2829>, 0x555fe854d382
<ExecInterpExpr+2754>, 0x555fe854d355 <ExecInterpExpr+2709>, 0x555fe854d33d
<ExecInterpExpr+2685>,
> 0x555fe854d36a <ExecInterpExpr+2730>, 0x555fe854d325
<ExecInterpExpr+2661>, 0x555fe854d307 <ExecInterpExpr+2631>, 0x555fe854d2fe
<ExecInterpExpr+2622>,
> 0x555fe854c932 <ExecInterpExpr+114>, 0x555fe854c936
<ExecInterpExpr+118>, 0x555fe854d4f1 <ExecInterpExpr+3121>, 0x555fe854d4d1
<ExecInterpExpr+3089>,
> 0x555fe854d558 <ExecInterpExpr+3224>, 0x555fe854d543
<ExecInterpExpr+3203>, 0x555fe854d56f <ExecInterpExpr+3247>, 0x555fe854d2bf
<ExecInterpExpr+2559>,
> 0x555fe854d28c <ExecInterpExpr+2508>, 0x555fe854d257
<ExecInterpExpr+2455>, 0x555fe854d224 <ExecInterpExpr+2404>, 0x555fe854d2e6
<ExecInterpExpr+2598>,
> 0x555fe854d52e <ExecInterpExpr+3182>, 0x555fe854d516
<ExecInterpExpr+3158>, 0x555fe854d20f <ExecInterpExpr+2383>, 0x555fe854d1f7
<ExecInterpExpr+2359>,
> 0x555fe854d1e2 <ExecInterpExpr+2338>, 0x555fe854d1bf
<ExecInterpExpr+2303>, 0x555fe854d1a7 <ExecInterpExpr+2279>, 0x555fe854d125
<ExecInterpExpr+2149>,
> 0x555fe854d0fa <ExecInterpExpr+2106>, 0x555fe854d0e5
<ExecInterpExpr+2085>, 0x555fe854d0b2 <ExecInterpExpr+2034>, 0x555fe854d16e
<ExecInterpExpr+2222>,
> 0x555fe854d13a <ExecInterpExpr+2170>, 0x555fe854d186
<ExecInterpExpr+2246>, 0x555fe854c9c0 <ExecInterpExpr+256>, 0x555fe854d072
<ExecInterpExpr+1970>,
> 0x555fe854d051 <ExecInterpExpr+1937>, 0x555fe854d013
<ExecInterpExpr+1875>, 0x555fe854cfee <ExecInterpExpr+1838>, 0x555fe854cf2a
<ExecInterpExpr+1642>,
> 0x555fe854ce6f <ExecInterpExpr+1455>, 0x555fe854cdbe
<ExecInterpExpr+1278>, 0x555fe854ccb9 <ExecInterpExpr+1017>, 0x555fe854cbbf
<ExecInterpExpr+767>,
> 0x555fe854cab8 <ExecInterpExpr+504>, 0x555fe854ca98
<ExecInterpExpr+472>, 0x555fe854ca78 <ExecInterpExpr+440>, 0x555fe854ca48
<ExecInterpExpr+392>,
> 0x555fe854cba7 <ExecInterpExpr+743>, 0x555fe833c330
<ExecInterpExpr-2164112>}
> #19 0x0000555fe85664cf in ExecEvalExprNoReturn (state=0x55601ef8f390,
econtext=0x55601ef58798) at ./build/../src/include/executor/executor.h:419
> retDatum = <optimized out>
> retDatum = <optimized out>
> #20 ExecEvalExprNoReturnSwitchContext (state=0x55601ef8f390,
econtext=0x55601ef58798) at ./build/../src/include/executor/executor.h:460
> oldContext = 0x55601b1a5fb0
> oldContext = <optimized out>
> #21 ExecProject (projInfo=0x55601ef8f388) at
./build/../src/include/executor/executor.h:492
> econtext = 0x55601ef58798
> state = 0x55601ef8f390
> slot = 0x55601ef8f180
> #22 project_aggregates (aggstate=<optimized out>) at
./build/../src/backend/executor/nodeAgg.c:1383
> econtext = <optimized out>
> #23 project_aggregates (aggstate=<optimized out>) at
./build/../src/backend/executor/nodeAgg.c:1370
> econtext = <optimized out>
> #24 0x0000555fe8567a79 in agg_retrieve_direct (aggstate=0x55601ef556c8) at
./build/../src/backend/executor/nodeAgg.c:2613
> econtext = 0x55601ef58798
> firstSlot = 0x55601ef8ef78
> numGroupingSets = 1
> node = <optimized out>
> tmpcontext = <optimized out>
> peragg = 0x55601ef8f8e0
> outerslot = <optimized out>
> nextSetSize = <optimized out>
> pergroups = 0x55601ef8b9a0
> result = <optimized out>
> hasGroupingSets = false
> currentSet = <optimized out>
> numReset = <optimized out>
> i = <optimized out>
> node = <optimized out>
> econtext = <optimized out>
> tmpcontext = <optimized out>
> peragg = <optimized out>
> pergroups = <optimized out>
> outerslot = <optimized out>
> firstSlot = <optimized out>
> result = <optimized out>
> hasGroupingSets = <optimized out>
> numGroupingSets = <optimized out>
> currentSet = <optimized out>
> nextSetSize = <optimized out>
> numReset = <optimized out>
> i = <optimized out>
> #25 ExecAgg (pstate=0x55601ef556c8) at
./build/../src/backend/executor/nodeAgg.c:2265
> node = 0x55601ef556c8
> result = 0x0
> #26 0x0000555fe855c23d in ExecScanFetch (node=<optimized out>,
epqstate=<optimized out>, accessMtd=<optimized out>, recheckMtd=<optimized
out>)
> at ./build/../src/include/executor/execScan.h:126
> No locals.
> #27 ExecScanExtended (node=<optimized out>, accessMtd=0x555fe8588d50
<SubqueryNext>, recheckMtd=0x555fe8588d20 <SubqueryRecheck>, epqstate=0x0,
qual=0x0,
> projInfo=0x55601ef9d680) at
./build/../src/include/executor/execScan.h:187
> slot = <optimized out>
> econtext = 0x55601ef58470
> econtext = <optimized out>
> slot = <optimized out>
> #28 ExecScan (node=0x55601ef58368, accessMtd=0x555fe8588d50
<SubqueryNext>, recheckMtd=0x555fe8588d20 <SubqueryRecheck>)
> at ./build/../src/backend/executor/execScan.c:59
> epqstate = 0x0
> qual = 0x0
> projInfo = 0x55601ef9d680
> #29 0x0000555fe8583f0e in ExecProcNode (node=0x55601ef58368) at
./build/../src/include/executor/executor.h:315
> No locals.
> #30 ExecNestLoop (pstate=<optimized out>) at
./build/../src/backend/executor/nodeNestloop.c:159
> node = <optimized out>
> nl = 0x55601b1224c8
> innerPlan = 0x55601ef58368
> outerPlan = <optimized out>
> outerTupleSlot = <optimized out>
> innerTupleSlot = <optimized out>
> joinqual = <optimized out>
> otherqual = <optimized out>
> econtext = 0x55601eda5b60
> lc = <optimized out>
> #31 0x0000555fe8586ce6 in ExecProcNode (node=0x55601eda5a58) at
./build/../src/include/executor/executor.h:315
> No locals.
> #32 ExecSort (pstate=0x55601eda5850) at
./build/../src/backend/executor/nodeSort.c:149
> plannode = <optimized out>
> outerNode = 0x55601eda5a58
> tupDesc = <optimized out>
> tuplesortopts = <optimized out>
> node = 0x55601eda5850
> estate = 0x55601b1a60a8
> dir = ForwardScanDirection
> tuplesortstate = 0x55601b15dfa8
> slot = <optimized out>
> #33 0x0000555fe856476c in ExecProcNode (node=0x55601eda5850) at
./build/../src/include/executor/executor.h:315
> No locals.
> #34 fetch_input_tuple (aggstate=aggstate(at)entry=0x55601eda5130) at
./build/../src/backend/executor/nodeAgg.c:563
> slot = <optimized out>
> #35 0x0000555fe8567ca9 in agg_retrieve_direct (aggstate=0x55601eda5130) at
./build/../src/backend/executor/nodeAgg.c:2450
> econtext = 0x55601eda5748
> firstSlot = 0x55601efa0970
> numGroupingSets = 1
> node = 0x55601b458fb8
> tmpcontext = <optimized out>
> peragg = 0x55601efa1f40
> outerslot = <optimized out>
> nextSetSize = <optimized out>
> pergroups = 0x55601efa2148
> result = <optimized out>
> hasGroupingSets = false
> currentSet = <optimized out>
> numReset = 1
> i = <optimized out>
> node = <optimized out>
> econtext = <optimized out>
> tmpcontext = <optimized out>
> peragg = <optimized out>
> pergroups = <optimized out>
> outerslot = <optimized out>
> firstSlot = <optimized out>
> result = <optimized out>
> hasGroupingSets = <optimized out>
> numGroupingSets = <optimized out>
> currentSet = <optimized out>
> nextSetSize = <optimized out>
> numReset = <optimized out>
> i = <optimized out>
> #36 ExecAgg (pstate=0x55601eda5130) at
./build/../src/backend/executor/nodeAgg.c:2265
> node = 0x55601eda5130
> result = 0x0
> #37 0x0000555fe8579bc9 in ExecProcNode (node=0x55601eda5130) at
./build/../src/include/executor/executor.h:315
> No locals.
> #38 ExecLimit (pstate=0x55601eda4e20) at
./build/../src/backend/executor/nodeLimit.c:95
> node = 0x55601eda4e20
> econtext = 0x55601eda5028
> direction = <optimized out>
> slot = <optimized out>
> outerPlan = 0x55601eda5130
> __func__ = "ExecLimit"
> #39 0x0000555fe855191b in ExecProcNode (node=0x55601eda4e20) at
./build/../src/include/executor/executor.h:315
> No locals.
> #40 ExecutePlan (queryDesc=0x55601b1a9f18, operation=CMD_SELECT,
sendTuples=true, numberTuples=0, direction=<optimized out>,
dest=0x55601aed55a0)
> at ./build/../src/backend/executor/execMain.c:1697
> estate = 0x55601b1a60a8
> use_parallel_mode = <optimized out>
> slot = <optimized out>
> planstate = 0x55601eda4e20
> current_tuple_count = 0
> estate = <optimized out>
> planstate = <optimized out>
> use_parallel_mode = <optimized out>
> slot = <optimized out>
> current_tuple_count = <optimized out>
> #41 standard_ExecutorRun (queryDesc=0x55601b1a9f18, direction=<optimized
out>, count=0) at ./build/../src/backend/executor/execMain.c:366
> estate = 0x55601b1a60a8
> operation = CMD_SELECT
> dest = 0x55601aed55a0
> sendTuples = <optimized out>
> oldcontext = 0x55601b0f7980
> #42 0x0000555fe872c2a7 in PortalRunSelect
(portal=portal(at)entry=0x55601afc2718, forward=forward(at)entry=true, count=0,
count(at)entry=9223372036854775807,
> dest=dest(at)entry=0x55601aed55a0) at
./build/../src/backend/tcop/pquery.c:921
> queryDesc = 0x55601b1a9f18
> direction = <optimized out>
> nprocessed = <optimized out>
> __func__ = "PortalRunSelect"
> #43 0x0000555fe872d8a0 in PortalRun (portal=portal(at)entry=0x55601afc2718,
count=9223372036854775807, isTopLevel=isTopLevel(at)entry=true,
dest=dest(at)entry=0x55601aed55a0,
> altdest=altdest(at)entry=0x55601aed55a0, qc=qc(at)entry=0x7ffe304161c0) at
./build/../src/backend/tcop/pquery.c:765
> _save_exception_stack = 0x7ffe304162a0
> _save_context_stack = 0x7ffe30416280
> _local_sigjmp_buf = {{__jmpbuf = {3, -100083351355759034,
93871257954072, 93871256982944, 0, 0, -100083352486123962,
-6061881521657190842},
> __mask_was_saved = 0, __saved_mask = {__val = {93870411175012,
1759917036, 832786, 140729708011496, 5232754935419077376, 140729708011568,
93870410174331,
> 0, 93870411648949, 93871262790800, 52352, 93871256981664,
93870411676182, 140729708011568, 3, 140729708011568}}}}
> _do_rethrow = <optimized out>
> result = <optimized out>
> nprocessed = <optimized out>
> saveTopTransactionResourceOwner = 0x55601af25b28
> saveTopTransactionContext = 0x55601afd83e0
> saveActivePortal = 0x0
> saveResourceOwner = 0x55601af25b28
> savePortalContext = 0x0
> saveMemoryContext = 0x55601aed50a0
> __func__ = "PortalRun"
> #44 0x0000555fe872a65b in exec_execute_message (portal_name=0x55601aed5198
"", max_rows=<optimized out>) at ./build/../src/backend/tcop/postgres.c:2272
> portal = 0x55601afc2718
> sourceText = 0x55601b6f9160 "-- NO KILL
\nselect\n\n\tt.*\n\t\n\nfrom\n\n\t(select\t\n\t\treport_id,\n\t\tshop_id,\t\t\t\t\n\t\t\n\t\tmax(uncalc_cnt)
as uncalc_cnt,\n\t\tmax(bad_cnt) as bad_cnt,\n\t\tjsonb_agg(row_to_json(t.*)
order by ordering_path) as kpis,\n\t\tm"...
> prepStmtName = 0x555fe88f7d3f "<unnamed>"
> was_logged = false
> cmdtaglen = 6
> dest = DestRemoteExecute
> completed = <optimized out>
> qc = {commandTag = CMDTAG_UNKNOWN, nprocessed = 0}
> portalParams = 0x55601b0f7a78
> save_log_statement_stats = false
> is_xact_command = false
> msec_str =
"0\371\003\000\000\000\000\000\360cA0\376\177\000\000\000\000\000\000\000\000\000\000\265\217\212\350_U\000"
> params_data = {portalName = 0x55601afc6100 "", params =
0x55601b0f7a78}
> params_errcxt = {previous = 0x0, callback = 0x555fe85e7c30
<ParamsErrorCallback>, arg = 0x7ffe304161d0}
> receiver = 0x55601aed55a0
> execute_is_fetch = false
> cmdtagname = <optimized out>
> lc = <optimized out>
> dest = <optimized out>
> receiver = <optimized out>
> portal = <optimized out>
> completed = <optimized out>
> qc = <optimized out>
> sourceText = <optimized out>
> prepStmtName = <optimized out>
> portalParams = <error reading variable portalParams (Cannot access
memory at address 0x0)>
> save_log_statement_stats = <optimized out>
> is_xact_command = <optimized out>
> execute_is_fetch = <optimized out>
> was_logged = <optimized out>
> msec_str = <optimized out>
> params_data = <optimized out>
> params_errcxt = <optimized out>
> cmdtagname = <optimized out>
> cmdtaglen = <optimized out>
> lc = <optimized out>
> __func__ = "exec_execute_message"
> __errno_location = <optimized out>
> lc__state = <optimized out>
> stmt = <optimized out>
> lc__state = <optimized out>
> stmt = <optimized out>
> __errno_location = <optimized out>
> __errno_location = <optimized out>
> __errno_location = <optimized out>
> __errno_location = <optimized out>
>

--
Best wishes,
Yuri Zamyatin
