From: Thomas Munro <thomas(dot)munro(at)gmail(dot)com>
To: Kyotaro Horiguchi <horikyota(dot)ntt(at)gmail(dot)com>
Cc: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: PHJ file leak.
Date: 2019-11-11 22:18:39
Message-ID: CA+hUKG+EWbjjxVHNdBZ5AYd-JKJnyZ7RiJ+eSPaQsa1vzjthTQ@mail.gmail.com
Lists: pgsql-hackers
On Tue, Nov 12, 2019 at 1:24 AM Kyotaro Horiguchi
<horikyota(dot)ntt(at)gmail(dot)com> wrote:
> Hello. While looking at a patch, I found that PHJ sometimes complains
> about file leaks if accompanied by LIMIT.
Oops.
> Repro is very simple:
>
> create table t as (select a, a as b from generate_series(0, 999999) a);
> analyze t;
> select t.a from t join t t2 on (t.a = t2.a) limit 1;
>
> Once in several (or a few dozen) executions, the last query
> complains as follows.
>
> WARNING: temporary file leak: File 15 still referenced
> WARNING: temporary file leak: File 17 still referenced
Ack. Reproduced here.
> This is using PHJ, and the leaked file was a shared tuplestore for
> outer tuples, which was opened by sts_parallel_scan_next() called from
> ExecParallelHashJoinOuterGetTuple(). It seems to me that
> ExecHashTableDestroy is forgetting to release the shared tuplestore
> accessors. Please find the attached.
Thanks for the patch! Yeah, this seems correct, but I'd like to think
about it some more before committing. I'm going to be a bit tied up
with travel, so that might be next week.