From: "Euler Taveira" <euler(at)eulerto(dot)com>
To: "Tomas Vondra" <tomas(dot)vondra(at)enterprisedb(dot)com>, "Thomas Munro" <thomas(dot)munro(at)gmail(dot)com>, "Michael Paquier" <michael(at)paquier(dot)xyz>
Cc: "Euler Taveira" <euler(dot)taveira(at)2ndquadrant(dot)com>, "Anastasia Lubennikova" <a(dot)lubennikova(at)postgrespro(dot)ru>, "Tomas Vondra" <tomas(dot)vondra(at)2ndquadrant(dot)com>, "PostgreSQL Hackers" <pgsql-hackers(at)lists(dot)postgresql(dot)org>, "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Subject: Re: cleanup temporary files after crash
Date: 2021-03-18 21:00:14
Message-ID: 34226249-c5ba-4780-a1bb-f3433d19716e@www.fastmail.com
Lists: pgsql-hackers
On Thu, Mar 18, 2021, at 5:51 PM, Tomas Vondra wrote:
> OK. Can you prepare a patch with the proposed test approach?
I'm on it.
> FWIW I can reproduce this on a 32-bit ARM system (rpi4), where 500 rows
> simply does not use a temp file, and with 1000 rows it works fine. On
> the x86_64 the temp file is created even with 500 rows. So there clearly
> is some platform dependency, not sure if it's due to 32/64 bits,
> alignment or something else. In any case, the 500 rows seems to be just
> on the threshold.
>
> We need to do both - stop using the timing and increase the number of
> rows, to consistently get temp files.
Yeah.
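For context, the standard way to make a query spill to a temp file deterministically, without relying on timing, is to clamp work_mem to its 64kB floor and sort more rows than that can hold. A minimal sketch (the table name and the 5000-row figure are illustrative, not the patch's actual values):

```sql
-- Clamp work_mem to its minimum so even a small sort spills to disk.
SET work_mem = '64kB';

-- Comfortably more rows than 64kB of sort memory can hold on any
-- platform, 32- or 64-bit, regardless of alignment.
CREATE TABLE spill_test AS
    SELECT generate_series(1, 5000) AS id;

-- This sort should reliably create a file under base/pgsql_tmp.
SELECT count(*) FROM (SELECT id FROM spill_test ORDER BY id) s;
```

With the threshold forced by work_mem rather than guessed at via row count alone, the test no longer sits "just on the threshold" as the 500-row case did.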
--
Euler Taveira
EDB https://www.enterprisedb.com/