From: Florian Weimer <fweimer(at)redhat(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: 9.3.2 server creates hundreds of thousands of temporary files
Date: 2014-02-03 13:44:01
Message-ID: 52EF9D21.9090100@redhat.com
Lists: pgsql-general
On 01/22/2014 06:56 PM, Tom Lane wrote:
> Florian Weimer <fweimer(at)redhat(dot)com> writes:
>> I've got a query which causes PostgreSQL to create hundreds of thousands
>> of temporary files, many of them empty. The process also needs a lot of
>> memory. I suspect this is due to bookkeeping for those files.
>
>> The query looks like this:
>> [ huge hash join ]
>
>> I track this down to a lower-than-usual setting of work_mem, to 1MB,
>> after the upgrade to 9.3.
>
> The system is trying to do the join with only 1MB of workspace, so
> yes, you end up with lots and lots of small temporary files.
>
>> Is this a bug?
>
> No.
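The quoted explanation can be illustrated with a rough sketch of hash-join batching arithmetic. This is an assumed simplification for illustration only: it takes the batch count as the inner relation size divided by work_mem, rounded up to a power of two; PostgreSQL's actual sizing logic (in nodeHash.c) also accounts for tuple headers, buckets, and skew optimization.

```python
import math

def estimated_batches(inner_rel_bytes, work_mem_bytes):
    """Rough sketch: when the inner relation of a hash join does not fit
    in work_mem, the join is split into batches, with the batch count
    rounded up to a power of two.  (Simplified; not the real planner code.)"""
    if inner_rel_bytes <= work_mem_bytes:
        return 1
    raw = inner_rel_bytes / work_mem_bytes
    return 2 ** math.ceil(math.log2(raw))

# With work_mem = 1MB and a hypothetical 100GB inner relation, the join
# needs on the order of 2^17 batches, and each batch can spill a
# temporary file per side of the join.
print(estimated_batches(100 * 1024**3, 1 * 1024**2))  # 131072
```

Under these assumptions it is easy to see how a low work_mem setting drives the temporary-file count into the hundreds of thousands.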
It's still quite surprising that this temporarily needs multiple
gigabytes of RAM, much more than what's required in terms of work_mem
to make this query run quickly.
Is there an easy way to check if there is a memory leak in the file
descriptor switching code confined to the current transaction, or
something like that? It seems a bit unlikely that the per-file
bookkeeping overhead is larger than 10 KB.
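The back-of-envelope arithmetic behind that 10 KB figure can be made explicit. The numbers here are illustrative assumptions loosely matching the report (a few GB of extra memory across a few hundred thousand files), not measurements:

```python
# Illustrative arithmetic only: if roughly 3 GB of extra RAM is consumed
# while ~300,000 temporary files exist, the implied per-file bookkeeping
# cost comes out at about 10 KB per file.
total_overhead_bytes = 3 * 1024**3      # assumed extra memory use
n_temp_files = 300_000                  # assumed temp-file count
per_file = total_overhead_bytes / n_temp_files
print(f"{per_file / 1024:.1f} KB per file")  # 10.5 KB per file
```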
--
Florian Weimer / Red Hat Product Security Team