Re: possible memory leak in VACUUM ANALYZE

From: Andres Freund <andres(at)anarazel(dot)de>
To: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
Cc: PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: possible memory leak in VACUUM ANALYZE
Date: 2023-02-10 20:18:39
Message-ID: 20230210201839.qygmv7y2afbtl6jy@awork3.anarazel.de
Lists: pgsql-hackers

Hi,

On 2023-02-10 21:09:06 +0100, Pavel Stehule wrote:
> Just a small note - I executed VACUUM ANALYZE on one customer's database,
> and I had to cancel it after a few hours, because it was using more than
> 20GB of RAM (almost all physical RAM).

Just to make sure: You're certain this was an actual memory leak, not just
vacuum ending up having referenced all of shared_buffers? Unless you use huge
pages, RSS increases over time as a process touches more and more pages in
shared memory. Of course that couldn't explain rising above shared_buffers +
overhead.
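
One quick way to tell the two apart on Linux (assuming /proc/<pid>/smaps_rollup
is available, i.e. kernel 4.14+) is to split the backend's RSS into shared and
private portions: only the private part can keep growing because of a leak in
backend-local memory. A rough sketch in Python, with the backend PID taken from
pg_stat_activity:

import sys

# Split a backend's RSS into shared and private portions using
# /proc/<pid>/smaps_rollup (Linux 4.14+).  The shared part is mostly
# shared_buffers pages the backend has merely touched; the private part
# is backend-local memory, which is where an actual leak would show up.
def rss_breakdown(pid):
    fields = {}
    with open("/proc/%d/smaps_rollup" % pid) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2 and parts[0].endswith(":") and parts[1].isdigit():
                fields[parts[0].rstrip(":")] = int(parts[1])  # values are in kB
    shared = fields.get("Shared_Clean", 0) + fields.get("Shared_Dirty", 0)
    private = fields.get("Private_Clean", 0) + fields.get("Private_Dirty", 0)
    return fields.get("Rss", 0), shared, private

if __name__ == "__main__":
    # pid of the VACUUM backend, e.g. from pg_stat_activity
    rss, shared, private = rss_breakdown(int(sys.argv[1]))
    print("rss=%d kB shared=%d kB private=%d kB" % (rss, shared, private))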

> The memory leak is probably not too big. This database is a little bit
> unusual. This one database has more than 1 800 000 tables and the same
> number of indexes.

If you have 1.8 million tables in a single database, what you saw might just
have been the size of the relation and catalog caches.
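
For reference, the cache footprint of a session on such a database can be
gauged from pg_backend_memory_contexts (PostgreSQL 14+); CacheMemoryContext
and the per-index "index info" contexts are the ones that grow with the number
of relations touched. A rough sketch, assuming the psycopg2 driver and a
placeholder connection string:

import psycopg2

# Show the largest memory contexts of the current session (PostgreSQL 14+).
# With ~1.8 million tables and indexes, CacheMemoryContext and the
# "index info" contexts dominate once most relations have been touched.
conn = psycopg2.connect("dbname=postgres")  # placeholder connection string
with conn.cursor() as cur:
    cur.execute("""
        SELECT name, count(*) AS contexts,
               pg_size_pretty(sum(total_bytes)) AS total
        FROM pg_backend_memory_contexts
        GROUP BY name
        ORDER BY sum(total_bytes) DESC
        LIMIT 10
    """)
    for name, contexts, total in cur.fetchall():
        print(name, contexts, total)
    # For another backend (e.g. the one running VACUUM), its contexts can be
    # dumped to the server log instead:
    #   SELECT pg_log_backend_memory_contexts(<pid>);
conn.close()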

Greetings,

Andres Freund
