Re: PostgreSQL 9.2 - pg_dump out of memory when backing up a database with 300000000 large objects

From: Magnus Hagander <magnus(at)hagander(dot)net>
To: Sergey Klochkov <klochkov(at)iqbuzz(dot)ru>
Cc: pgsql-admin <pgsql-admin(at)postgresql(dot)org>
Subject: Re: PostgreSQL 9.2 - pg_dump out of memory when backing up a database with 300000000 large objects
Date: 2013-10-01 12:49:33
Message-ID: CABUevEz3E8zyUVgGw99hbpVNW-z8CP0DsE2y0mNvGWrN8g+fVQ@mail.gmail.com
Lists: pgsql-admin pgsql-sql

On Tue, Oct 1, 2013 at 11:07 AM, Sergey Klochkov <klochkov(at)iqbuzz(dot)ru> wrote:
> Hello All,
>
> While trying to back up a database of relatively modest size (160 GB), I
> ran into the following issue:
>
> When I run
> $ pg_dump -f /path/to/mydb.dmp -C -Z 9 mydb
>
> File /path/to/mydb.dmp does not appear (yes, I've checked permissions and
> so on). pg_dump just begins to consume memory until it eats up all
> available RAM (96 GB total on the server, >64 GB available) and is killed
> by the OOM killer.
>
> According to pg_stat_activity, pg_dump runs the following query
>
> SELECT oid, (SELECT rolname FROM pg_catalog.pg_roles WHERE oid = lomowner)
> AS rolname, lomacl FROM pg_largeobject_metadata
>
> until it is killed.
>
> strace shows that pg_dump is constantly reading a large amount of data from
> a UNIX socket. I suspect that it is the result of the above query.
>
> There are >300000000 large objects in the database. Please don't ask me why.
>
> I tried googling this, and found mentions of pg_dump being killed by the
> OOM killer, but I failed to find anything related to such a huge number
> of large objects.
>
> Is there any method of working around this issue?

I think this problem comes from the fact that pg_dump treats each
large object as its own item. See getBlobs(), which allocates a
BlobInfo struct for each LO (and a DumpableObject if there are any,
but that's just one).
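
For scale, a rough back-of-the-envelope figure (the ~150 bytes of
client-side state per large object is an assumption, not something I've
measured): 300,000,000 objects at that rate works out to roughly 45 GB in
pg_dump alone, before a single byte is written, and that comes on top of
libpq holding the full query result in memory. That would be consistent
with exhausting the >64 GB described above.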

I assume the query (from that file):
SELECT oid, lomacl FROM pg_largeobject_metadata

returns 300000000 rows, which are then looped over?
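
If you want to confirm the scale before trying again, counting the
metadata rows is memory-safe on the client (a plain count, nothing
pg_dump-specific):

$ psql -At -d mydb -c 'SELECT count(*) FROM pg_largeobject_metadata'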

I ran into a similar issue a few years ago with a client using a
32-bit version of pg_dump, and worked around it by moving to 64-bit.
Unfortunately, I did not have time to look into the underlying issue.
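
Beyond moving to 64-bit (or adding RAM), one possible workaround is to
keep the blobs out of pg_dump entirely and move them separately. A minimal
sketch, with illustrative paths and no batching (with 300 million objects
you would want to batch this rather than start one psql per blob):

$ mkdir -p /backup/blobs
$ psql -At -d mydb -c 'SELECT oid FROM pg_largeobject_metadata' |
  while read loid; do
      psql -d mydb -c "\lo_export $loid /backup/blobs/$loid"
  done

The relational part can then be dumped with -n or -t (pg_dump skips large
objects when a schema or table list is given), and the blobs re-imported
later, e.g. with the server-side lo_import(filename, oid) if the original
OIDs need to be preserved (note that lo_import reads paths on the server's
filesystem).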

--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
