PostgreSQL 9.2 - pg_dump out of memory when backing up a database with 300000000 large objects

From: Sergey Klochkov <klochkov(at)iqbuzz(dot)ru>
To: pgsql-admin(at)postgresql(dot)org
Subject: PostgreSQL 9.2 - pg_dump out of memory when backing up a database with 300000000 large objects
Date: 2013-10-01 09:07:29
Message-ID: 524A90D1.5000109@iqbuzz.ru
Lists: pgsql-admin pgsql-sql

Hello All,

While trying to back up a database of relatively modest size (160 GB), I
ran into the following issue:

When I run
$ pg_dump -f /path/to/mydb.dmp -C -Z 9 mydb

The file /path/to/mydb.dmp never appears (yes, I've checked permissions
and so on). pg_dump just keeps consuming memory until it eats up all
available RAM (96 GB total on the server, >64 GB free) and is killed by
the OOM killer.
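
If it helps, the OOM kill can be confirmed from the kernel log; on CentOS 6
something along these lines shows it (exact messages vary by kernel):

$ dmesg | grep -i 'out of memory'
$ grep -i 'killed process' /var/log/messages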

According to pg_stat_activity, pg_dump runs the following query

SELECT oid, (SELECT rolname FROM pg_catalog.pg_roles WHERE oid =
lomowner) AS rolname, lomacl FROM pg_largeobject_metadata

until it is killed.

strace shows that pg_dump is constantly reading a large amount of data
from a UNIX socket. I suspect this is the result of the above query, which
returns one row per large object, i.e. >300000000 rows.

There are >300000000 large objects in the database. Please don't ask me why.
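
For reference, the object count and the on-disk size of the large object
data can be double-checked with something like this (pg_largeobject_metadata
has one row per large object):

$ psql -c "SELECT count(*) FROM pg_largeobject_metadata" mydb
$ psql -c "SELECT pg_size_pretty(pg_total_relation_size('pg_largeobject'))" mydb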

I tried googling this and found mentions of pg_dump being killed by the
OOM killer, but nothing related to a huge number of large objects.

Is there any method of working around this issue?
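
One direction I have been considering, but have not tested, is to keep the
large objects out of pg_dump entirely (pg_dump skips blobs when a schema
filter such as -n is given) and export them separately in batches. A rough
sketch, assuming the tables live in schema public and /backup/lo exists
(exporting one file per OID like this will be slow at this scale):

$ pg_dump -Z 9 -n public -f /backup/mydb_public.dmp mydb
$ psql -At -c "SELECT oid FROM pg_largeobject_metadata ORDER BY oid" mydb | \
  while read loid; do
      psql -q -c "\lo_export $loid /backup/lo/$loid" mydb
  done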

Thanks in advance.

OS: CentOS 6
PostgreSQL version: 9.2.1
RAM: 96 GB

PostgreSQL configuration:

listen_addresses = '*'                # what IP address(es) to listen on
port = 5432                           # (change requires restart)
max_connections = 500                 # (change requires restart)
shared_buffers = 16GB                 # min 128kB
temp_buffers = 64MB                   # min 800kB
work_mem = 512MB                      # min 64kB
maintenance_work_mem = 30000MB        # min 1MB
checkpoint_segments = 70              # in logfile segments, min 1, 16MB each
effective_cache_size = 50000MB
logging_collector = on                # Enable capturing of stderr and csvlog
log_directory = 'pg_log'              # directory where log files are written
log_filename = 'postgresql-%a.log'    # log file name pattern
log_truncate_on_rotation = on         # If on, truncate an existing log file of the same name
log_rotation_age = 1d                 # Automatic rotation of logfiles will happen after that time
log_rotation_size = 0                 # Automatic rotation of logfiles will happen after that much log output
log_min_duration_statement = 5000
log_line_prefix = '%t'                # special values:
autovacuum = on                       # Enable autovacuum subprocess? 'on' requires track_counts
log_autovacuum_min_duration = 0       # -1 disables, 0 logs all actions and their durations
autovacuum_max_workers = 5            # max number of autovacuum subprocesses
autovacuum_naptime = 5s               # time between autovacuum runs
autovacuum_vacuum_threshold = 25      # min number of row updates before vacuum
autovacuum_vacuum_scale_factor = 0.1  # fraction of table size before vacuum
autovacuum_vacuum_cost_delay = 7ms    # default vacuum cost delay for autovacuum
autovacuum_vacuum_cost_limit = 1500   # default vacuum cost limit for autovacuum
datestyle = 'iso, dmy'
lc_monetary = 'ru_RU.UTF-8'           # locale for monetary formatting
lc_numeric = 'ru_RU.UTF-8'            # locale for number formatting
lc_time = 'ru_RU.UTF-8'               # locale for time formatting
default_text_search_config = 'pg_catalog.russian'

--
Sergey Klochkov
klochkov(at)iqbuzz(dot)ru
