
From: hubert depesz lubaczewski <depesz(at)gmail(dot)com>
To: PostgreSQL General <pgsql-general(at)postgresql(dot)org>
Subject: memory leak under heavy load?
Date: 2005-11-29 16:58:45
Message-ID: 9e4684ce0511290858u70ee24abud712098f0b9a25b6@mail.gmail.com
Lists: pgsql-general

hi
i think i've encountered a bug in postgresql 8.1.
yet - i'm not really into submitting it to -bugs, as i have no way to
reliably reproduce it.

basically
i have a server with dual opterons, 4g of memory, 2gb of swap. everything
running under centos 4.2.

postgresql 8.1 compiled from sources using:
./configure \
--prefix=/home/pgdba/work \
--without-debug \
--disable-debug \
--with-pgport=5810 \
--with-tcl \
--with-perl \
--with-python \
--without-krb4 \
--without-krb5 \
--without-pam \
--without-rendezvous \
--with-openssl \
--with-readline \
--with-zlib \
--with-gnu-ld

postgresql.conf looks like this (i removed commented lines):
listen_addresses = '*'
max_connections = 250
superuser_reserved_connections = 10
password_encryption = on
shared_buffers = 50000
temp_buffers = 1000
max_prepared_transactions = 250
work_mem = 10240
maintenance_work_mem = 131072
max_fsm_pages = 500000
max_fsm_relations = 5000
fsync = off
wal_buffers = 100
commit_delay = 1000
commit_siblings = 5
checkpoint_segments = 100
effective_cache_size = 196608
random_page_cost = 1.5
default_statistics_target = 50
log_destination = 'stderr'
redirect_stderr = on
log_directory = '/home/pgdba/logs/'
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
log_rotation_age = 1440
log_rotation_size = 502400
log_min_duration_statement = 5000
log_connections = on
log_duration = off
log_line_prefix = '[%t] [%p] <%u(at)%d> '
log_statement = 'none'
stats_start_collector = on
stats_command_string = on
stats_block_level = on
stats_row_level = on
stats_reset_on_server_start = on
autovacuum = off
autovacuum_naptime = 60
autovacuum_vacuum_threshold = 1000
autovacuum_analyze_threshold = 500
autovacuum_vacuum_scale_factor = 0.4
autovacuum_analyze_scale_factor = 0.2
check_function_bodies = on
lc_messages = 'en_US.UTF-8'
lc_monetary = 'en_US.UTF-8'
lc_numeric = 'en_US.UTF-8'
lc_time = 'en_US.UTF-8'
custom_variable_classes = 'plperl'
plperl.use_strict = true

everything works nice,
but:
i ran a loop of about 400 thousand inserts in transactions - two inserts per
transaction.
in total i had nearly 200 000 transactions - all very fast, and with no (or
very little) time between them.
inserts were made to 2 distinct tables, and (maybe that's important) about
99% failed because of "unique index violation".
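to make the pattern concrete, the driving script did roughly this (a minimal sketch - table and column names here are made up for illustration, not the real schema; the key reuse is just one way to force the ~99% unique-violation rate):

```python
# sketch of the load: two inserts per transaction, ~200 000 transactions,
# with keys reused so that almost every insert hits a unique index violation.
# t1/t2 and their columns are hypothetical stand-ins for the real tables.

def make_transactions(n_txn):
    """Build the SQL text for n_txn transactions, two inserts each."""
    txns = []
    for i in range(n_txn):
        # cycle through a small key range -> mostly duplicate keys,
        # so the unique index rejects roughly 99% of the inserts
        key = i % (n_txn // 100 or 1)
        txns.append(
            "BEGIN;\n"
            f"INSERT INTO t1 (id, payload) VALUES ({key}, 'a');\n"
            f"INSERT INTO t2 (id, payload) VALUES ({key}, 'b');\n"
            "COMMIT;"
        )
    return txns

stmts = make_transactions(200000)
```

the client fired these back-to-back over a single connection, with effectively no pause between commits.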

what i saw is that the postmaster process started to "eat" memory.
it allocated *all* memory (both ram and swap), and then died.
load on the machine jumped to something around 20.

it is very strange to me, since the next such run didn't break postgres,
but then - another one did.
i am unable to replay the scenario with a 100% guarantee it will crash the
backend.
as for the inserts - there were no triggers, no rules,
not even foreign keys. encoding is utf8.

is this something that anybody else encountered? what can i do to make it
possible to fix the problem?

and yes - i know that i should use copy instead of inserts, and bigger
transactions, but i would like to have the problem fixed and not a
workaround. actually i did a workaround for now and it works - every 100
transactions i disconnect and reconnect again. this way it works.
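the workaround pattern is simple enough to sketch generically (connect() below is a stand-in for whatever the real driver call is - this is just the shape of "recycle the backend every N transactions" so any per-backend memory is released):

```python
# sketch of the workaround: close and re-open the connection every N
# transactions, so the backend process (and whatever memory it has
# accumulated) is thrown away periodically. connect is any zero-argument
# callable returning a connection-like object with a close() method.

class ReconnectEvery:
    def __init__(self, connect, every=100):
        self.connect = connect
        self.every = every
        self.count = 0
        self.conn = connect()

    def run_txn(self, work):
        """Run one transaction's worth of work, recycling the connection
        every self.every transactions."""
        self.count += 1
        if self.count % self.every == 0:
            self.conn.close()           # kill the current backend...
            self.conn = self.connect()  # ...and start a fresh one
        return work(self.conn)
```

with every=100 and 200 000 transactions this reconnects about 2 000 times, which is cheap compared to the inserts themselves.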

depesz
