pgbench out of memory error

From: Jeff Ross <jross(at)wykids(dot)org>
To: PostgreSQL <pgsql-general(at)postgresql(dot)org>
Subject: pgbench out of memory error
Date: 2010-01-05 22:49:28
Message-ID: 4B43C210.7060708@wykids.org
Lists: pgsql-general

Hi,

I'm trying to bring a new server online and I'm having trouble getting any
kind of decent performance from it. pgbench yields around 4000 tps until
scale and clients both go above 21, and then I see the following:

...
2180000 tuples done.
2190000 tuples done.
2200000 tuples done.
set primary key...
NOTICE: ALTER TABLE / ADD PRIMARY KEY will create implicit index
"pgbench_branches_pkey" for table "pgbench_branches"
NOTICE: ALTER TABLE / ADD PRIMARY KEY will create implicit index
"pgbench_tellers_pkey" for table "pgbench_tellers"
NOTICE: ALTER TABLE / ADD PRIMARY KEY will create implicit index
"pgbench_accounts_pkey" for table "pgbench_accounts"
ERROR: out of memory
DETAIL: Failed on request of size 67108864.

The primary key index never gets created, and without it the tps drops to
around 2 if I let the run continue overnight.
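For reference, the size in the DETAIL line works out to exactly 64 MB (my own arithmetic, not anything from the log):

```shell
# The failed request from the DETAIL line, converted to megabytes:
# 67108864 bytes / 1024 / 1024
echo $((67108864 / 1024 / 1024))   # -> 64
```

So the index build died trying to grab a single 64 MB chunk, well under my 240 MB maintenance_work_mem, which makes me suspect the backend was already up against some process-level memory limit.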

I'm using this script to run pgbench from another machine on the lan:

#!/bin/sh
#use this to automate pg_bench load
#scale and clients are the same number
#increment by one on each loop
MAX_SCALE=25
HOST=varley.openvistas.net
SCALE=22

while [ $SCALE -ne $MAX_SCALE ] ; do
pgbench -i -s $SCALE -h $HOST pgbench
vacuumdb --analyze -h $HOST pgbench
psql -h $HOST -c "checkpoint;" pgbench
time pgbench -h $HOST -t 2000 -c $SCALE -S pgbench
time pgbench -h $HOST -t 2000 -c $SCALE -S pgbench
time pgbench -h $HOST -t 2000 -c $SCALE -S pgbench
SCALE=$((SCALE + 1))  # POSIX arithmetic; "let" isn't guaranteed in plain sh
done

The server itself is a 2.4GHz dual-processor Xeon with 4GB of RAM running
OpenBSD -current. For disks I have 6 15k U320 SCSI disks set up as 3 sets of
RAID1. The controller is an LSI MegaRAID 320-1LP; it has 64MB of RAM onboard
and I've installed the BBU. The controller cache is set to write-through for
all three volumes because tests using dd and bonnie++ show that write-through
is twice as fast as write-back. I haven't dug into why that is.

I've used pgtune to help configure postgresql, and I'm using these values in
postgresql.conf. All other settings are at the default value.

listen_addresses = '*' # what IP address(es) to listen on;
unix_socket_directory = '/var/postgresql/' # (change requires restart)
checkpoint_completion_target = 0.7 # pgtune wizard 2009-11-17
checkpoint_segments = 8 # pgtune wizard 2009-11-17
maintenance_work_mem = 240MB # pgtune wizard 2009-12-14
effective_cache_size = 2816MB # pgtune wizard 2009-12-14
work_mem = 18MB # pgtune wizard 2009-12-14
wal_buffers = 4MB # pgtune wizard 2009-12-14
shared_buffers = 960MB # pgtune wizard 2009-12-14
max_connections = 200 # pgtune wizard 2009-12-14
#archive_mode = on # allows archiving to be done
archive_command = 'cp -i %p /wal/5432/%f < /dev/null' # command to use to archive a logfile segment
archive_timeout = 0 # force a logfile segment switch after this number of seconds; 0 disables
log_statement = 'all' # none, ddl, mod, all
log_line_prefix = '<%u%%%d> ' # special values:
datestyle = 'iso, mdy'

(I had archive_mode = on before, but I kept filling up my 5GB /wal partition,
so I turned it off. log_statement was 'none' but I set it to 'all' to try to
get some more detail about the error.)

Additionally, I have used sysctl to adjust some memory values:
kern.seminfo.semmni=4096
kern.seminfo.semmns=9082
kern.shminfo.shmall=128000
kern.shminfo.shmmax=2048000000
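As a sanity check on those values (my own arithmetic, and assuming kern.shminfo.shmmax is in bytes), the shared memory ceiling sits well above shared_buffers, so I don't think SysV shared memory is what's running out; I'm wondering if it's instead the per-process data size limit that OpenBSD enforces through login.conf:

```shell
# shmmax converted to megabytes, assuming the sysctl value is in bytes:
echo $((2048000000 / 1024 / 1024))   # -> 1953, comfortably above shared_buffers = 960MB

# Hypothetical check of the per-process data segment limit (in KB), run from
# the shell that starts postgres; on OpenBSD this comes from the login class
# (datasize-cur / datasize-max in /etc/login.conf), not from sysctl:
# ulimit -d
```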

I hope that's enough information for someone to point me in the right
direction. Even before I hit the out-of-memory error, 4000 tps seems very low
for what I think this hardware should be capable of.

Thanks,

Jeff Ross
