I've investigated the problem further, and here are my findings. I believe
this is enough information to allow you to implement a permanent fix.
To recap: we get an unusual error message from Postgres when storing a lot
of data (hundreds of megabytes) on a machine that is running Windows 2000
and doesn't have all that much memory (384MB).
What happens is this: apparently, when both the client and the server of a
TCP connection are on the same machine, Windows (at least, this version of
it) will allocate as much memory as it needs to hold the data in transit.
Since my client is writing much faster than Postgres can store the data,
that buffer grows until memory runs out. At that point, instead of
returning the normal "out of memory" error code, Windows returns an unusual
and non-standard one ("No buffer space available"). To summarize:
- The buffer actually used by Windows is much larger than the one specified
by TCP/IP - I'm guessing that, for such a local connection, it is simply
made as large as it needs to be to hold all the data.
- If there is too much data, a non-standard error code is returned.
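The observation above - that the OS-reported buffer size says little about what Windows actually buffers on a loopback connection - can be illustrated with a small, hypothetical Python sketch (not part of the original report). It only shows how to read the send-buffer size the socket API claims; the actual loopback buffering behavior described above is invisible at this level:

```python
import socket

def reported_sndbuf():
    """Return the send-buffer size the OS reports for a fresh TCP socket.

    Note: on a local (loopback) connection, the OS may buffer far more
    data than this value suggests, as described in the report above.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        return s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
    finally:
        s.close()

print(reported_sndbuf())
```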
I worked around the problem by simply waiting a few seconds when I get the
"no buffer space available" message back from Postgres, and then retrying
the command. This gives Postgres time to do its work, and allows the buffer
to shrink to acceptable proportions. Obviously it would be appreciated if
Postgres actually did that for me ;-)
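The wait-and-retry workaround described above can be sketched in Python. This is a minimal, hypothetical illustration, not the reporter's actual code: `execute` stands in for whatever call sends the INSERT, and the error-message marker and retry counts are assumptions:

```python
import time

def execute_with_retry(execute, *args, retries=5, delay=5.0,
                       marker="No buffer space available"):
    """Retry `execute` when the send fails with a buffer-space error.

    Sleeping between attempts gives the server time to drain the
    loopback buffer, as described in the report above.
    """
    for attempt in range(retries):
        try:
            return execute(*args)
        except OSError as err:
            # Re-raise unrelated errors, and give up after the last attempt.
            if marker not in str(err) or attempt == retries - 1:
                raise
            time.sleep(delay)  # let Postgres catch up, then retry
```

In practice the delay and retry count would be tuned to how quickly the server drains its backlog.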
To reproduce, I simply allocate most of the available memory on the machine
(being sure to force it to be committed by writing to it), and then send a
large number of large inserts. This causes the buffer to overflow, thus
triggering the weird error.
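The "force it to be committed by writing to it" step matters because merely reserving address space does not consume physical memory; the pages must be touched. A small Python sketch of that step (illustrative only; the original reproduction code is not shown in the report):

```python
def allocate_committed(n_bytes, page=4096):
    """Allocate `n_bytes` and touch every page so the OS commits them.

    Writing one byte per page forces the OS to back the allocation with
    real memory (or pagefile), rather than just reserving address space.
    """
    buf = bytearray(n_bytes)          # reserve the address space
    for i in range(0, n_bytes, page):
        buf[i] = 0xFF                 # touch each page to commit it
    return buf
```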
I hope this helps in solving this issue. If you have any further questions,
please let me know.
NL-2316 XG, Leiden
Phone: +31 (0)71 52 40 835
Telefax: +31 (0)71 51 43 277
----- Original Message -----
From: "John R Pierce" <pierce(at)hogranch(dot)com>
To: "H. Guijt" <hg(at)terma(dot)com>; <pgsql-bugs(at)postgresql(dot)org>
Sent: Wednesday, March 08, 2006 10:35 PM
Subject: Re: [BUGS] BUG #2305: "No bufferspace available" error on large
>> Upon issuing the insert statement I receive the following error message:
>> "could not send data to server: No buffer space available
>> I suspect that this happens because we are inserting a lot of data: the
>> first BYTEA column is about 1.8MB, while the second is about 0.5MB.
>> - The same operation completes without problems when running on other
>> machines (which are all running Windows XP and the same Postgres
> are the various tuning parameters in $PGDATA/postgresql.conf set the same
> on these various systems ?