I think I've solved this problem. It involved the vacuumdb command
rather than the large insert. I found this out when a nightly vacuumdb
script reported the problem even though no records had been loaded at all.
My postgresql.conf file had statement_timeout set to 10 seconds, and the
vacuum of the large database required more than 10 seconds. It seems that this left
postmaster and/or the particular database in a state where any SQL
against that database gave the same error response about being canceled
by the user.
I'm guessing postgresql had left some internal state set for the
database and was trying to complete the vacuum first. Anyway, I set
statement_timeout = 0 and the problem hasn't recurred.
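For reference, the relevant postgresql.conf setting looks like the fragment below. The 10000 ms value shown as the "previous" setting is an assumption matching the symptom described; 0 disables the timeout entirely:

```
# postgresql.conf
# statement_timeout = 10000   # old value (milliseconds): cancel any statement after 10s
statement_timeout = 0         # 0 = no statement timeout
```

An alternative to disabling the timeout globally is to override it only for the nightly vacuum via libpq's PGOPTIONS environment variable, e.g. PGOPTIONS="-c statement_timeout=0" vacuumdb mydb, which leaves the safety net in place for ordinary queries.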
"Paul B. Anderson" <paul(at)pnlassociates(dot)com> writes:
>> Ulimit says
>> # su - postgres
>> -bash-2.05b$ ulimit -a
>> core file size (blocks, -c) 0
>> data seg size (kbytes, -d) unlimited
>> file size (blocks, -f) unlimited
>> max locked memory (kbytes, -l) 4
>> max memory size (kbytes, -m) unlimited
>> open files (-n) 1024
>> pipe size (512 bytes, -p) 8
>> stack size (kbytes, -s) 10240
>> cpu time (seconds, -t) unlimited
>> max user processes (-u) 3072
>> virtual memory (kbytes, -v) unlimited
>> None of the actual limits seems to be a problem. I can't seem to change
>> max locked memory using ulimit but I'll try increasing stack size.
>> Could that be involved?
No, the 'file size' and 'cpu time' limits are the ones I'd expect to
cause this sort of behavior.
One possible gotcha is that depending on how you start the postmaster,
it might be living under different ulimit settings than what you see in an interactive shell.
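One way to see the limits a running process actually has (on Linux) is to read /proc/<pid>/limits, which reflects what the kernel enforces regardless of what your shell's ulimit reports. The sketch below uses the shell's own PID ($$) as a stand-in; on a real server you would substitute the postmaster's PID (e.g. the first line of postmaster.pid in the data directory):

```shell
#!/bin/sh
# Sketch: compare the limits 'ulimit' reports in this shell with what the
# kernel has recorded for a specific process. $$ is used for illustration;
# replace it with the postmaster's PID on a live server.
echo "ulimit in this shell:"
ulimit -t   # CPU time (seconds)
ulimit -f   # file size (blocks)

echo "kernel-recorded limits for PID $$:"
grep -E 'Max (cpu time|file size)' "/proc/$$/limits"
```

If the two disagree, the daemon was started under a different environment (init script, cron, su) than your interactive login.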
regards, tom lane