> pg_dump: lost synchronization with server: got message type "4", length
> pg_dump: SQL command to dump the contents of table "aggr_max_hour"
> failed: PQendcopy() failed.
> pg_dump: Error message from server: lost synchronization with server:
> got message type "4", length 858863113
> pg_dump: The command was: COPY messungen.aggr_max_hour (id, von, wert,
> counts) TO stdout;
>>> However, the error occurs only on that kind of aggregation table.
>>> There is a cron job keeping the tables up to date; it runs every 10
>>> minutes and does deletes and inserts on the table. Could this
>>> somehow block the dump process? Normally it should not?
>> It's hard to see how another backend would have anything to do with
>> this, unless perhaps the error is dependent on a particular data value
>> that is sometimes present in the table and sometimes not. It looks to
>> me like either libpq or the backend is miscounting the number of data
>> bytes in the COPY datastream. Would it be possible for you to use a
>> packet sniffer to capture the communication between pg_dump and the
>> backend? If we could look at exactly what's going over the wire, it
>> would help to pin down the blame.
> Well, I've never used a packet sniffer, but if anyone tells me what to
> capture, I'd be glad to.
In order to capture the pg_dump traffic with tcpdump we changed the
pg_dump command from
pg_dump DB > ...
to
pg_dump -h 127.0.0.2 DB > ...
Since that change the dump has completed without any errors. With the
local (Unix-socket) connection, pg_dump failed about 50% of the time.
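For reference, a tcpdump invocation along these lines should capture the
session once pg_dump goes over TCP (a rough sketch only; it assumes the
server listens on the default port 5432, and the capture file name is
just an example):

  # capture full packets of the PostgreSQL session on the loopback interface
  tcpdump -i lo -s 0 -w pg_dump_session.pcap host 127.0.0.2 and port 5432

  # in another shell, run the dump over TCP while the capture is running
  pg_dump -h 127.0.0.2 DB > dump.sql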
Anyway, we can live with that solution. But I'd still be curious to know
what the problem with local connections is. Is there a way to capture the
traffic of local socket connections?
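One thing we could still try is tracing the system calls of the pg_dump
process while it talks over the Unix-domain socket, e.g. with strace (a
rough sketch; <pid> and the output file name are only placeholders):

  # attach to the running pg_dump process and log the data it reads from
  # and writes to the socket (-s raises the amount of data printed per call)
  strace -f -e trace=read,write -s 8192 -o pg_dump.strace -p <pid>

  # alternatively, start pg_dump under strace from the beginning
  strace -f -e trace=read,write -s 8192 -o pg_dump.strace pg_dump DB > dump.sql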
Dr. Stefan Holzheu
BITOeK Wiss. Sekretariat