Using: PostgreSQL 6.5.3 on i686-pc-linux-gnu
Platform: Linux (Red Hat 6.0)
Three errors. Errors 1 and 2 show up when I run a dump:
pg_dump medias > medias.pgdump280301
pqWait() -- connection not open
PQendcopy: resetting connection
SQL query to dump the contents of Table 'dossier' did not execute correctly.
After we read all the table contents from the backend, PQendcopy() failed.
Explanation from backend: 'pqWait() -- connection not open'
The query was: 'COPY "dossier" TO stdout;'
NOTICE: Rel dossier: Uninitialized page 28 - fixing
NOTICE: Rel dossier: Uninitialized page 29 - fixing
NOTICE: BlowawayRelationBuffers(dossier, 28): block 28 is dirty (private 0, last 0, global 0)
pqReadData() -- backend closed the channel unexpectedly.
This probably means the backend terminated abnormally
before or while processing the request.
We have lost the connection to the backend, so further processing is impossible. Terminating.
Error 3 happens on my website (using PHP):
Warning: PostgreSQL query failed: ERROR: Tuple is too big: size 9968 in
/xxxxxxx/enregistrer3.php on line 45
Errors 1 & 2 seem to be OK, because right now I can do a pg_dump without the error, but I've searched the mailing list archives about my remaining question (error 3), and this is what I came up with:
I have three options -- which one would you recommend?
- change the default block size in include/config.h (and recompile -- hmm...)
- use the large object interface (what is that?)
- upgrade to 7.1 (I'm afraid of losing my data -- I'm not a Linux guru)
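[For context, a rough sketch of what the large object route looks like: instead of storing the big text in the row (which hits the ~8K block limit behind the "Tuple is too big" error), the row stores an OID that points to a large object managed through functions like lo_import and lo_export. The table and file paths below are made up for illustration; this is untested against 6.5.3.]

```sql
-- Hypothetical table: keep an OID reference instead of the large text itself
CREATE TABLE dossier_docs (
    id       serial PRIMARY KEY,
    contents oid          -- points to a large object
);

-- Import a file on the server into a large object, storing its OID
INSERT INTO dossier_docs (contents)
VALUES (lo_import('/tmp/dossier_28.txt'));

-- Export the large object back out to a file
SELECT lo_export(contents, '/tmp/dossier_28_copy.txt')
FROM dossier_docs
WHERE id = 1;
```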
patrick, montreal, canada
pgsql-novice by date