From: | Chul-Su Park <pcs(at)bsunsrv1(dot)kek(dot)jp> |
---|---|
To: | pgsql-general(at)postgresql(dot)org |
Cc: | Chul-Su Park <pcs(at)bsunsrv1(dot)kek(dot)jp> |
Subject: | [Q] 6.3.2->6.5.1, pg_dump with large object & backend cache... |
Date: | 1999-08-19 16:33:42 |
Message-ID: | 19990820013342.A26893@bsunsrv1.kek.jp |
Lists: | pgsql-general |
Hello,
We are running v6.3.2 with patches, and many of our tables use
'large object' columns. The main problems with this are:
(1) pg_dump cannot dump large objects. Is this still true in v6.5.1,
or is there no plan to implement dumping of blobs in the near future?
(2) If I want to clone databases from a Linux machine to a Solaris machine,
the pg_dump/large-object problem above means a lot of manual work is needed
to dump the databases and restore them on the other architecture. Is there
any utility to ease duplication (backup) of databases?
(3) To upgrade v6.3.2 databases to v6.5.1, including large objects, is
there a way to dump and restore them?
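As a workaround until pg_dump handles blobs, large objects can be pulled out one by one with the server-side lo_export() function (and loaded on the target with lo_import()). A minimal sketch follows; the table "docs" and its OID column "blob_oid" are hypothetical stand-ins for wherever your schema stores the large-object OIDs, and lo_export() writes to a path on the *server's* filesystem:

```shell
# Hedged sketch: generate the per-object export commands for the large
# objects referenced by a hypothetical user table "docs" (OID column
# "blob_oid").  Piping the generated lines into sh would run the actual
# exports via the server-side lo_export() function.
gen_lo_exports() {
    db="$1"; shift
    for oid in "$@"; do
        echo "psql -c \"SELECT lo_export($oid, '/tmp/lo_$oid')\" $db"
    done
}

# The OID list itself would normally come from something like:
#   psql -t -A -c "SELECT blob_oid FROM docs" request
gen_lo_exports request 16401 16402
```

On the target machine the exported files would be re-imported with lo_import('/tmp/lo_...') and the new OIDs written back into the referencing rows, since lo_import() assigns fresh OIDs.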
Another problem with v6.3.2 is frequent messages (errors?) about backend
cache invalidation failure -- probably posted many times -- like this:
NOTICE: SIAssignBackendId: discarding tag 2147430138
Connection database 'request' failed.
FATAL 1: Backend cache invalidation initialization failed
(1) Will simply increasing the maximum connection count from 32 to 64 in
src/include/storage/sinvaladt.h fix the problem above?
(2) If I want to stay on v6.3.2, which PATCH will FIX the problem?
(3) Is it already fixed in v6.5.1?
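For reference, the stopgap mentioned in (1) would look roughly like the following. The macro name and values are assumptions about the 6.3.2 source tree; raising the limit only adds slots in the shared invalidation segment, it does not remove the stale-slot condition that triggers "SIAssignBackendId: discarding tag":

```shell
# Stopgap sketch for a 6.3.2 source tree (an assumption, not a fix):
# raise the backend-slot limit in src/include/storage/sinvaladt.h, e.g.
#
#     #define MaxBackendId 64    /* was 32; one slot per backend */
#
# then rebuild and reinstall the server:
#     make clean && make && make install
# and restart the postmaster.  This postpones, rather than cures, the
# stale-slot problem behind "SIAssignBackendId: discarding tag".
```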
Best Regards,
C.S.Park