KaiGai Kohei <kaigai(at)ak(dot)jp(dot)nec(dot)com> wrote:
> > I don't think this is necessarily a good idea. We might decide to treat
> > both things separately in the future and it having them represented
> > separately in the dump would prove useful.
> I agree. From a design perspective, the single-section approach is
> simpler than the dual-section one, but its change set is larger.
When I tested restoring a custom dump with pg_restore, --clean together
with --single-transaction fails with the new dump format, because it
always calls lo_unlink() even if the large object doesn't exist. The
failure comes from dumpBlobItem:
! dumpBlobItem(Archive *AH, BlobInfo *binfo)
!     appendPQExpBuffer(dquery, "SELECT lo_unlink(%s);\n", binfo->dobj.name);
The query used in DropBlobIfExists() avoids this error -- should we use it here, too?
| SELECT lo_unlink(oid) FROM pg_largeobject_metadata WHERE oid = %s;
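To illustrate the difference, here is a small SQL sketch (the OID 16385
is just a placeholder for a large object that does not exist):

```sql
-- Raises ERROR: large object 16385 does not exist
-- when the OID is absent:
SELECT lo_unlink(16385);

-- Returns zero rows instead of erroring when the OID is absent
-- (the form used by DropBlobIfExists):
SELECT lo_unlink(oid) FROM pg_largeobject_metadata WHERE oid = 16385;
```

Under --single-transaction, the first form aborts the whole restore; the
second is a no-op when there is nothing to drop.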
BTW, the --clean option is ambiguous when combined with --data-only.
Restoring large objects fails for the above reason if the previous
objects don't exist, but table data are restored *without* truncating
the existing data. Would normal users expect TRUNCATE-before-load in
the --clean & --data-only case?
Present behaviors are:
  Table data    - Appended (--clean is ignored).
  Large objects - Ends with an error if the object doesn't exist.
IMO, ideal behaviors are:
  Table data    - Truncate existing data and load the new data.
  Large objects - Work like MERGE (or REPLACE/UPSERT).
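One possible sketch of a merge-like restore for a large object, in SQL
(the OID 16385 is again a placeholder; lo_create() with an explicit OID
is what pg_restore already uses to recreate blobs):

```sql
-- Drop the large object if it exists; silently does nothing otherwise:
SELECT lo_unlink(oid) FROM pg_largeobject_metadata WHERE oid = 16385;

-- Recreate it with the same OID, then the data section can rewrite
-- its contents:
SELECT lo_create(16385);
```

This would make the blob restore idempotent whether or not the object
already exists in the target database.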
NTT Open Source Software Center