| From: | Philip Crotwell <crotwell(at)seis(dot)sc(dot)edu> |
|---|---|
| To: | pgsql-general(at)postgresql(dot)org |
| Subject: | pgdump, large objects and 7.0->7.1 |
| Date: | 2001-03-16 22:00:51 |
| Message-ID: | Pine.GSO.4.10.10103161643130.439-100000@tigger.seis.sc.edu |
| Lists: | pgsql-general |
Hi
I am having problems with large objects in 7.0.3: high disk usage, slow
access and deletes of large objects, and occasional selects that hang with
the backend process going to 98% of the CPU and staying there. Having
read that there are a lot of large object improvements in 7.1, I was
thinking of trying the beta to see if these problems disappear.
But 7.0->7.1 requires a pg_dumpall/restore, which wouldn't be a problem,
except that pg_dumpall in 7.0 doesn't dump large objects. :(
So, 3 questions that basically boil down to "What is the best way to move
large objects from 7.0 to 7.1?"
1) Can I use the 7.1 pg_dumpall to dump a 7.0.3 database? The docs say no,
but it seems worth a try.
2) What does "large objects... must be handled manually" in the 7.0 pg_dump
docs mean? Does it mean there is a way to manually copy the
xinvXXXX files? I have ~23000 of them at present.
3) Do I need to preserve OIDs with pg_dump when using large objects?
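(For question 2, one manual route I've been considering — this is only a
sketch, not something from the docs: it assumes psql's `\lo_export` and
`\lo_import` backslash commands are available in the 7.x clients, and the
`mydb` database name and `/tmp/lo_*` paths are just placeholders.)

```shell
# Sketch: export each large object from the 7.0 database to a flat file,
# assuming LOs show up in pg_class as xinvXXXX entries. "mydb" and the
# /tmp paths are illustrative placeholders.
psql -t -A -c "SELECT relname FROM pg_class WHERE relname LIKE 'xinv%'" mydb |
while read rel; do
  oid=${rel#xinv}                   # strip the xinv prefix to get the LO's OID
  [ -n "$oid" ] || continue
  psql -c "\lo_export $oid /tmp/lo_$oid" mydb   # write the LO to /tmp/lo_<oid>
done
# Re-importing on the 7.1 side (\lo_import /tmp/lo_NNNN) would assign a
# *new* OID to each object, so any table columns that reference the old
# OIDs would need to be remapped — which is why question 3 matters.
```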
thanks,
Philip
PS It would be great if something about this could be added to the 7.1
docs. I would guess that others will hit this same problem when 7.1 is
released.