From: Martin Povolny <martin(dot)povolny(at)solnet(dot)cz>
To: tgl(at)sss(dot)pgh(dot)pa(dot)us
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: [ADMIN] large database: problems with pg_dump and pg_restore
Date: 2010-10-27 09:00:00
Message-ID: E1PB1rW-0005Bs-De@ns.solnet.cz
Lists: pgsql-admin
On 27.10.2010, tgl(at)sss(dot)pgh(dot)pa(dot)us wrote:
> Martin Povolny <martin(dot)povolny(at)solnet(dot)cz> writes:
>> I had 5 databases, 4 dumped ok, the 5th, the largest failed dumping: I
>> was unable to
>> make a dump in the default 'tar' format. I got this message:
>> pg_dump: [tar archiver] archive member too large for tar format
>
> This is expected: tar format has a documented limit of 8GB per table.
> (BTW, tar is not the "default" nor the recommended format, in part
> because of that limitation. The custom format is preferred unless
> you really *need* to manipulate the dump files with "tar" for some
> reason.)
Ok, I get it. Don't use the 'tar' format. I will not.
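Next time I will go straight to the custom format; the invocation would be along
these lines (a sketch only -- it assumes a reachable server and an existing target
database, and was not actually run for this mail):

```shell
# Custom-format dump (-F c): compressed by default, no 8 GB per-table limit.
# Database and file names are just the ones from my setup.
pg_dump --format=custom --file=archiv5.dump archiv5

# Sanity-check the table of contents, then restore into another database.
pg_restore --list archiv5.dump
pg_restore --dbname=archiv5_copy archiv5.dump
```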
As to hitting the limit of 8 GB per table -- I have one really large table.
But if I dump that table separately, I get:
pg_dump --verbose --host localhost --username bb --create --format tar --file archiv5-process.dump --table process archiv5
-rw-r--r-- 1 root root 4879763968 2010-10-27 10:15 archiv5-process.dump
In other words, I am sure I did not hit the 8 GB per table limit, though I am
over 4 GB per table.
The 'process' table is the largest and is also the one where restore fails in both
cases (tar format and custom format).
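For completeness, the table's size on disk can also be queried directly (a sketch;
pg_total_relation_size counts indexes and TOAST as well -- this needs a running
server, so it is untested here):

```shell
# On-disk size of the 'process' table, indexes and TOAST included
# (needs a reachable server; database and table names are from my setup).
psql -d archiv5 -c "SELECT pg_size_pretty(pg_total_relation_size('process'));"
```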
>
>> for the bb.dump in the 'custom' format:
>> pg_restore: [vlastní archivář] unexpected end of file
>
> Hm, that's weird. I can't think of any explanation other than the dump
> file somehow getting corrupted. Do you get sane-looking output if you
> run "pg_restore -l bb.dump"?
Sure, I ran pg_restore -l into a file and did not get any errors.
Then I commented out the already-restored entries and tried restoring the tables
after the table 'process'.
But I got the same error message :-(
Like this:
$ /usr/lib/postgresql/8.4/bin/pg_restore -l bb.dump > bb.list
# then edit bb.list, commenting out lines up to and including table 'process',
# saving as bb.list-post-process
$ /usr/lib/postgresql/8.4/bin/pg_restore --verbose --use-list bb.list-post-process bb.dump > bb-list-restore.sql
pg_restore: restoring data for table "process_internet"
pg_restore: [custom archiver] unexpected end of file
pg_restore: *** aborted because of error
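For the record, the list editing can be scripted instead of done by hand; a sketch
(the TOC entry names below are illustrative, not from my actual bb.list, and the
real list would come from `pg_restore -l bb.dump`):

```shell
# Build a sample TOC list; entry names are illustrative stand-ins.
printf '%s\n' \
  '101; 1259 16384 TABLE DATA public address bb' \
  '102; 1259 16385 TABLE DATA public process bb' \
  '103; 1259 16386 TABLE DATA public process_internet bb' > bb.list

# Comment out (';' prefix) every entry up to and including table 'process';
# pg_restore skips lines starting with ';'.
sed '1,/TABLE DATA public process bb/s/^/;/' bb.list > bb.list-post-process
cat bb.list-post-process
```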
As to splitting the dump as suggested earlier in this thread -- I am sure my system
can work with files over 4 GB, and I don't understand how splitting the output of
pg_dump would prevent pg_dump from failing. But I can try that too.
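If I do try it, I understand the idea is to pipe the dump through split(1) and
re-join with cat before restoring; a sketch using a stand-in file instead of a real
dump (the pg_dump/pg_restore lines in the comments are what I would actually run,
untested here):

```shell
# Stand-in for a large dump file; with a real database this would be
#   pg_dump -F c archiv5 | split -b 1G - bb.dump.part-
head -c 1048576 /dev/zero > bb.dump            # 1 MiB fake dump
split -b 262144 bb.dump bb.dump.part-          # 256 KiB pieces -> 4 parts

# Re-join the pieces and verify they match the original byte-for-byte;
# the real restore would be:  cat bb.dump.part-* | pg_restore -d archiv5_copy
cat bb.dump.part-* > bb.dump.joined
cmp bb.dump bb.dump.joined && echo "pieces reassemble cleanly"
```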
Also, I did not try the '-F plain' dump format.
I stopped using the plain format in the past because I was getting output as if I
had used --inserts although I did not, and I did not see any pg_dump option that
would force the use of COPY for dumping data. But that was several versions of
Postgres back and I have not tried it since.
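If I retry '-F plain', checking which style the dump used is simple enough; a
sketch over a stand-in fragment (a real file would come from `pg_dump -F plain`):

```shell
# Stand-in plain-format fragment; a real one would come from
#   pg_dump -F plain archiv5 > archiv5.sql
cat > archiv5.sql <<'EOF'
COPY public.process (id, name) FROM stdin;
1	foo
\.
EOF

# Count COPY blocks vs. per-row INSERTs (the --inserts style):
grep -c '^COPY ' archiv5.sql
grep -c '^INSERT ' archiv5.sql || true
```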
Many thanks for your time and tips!
--
Mgr. Martin Povolný, soLNet, s.r.o.,
+420777714458, martin(dot)povolny(at)solnet(dot)cz