"beer" <beer(at)cmu(dot)edu> writes:
> We have a medium sized database that when dumped creates +4G files
> within the tar archive. When we restore it seems that pg_restore has
> a 4G limit for reading files, once it reads 4G of a file, it moves on
> to the next file. Has anyone else experienced this problem?
There is a member size limit inherent to the tar-archive code, although
I thought it was 8G not 4G. I'd recommend using custom format (-Fc not
-Ft).
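For what it's worth, a minimal sketch of doing that (the database name `mydb` and file names here are placeholders; note the restore commands assume a live server, and `pg_restore -d` assumes the target database already exists):

```shell
# Dump in custom format (-Fc) instead of tar (-Ft); the custom format
# does not have the tar member size limit, and is compressed by default.
pg_dump -Fc -f mydb.dump mydb

# Restore into an existing (e.g. freshly created) database.
pg_restore -d mydb mydb.dump
```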
Still, if the thing is truncating your data and not telling you so,
that'd qualify as a bug.
On some platforms there might be a problem with lack of large-file
support at the stdio level, too. What is your platform?
regards, tom lane