
Re: Large file support needed? Trying to identify root of

From: "Scott Marlowe" <smarlowe(at)qwest(dot)net>
To: "Kris Kiger" <kris(at)musicrebellion(dot)com>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: Large file support needed? Trying to identify root of
Date: 2004-07-19 20:07:54
Message-ID: 1090267674.709.5.camel@localhost.localdomain
Lists: pgsql-admin
On Mon, 2004-07-19 at 13:28, Kris Kiger wrote:
> I've got a database that is a single table with 5 integers, a timestamp 
> with time zone, and a boolean.  The table is 170 million rows in length. 
>  The contents of the tar'd dump file it produced using:
>     pg_dump -U postgres -Ft test > test_backup.tar
> is: 8.dat (approximately 8GB), a toc, and restore.sql.  
> No errors are reported on dump, however, when a restore is attempted I get:
> ERROR:  unexpected message type 0x58 during COPY from stdin
> CONTEXT:  COPY test_table, line 86077128: ""
> ERROR:  could not send data to client: Broken pipe
> CONTEXT:  COPY test_table, line 86077128: ""
> I am doing the dump & restore on the same machine.
> Any ideas?  If the file is too large, is there any way postgres could 
> break it up into smaller chunks for the tar when backing up?  Thanks for 
> the help!

How, exactly, are you restoring?  Doing things like:

cat file | pg_restore ...

can cause problems, because cat is limited to 2 GB files on many OSes.
Just use a redirect instead:

psql dbname < file
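Since the dump in question was made with pg_dump's tar format (-Ft), a restore that avoids any intermediate pipe might look like the following sketch. The database and file names are taken from the thread; the exact flags you need may differ:

```shell
# pg_restore opens the tar archive itself, so no pipe (e.g. cat)
# can truncate the data at a 2 GB boundary:
pg_restore -U postgres -d test test_backup.tar

# For a plain-SQL dump, let the shell do the redirect instead of cat:
psql -U postgres test < test_backup.sql
```

Either way the key point is the same: the file is opened directly by a large-file-aware program rather than streamed through a utility that may lack large file support.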
