
Re: Problem with multi-job pg_restore

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Brian Weaver <cmdrclueless(at)gmail(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Problem with multi-job pg_restore
Date: 2012-05-01 16:59:00
Lists: pgsql-hackers
Brian Weaver <cmdrclueless(at)gmail(dot)com> writes:
> I think I've discovered an issue with multi-job pg_restore on a 700 GB
> data file created with pg_dump.

Just to clarify, you mean parallel restore, right?  Are you using any
options beyond -j, that is any sort of selective restore?

> The problem occurs during the restore when one of the bulk loads
> (COPY) seems to get disconnected from the restore process. I captured
> stdout and stderr from the pg_restore execution and there isn't a
> single hint of a problem. When I look at the log file in the
> $PGDATA/pg_log directory I found the following errors:

> LOG:  could not send data to client: Connection reset by peer
> STATEMENT:  COPY public.outlet_readings_rollup (id, outlet_id,
> rollup_interval, reading_time, min_current, max_current,
> average_current, min_active_power, max_active_power,
> average_active_power, min_apparent_power, max_apparent_power,
> average_apparent_power, watt_hour, pdu_id, min_voltage, max_voltage,
> average_voltage) TO stdout;

I'm confused.  A copy-to-stdout ought to be something that pg_dump
would do, not pg_restore.  Are you sure this is related at all?

			regards, tom lane
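[For reference, the distinction being drawn here is visible in the commands themselves: pg_dump reads tables with `COPY ... TO stdout` on the source server, while pg_restore loads them with `COPY ... FROM stdin` on the target, so a "TO stdout" statement in the target's log is unexpected during a restore. A minimal sketch of a dump followed by a parallel restore; the database names, paths, and job count are placeholders, not taken from the thread:]

```shell
# pg_dump issues COPY ... TO stdout for each table on the source server
# and writes a custom-format (-Fc) archive. "sourcedb" and the path are
# hypothetical.
pg_dump -Fc -f /backups/sourcedb.dump sourcedb

# pg_restore replays the archive against the target, issuing
# COPY ... FROM stdin for each table; -j 4 runs four restore jobs in
# parallel. "targetdb" is likewise a placeholder.
pg_restore -j 4 -d targetdb /backups/sourcedb.dump
```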
