Re: Problem with multi-job pg_restore

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Brian Weaver <cmdrclueless(at)gmail(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Problem with multi-job pg_restore
Date: 2012-05-01 16:59:00
Message-ID: 16092.1335891540@sss.pgh.pa.us
Lists: pgsql-hackers

Brian Weaver <cmdrclueless(at)gmail(dot)com> writes:
> I think I've discovered an issue with multi-job pg_restore on a 700 GB
> data file created with pg_dump.

Just to clarify, you mean parallel restore, right? Are you using any
options beyond -j, that is any sort of selective restore?
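
(For concreteness, the sort of invocation in question would look something like the line below; the job count, database name, and dump path are only placeholders, and a selective restore would add switches such as -L, -n, or -t:)

    pg_restore -j 8 -d targetdb /path/to/dump.custom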

> The problem occurs during the restore when one of the bulk loads
> (COPY) seems to get disconnected from the restore process. I captured
> stdout and stderr from the pg_restore execution and there isn't a
> single hint of a problem. When I look at the log file in the
> $PGDATA/pg_log directory I found the following errors:

> LOG: could not send data to client: Connection reset by peer
> STATEMENT: COPY public.outlet_readings_rollup (id, outlet_id,
> rollup_interval, reading_time, min_current, max_current,
> average_current, min_active_power, max_active_power,
> average_active_power, min_apparent_power, max_apparent_power,
> average_apparent_power, watt_hour, pdu_id, min_voltage, max_voltage,
> average_voltage) TO stdout;

I'm confused. A copy-to-stdout ought to be something that pg_dump
would do, not pg_restore. Are you sure this is related at all?
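
(To illustrate the distinction, with the column list from your log abbreviated: dumping a table's data issues the TO-stdout form, while loading data back into the server issues the FROM-stdin form, so the statement above looks like dump-side activity rather than restore-side:)

    -- dump side: pg_dump extracting table data
    COPY public.outlet_readings_rollup (id, outlet_id, ...) TO stdout;

    -- restore side: pg_restore loading table data
    COPY public.outlet_readings_rollup (id, outlet_id, ...) FROM stdin;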

regards, tom lane
