Re: parallel dump fails to dump large tables

From: Shanker Singh <ssingh(at)iii(dot)com>
To: Shanker Singh <ssingh(at)iii(dot)com>, Sterfield <sterfield(at)gmail(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "rod(at)iol(dot)ie" <rod(at)iol(dot)ie>, "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: parallel dump fails to dump large tables
Date: 2015-02-25 20:36:47
Message-ID: 961471F4049EF94EAD4D0165318BD88162590899@Corp-MBXE3.iii.com
Lists: pgsql-general

There is no problem dumping large tables using parallel dump. My script had a limit on the file size, which was causing the parallel dump to abort on large tables. Thanks, everyone, for your valuable suggestions.

Thanks
shanker
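
For anyone who hits the same symptom later, here is a minimal sketch of how a file-size limit inside a wrapper script can produce exactly this failure; the script, paths, and database name below are hypothetical placeholders, not the actual script from this thread.

#!/bin/sh
# Hypothetical wrapper around pg_dump. A filesize ulimit set here is
# inherited by the pg_dump worker processes, so as soon as one output
# file grows past the cap a worker is killed and the whole parallel
# dump aborts with "File size limit exceeded".
ulimit -f 1048576                            # cap file size (units are shell-dependent; 1024-byte blocks in bash)
pg_dump -Fd -j 8 -f /backups/mydb.dir mydb   # placeholder output directory and database name

# Fix: drop (or raise) the cap before invoking pg_dump.
# ulimit -f unlimited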

From: Shanker Singh
Sent: Monday, February 23, 2015 6:18 PM
To: Sterfield
Cc: Tom Lane; rod(at)iol(dot)ie; pgsql-general(at)postgresql(dot)org; Shanker Singh
Subject: RE: [GENERAL] parallel dump fails to dump large tables

I tried dumping the largest problem table using the -j1 flag with parallel dump. This time I got the error "File size limit exceeded" on the console, but the system allows unlimited file sizes. Also, pg_dump without the -j flag goes through fine. Do you guys know what's going on with parallel dump? The system is 64-bit CentOS (2.6.32-504.8.1.el6.x86_64 #1 SMP Wed Jan 28 21:11:36 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux) with an ext4 file system.

limit
cputime unlimited
filesize unlimited
datasize unlimited
stacksize 10240 kbytes
coredumpsize 0 kbytes
memoryuse unlimited
vmemoryuse unlimited
descriptors 25000
memorylocked 64 kbytes
maxproc 1024
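
Worth noting for anyone debugging a similar case: the limits of the interactive shell shown above are not necessarily the limits of the process that actually runs pg_dump. A sketch of how to check the effective limit of a running dump on Linux, using a placeholder PID:

# While the dump is running, find a pg_dump process and read the limits
# it actually inherited (Linux-specific /proc interface).
pgrep -l pg_dump
grep -i 'max file size' /proc/12345/limits   # 12345 is a placeholder PID
# "Max file size" here reflects whatever launched pg_dump (e.g. a
# wrapper script), not the interactive shell's "limit" output.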

From: Sterfield [mailto:sterfield(at)gmail(dot)com]
Sent: Sunday, February 22, 2015 8:50 AM
To: Shanker Singh
Cc: Tom Lane; rod(at)iol(dot)ie; pgsql-general(at)postgresql(dot)org
Subject: Re: [GENERAL] parallel dump fails to dump large tables

2015-02-20 14:26 GMT-08:00 Shanker Singh <ssingh(at)iii(dot)com>:
I tried turning off ssl renegotiation by setting "ssl_renegotiation_limit = 0" in postgresql.conf but it had no effect. The parallel dump still fails on large tables consistently.

Thanks
Shanker
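
One thing worth double-checking with that setting: it only applies to sessions started after the configuration has been reloaded. A quick sketch, with placeholder connection details; the database and user names are taken from the error log quoted further down:

# Reload the server configuration, then confirm the value seen by a
# brand-new session (pg_dump connections are new sessions too).
psql -h dbhost -U pdroot -d iii -c "SELECT pg_reload_conf();"        # needs a superuser; "pg_ctl reload" on the server also works
psql -h dbhost -U pdroot -d iii -c "SHOW ssl_renegotiation_limit;"   # expect 0 if the change applied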

Hi,
Maybe you could try to set up an SSH connection between the two servers with a keepalive option and leave it open for a long time (at least the duration of your backup), just to test whether the connection is still being cut after some time.
That way you will know whether the problem is related to SSH or to PostgreSQL.
Thanks,
Guillaume
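
A rough sketch of the kind of long-lived keepalive test described above; the host name is a placeholder:

# From the machine running pg_dump, hold an SSH session to the database
# server open with client-side keepalives for at least the duration of
# a backup, and see whether it survives.
ssh -o ServerAliveInterval=60 -o ServerAliveCountMax=3 dbserver \
    'while true; do date; sleep 60; done' | tee ssh_keepalive.log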

-----Original Message-----
From: Tom Lane [mailto:tgl(at)sss(dot)pgh(dot)pa(dot)us]
Sent: Saturday, February 14, 2015 9:00 AM
To: rod(at)iol(dot)ie
Cc: Shanker Singh; pgsql-general(at)postgresql(dot)org
Subject: Re: [GENERAL] parallel dump fails to dump large tables

"Raymond O'Donnell" <rod(at)iol(dot)ie> writes:
> On 14/02/2015 15:42, Shanker Singh wrote:
>> Hi,
>> I am having problem using parallel pg_dump feature in postgres
>> release 9.4. The size of the table is large(54GB). The dump fails
>> with the
>> error: "pg_dump: [parallel archiver] a worker process died
>> unexpectedly". After this error the pg_dump aborts. The error log
>> file gets the following message:
>>
>> 2015-02-09 15:22:04 PST [8636]: [2-1]
>> user=pdroot,db=iii,appname=pg_dump
>> STATEMENT: COPY iiirecord.varfield (id, field_type_tag, marc_tag,
>> marc_ind1, marc_ind2, field_content, field_group_id, occ_num,
>> record_id) TO stdout;
>> 2015-02-09 15:22:04 PST [8636]: [3-1]
>> user=pdroot,db=iii,appname=pg_dump
>> FATAL: connection to client lost

> There's your problem - something went wrong with the network.

I'm wondering about SSL renegotiation failures as a possible cause of the disconnect --- that would explain why it only happens on large tables.

regards, tom lane
