From: Andy Colson <andy(at)squeakycode(dot)net>
To: David Rowley <david(dot)rowley(at)2ndquadrant(dot)com>
Cc: pgsql <pgsql-general(at)postgresql(dot)org>
Subject: Re: pg_dump out of memory
Date: 2018-07-04 14:38:04
Message-ID: 0d202a74-fed9-6818-557c-7cb2fc5fa22c@squeakycode.net
Lists: pgsql-general
On 07/04/2018 12:31 AM, David Rowley wrote:
> On 4 July 2018 at 14:43, Andy Colson <andy(at)squeakycode(dot)net> wrote:
>> I moved a physical box to a VM, and set its memory to 1Gig. Everything
>> runs fine except one backup:
>>
>>
>> /pub/backup# pg_dump -Fc -U postgres -f wildfire.backup wildfire
>>
>> pg_dump: Dumping the contents of table "ofrrds" failed: PQgetResult() failed.
>> pg_dump: Error message from server: ERROR: out of memory
>> DETAIL: Failed on request of size 1073741823.
>> pg_dump: The command was: COPY public.ofrrds (id, updateddate, bytes) TO stdout;
>
> There will be less memory pressure on the server if the pg_dump was
> performed from another host. When running pg_dump locally the 290MB
> bytea value will be allocated in both the backend process pg_dump is
> using and pg_dump itself. Running the backup remotely won't require
> the latter to be allocated on the server.
>
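For the archives, I think the remote dump David is describing would be run from some other box with spare memory and pointed back at the database server; roughly like this, where db-server is just a placeholder for wherever the database actually lives:

  pg_dump -h db-server -p 5432 -Fc -U postgres -f wildfire.backup wildfire

That way only the backend on the server has to hold the big bytea value, and pg_dump's own copy of it lives on the other machine.
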
>> I've been reducing my memory settings:
>>
>> maintenance_work_mem = 80MB
>> work_mem = 5MB
>> shared_buffers = 200MB
>
> You may also get it to work by reducing shared_buffers further.
> work_mem won't have any effect, and neither will maintenance_work_mem.
>
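If I do go the shared_buffers route, it's a one-line change in postgresql.conf plus a server restart; something like this (128MB is just a guess at a value low enough for a 1 gig VM, not a tested number):

  shared_buffers = 128MB    # was 200MB
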
> Failing that, the suggestions of more RAM and/or swap look good.
>
Adding more RAM to the VM is the simplest option. It just seems a waste because of one backup.
Thanks all.
-Andy