From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Andrea Urbani <matfanjol(at)mail(dot)com>
Cc: Stephen Frost <sfrost(at)snowman(dot)net>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [ patch ] pg_dump: new --custom-fetch-table and --custom-fetch-value parameters
Date: 2017-02-13 15:42:25
Message-ID: CA+TgmoaF+oxaWuRS6g=Y_4tbgy1bh89OUX7Akf2QQdHsXpQofg@mail.gmail.com
Lists: pgsql-hackers

On Sat, Feb 11, 2017 at 9:56 AM, Andrea Urbani <matfanjol(at)mail(dot)com> wrote:
> I'm a beginner here... anyway I try to share my ideas.
>
> My situation has gotten worse: I am no longer able to run pg_dump at all, neither with my custom fetch value (I tried "1", i.e. one row at a time) nor without "--column-inserts":
>
> pg_dump: Dumping the contents of table "tDocumentsFiles" failed: PQgetResult() failed.
> pg_dump: Error message from server: ERROR: out of memory
> DETAIL: Failed on request of size 1073741823.
> pg_dump: The command was: COPY public."tDocumentsFiles" ("ID_Document", "ID_File", "Name", "FileName", "Link", "Note", "Picture", "Content", "FileSize", "FileDateTime", "DrugBox", "DrugPicture", "DrugInstructions") TO stdout;
>
> I don't know if the Kyotaro Horiguchi patch will solve this because, again, I am not able to fetch even a single row.
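[For readers following the thread: when --column-inserts is used, pg_dump already reads table data through a server-side cursor rather than a single COPY, and the proposed --custom-fetch-value parameter makes the per-FETCH batch size configurable. A hedged SQL sketch of that access pattern (the cursor name here is illustrative, not pg_dump's actual identifier):]

```sql
BEGIN;
-- Declare a cursor over the problematic table instead of streaming it
-- in one COPY ... TO stdout.
DECLARE doc_cur NO SCROLL CURSOR FOR
    SELECT * FROM public."tDocumentsFiles";
-- Fetch a small batch per round trip; --custom-fetch-value 1 would
-- correspond to fetching one row at a time.
FETCH 1 FROM doc_cur;
FETCH 1 FROM doc_cur;  -- repeat until no rows are returned
CLOSE doc_cur;
COMMIT;
```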

Yeah, if you can't fetch even one row, limiting the fetch size won't
help. But why is that failing? A single 1GB allocation should be
fine on most modern servers. I guess the fact that you're using a
32-bit build of PostgreSQL is probably a big part of the problem;
there is probably only 2GB of available address space and you're
trying to find a single, contiguous 1GB chunk. If you switch to using
a 64-bit PostgreSQL things will probably get a lot better for you,
unless the server's actual memory is also very small.
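[Editorial aside: the failed request size in the error is not arbitrary. It is exactly 1 GiB minus one byte, which matches PostgreSQL's MaxAllocSize cap on ordinary palloc() requests (0x3FFFFFFF in src/include/utils/memutils.h), and that chunk must be contiguous within the roughly 2-3 GiB of usable address space a 32-bit process gets. A small sketch of the arithmetic:]

```python
# Arithmetic behind the "Failed on request of size 1073741823" error.
# Assumption (per the thread): the server is a 32-bit build.

failed_request = 1073741823          # size from the error DETAIL, in bytes
max_alloc_size = 0x3FFFFFFF          # PostgreSQL's MaxAllocSize cap

print(failed_request == 2**30 - 1)   # exactly 1 GiB minus one byte
print(failed_request == max_alloc_size)

# A 32-bit process has at most 4 GiB of virtual addresses, of which the
# OS typically reserves 1-2 GiB, so a single contiguous 1 GiB chunk may
# simply not exist even when plenty of physical RAM is free.
total_32bit_address_space = 2**32    # 4 GiB
print(total_32bit_address_space // 2**30)  # 4 (GiB)
```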

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
