From: Sebastien Boisvert <sebastienboisvert(at)yahoo(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Problems backing up
Date: 2010-04-07 17:10:14
Message-ID: 496018.97982.qm@web34305.mail.mud.yahoo.com
Lists: pgsql-general
----- Original Message ----
> From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
>> [ COPY fails to dump a 138MB bytea column ]
> I wonder whether you are doing anything that exacerbates
> the memory requirement, for instance by forcing an encoding conversion to
> something other than the database's server_encoding.
Our backups are done with the "-F c" option (in addition to the normal user/host/port options). As far as I know that shouldn't trigger any encoding conversion, since everything is UTF8 all around. If you still think that might be the case, is there a way to force it _not_ to do the conversion?
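(For what it's worth, one way to rule conversion out entirely would be to pin pg_dump's client encoding to the server's with `-E`. A sketch, where the database name `mydb`, host, user, and output file are placeholders:)

```shell
# Confirm the database's server-side encoding first (placeholder connection options)
psql -h dbhost -U dbuser -d mydb -Atc "SHOW server_encoding;"

# Dump in custom format with the client encoding pinned to match the server's,
# so COPY should not need to perform any encoding conversion
pg_dump -F c -E UTF8 -h dbhost -U dbuser -f mydb.dump mydb
```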
The restores are done using the same option. We've recently hit a similar problem where restoring a backup fails with the same kind of out-of-memory error. Backing up that database works fine, however, as does reading all of its data.