Stefan Kaltenbrunner <stefan(at)kaltenbrunner(dot)cc> writes:
> well the usual problem is that it is fairly easy to get large (several
> hundred megabyte) bytea objects into the database, but upon retrieval
> we tend to use up to 3x the size of the object in actual memory
> consumption, which causes us to hit all kinds of limits (especially
> on 32bit boxes).
It occurs to me that one place that might be unnecessarily eating
backend memory during pg_dump is encoding conversion during COPY OUT.
Make sure that pg_dump isn't asking for a conversion to some other
encoding than what the database uses. I think the default is to avoid
conversion, so this might be a dead end --- but if for instance you
had PGCLIENTENCODING set in the client environment, it could bite you.
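A quick way to check for this is to compare the server encoding against what the client environment requests, and make sure pg_dump isn't forced into a conversion. This is a hypothetical session (database name and encoding are assumptions, not from the original report):

```shell
# Inspect what the server uses and what the client environment asks for.
psql -d mydb -c "SHOW server_encoding;"   # e.g. UTF8
echo "${PGCLIENTENCODING:-unset}"         # anything different forces COPY OUT conversion

# Clear the override so the dump uses the database's own encoding,
# avoiding a conversion buffer for each large row:
unset PGCLIENTENCODING
pg_dump mydb > mydb.dump

# Alternatively, pin the dump encoding explicitly to match the server:
pg_dump -E UTF8 mydb > mydb.dump
```

pg_dump's -E/--encoding option exists for the opposite case too (deliberately dumping in a different encoding), which is exactly the situation to avoid here.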
regards, tom lane