Robert Haas wrote:
> On Sat, Oct 30, 2010 at 9:30 PM, Arturas Mazeika <mazeika(at)gmail(dot)com> wrote:
>> Thanks for the info, this explains a lot.
>> Yes, I am upgrading from the 32bit version to the 64bit one.
>> We have pretty large databases (some with over 1 trillion rows, and some
>> containing large documents in blobs). Giving Postgres more memory than the
>> 4GB limit is something we had long been longing for. Postgres has been able
>> to handle large datasets (I suppose it uses something like the long long
>> (64-bit) data type in C++), and I naively hoped that Postgres would be able
>> to migrate from one version to the other without too much trouble.
>> I tried to pg_dump one of the DBs with large documents. It failed with an
>> out-of-memory error. I suppose it is rather hard to migrate in my case :-( Any
> Yikes, that's not good. How many tables do you have in your database?
> How many large objects? Any chance you can coax a stack trace out of
Well, the usual problem is that it is fairly easy to get large (several
hundred megabyte) bytea objects into the database, but upon retrieval we
tend to take up to 3x the size of the object in actual memory
consumption, which causes us to hit all kinds of limits (especially
on 32-bit boxes).
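As an illustration, here is a minimal Python sketch of one plausible source of that ~3x figure: with the default hex output format for bytea (the server default since 9.0), the text representation on the wire is roughly twice the binary size, and the client then holds a decoded binary copy on top of the text one. The exact accounting inside libpq and pg_dump is an assumption here, not something established in this thread:

```python
# Sketch (assumption, not PostgreSQL internals): fetching a large bytea
# can cost ~3x its size, because the server ships it as hex text
# ("\x" + 2 chars per byte) and the client also keeps the decoded bytes.

payload = bytes(range(256)) * 4096  # ~1 MB of binary data

# Text representation under bytea_output = 'hex': "\x" prefix + 2 chars/byte
hex_text = "\\x" + payload.hex()

# Client decodes the text back into a second, binary copy
decoded = bytes.fromhex(hex_text[2:])
assert decoded == payload

# Memory held at once: the text copy (~2x) plus the binary copy (1x)
ratio = (len(hex_text) + len(decoded)) / len(payload)
print(round(ratio, 2))  # ≈ 3.0
```

With the older 'escape' output format the text copy can be even larger (up to 4 bytes per input byte), so 3x is not a hard ceiling.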
We really need to look into reducing that, or put a more prominent
"don't use bytea for anything larger than, say, 50MB" warning in the
documentation.