Re: 8.3rc1 Out of memory when performing update

From: Magnus Hagander <magnus(at)hagander(dot)net>
To: "Roberts, Jon" <Jon(dot)Roberts(at)asurion(dot)com>
Cc: cgallant(at)gmail(dot)com, pgsql-performance(at)postgresql(dot)org
Subject: Re: 8.3rc1 Out of memory when performing update
Date: 2008-01-25 18:45:43
Message-ID: 479A2E57.3060405@hagander.net
Lists: pgsql-performance

Roberts, Jon wrote:
>> Subject: Re: [PERFORM] 8.3rc1 Out of memory when performing update
>>
>>>> A simple update query, over roughly 17 million rows, populating a
>>>> newly added column in a table, resulted in an out of memory error
>>>> when the process memory usage reached 2GB. Could this be due to a
>>>> poor choice of some configuration parameter, or is there a limit
>>>> on how many rows I can update in a single statement?
>>>>
>>> I believe it is a platform problem, because this limit doesn't
>>> occur on *nix. But I am not a specialist in Windows.
>> On most Windows Servers (except for Datacenter Edition and a few
>> other variants), 2GB is the most a single process can address
>> without booting the machine with a special parameter called /3GB,
>> which allows allocating up to 3GB per process. That is the limit
>> unless you get special versions of Windows Server 2003, as far as I
>> know. If you do a Google search on /3GB with Windows, you will find
>> what I am referring to.
>
> Windows 32-bit is limited to 2 or 3 GB as you state, but 64-bit
> Windows isn't. 32-bit Linux has similar limits too.

Well, PostgreSQL on Windows is a 32-bit binary, so the limit applies to
this case.
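
A common workaround when a single huge UPDATE exhausts memory is to
batch the work into smaller statements, committing between batches so
per-statement resource usage stays bounded. A minimal sketch (the table,
column, and key names here are hypothetical, not from the original
report):

```sql
-- Hypothetical: populate new_col for ~17M rows in chunks of 100000
-- keyed on the primary key, rather than one giant statement.
UPDATE big_table
   SET new_col = DEFAULT
 WHERE id BETWEEN 1 AND 100000;
COMMIT;

UPDATE big_table
   SET new_col = DEFAULT
 WHERE id BETWEEN 100001 AND 200000;
COMMIT;

-- ...and so on for the remaining id ranges.
```

Each batch can be driven from a client-side loop over the key range;
the important part is that no single statement has to track all 17
million row updates at once.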

//Magnus
