"Stephen Denne" <Stephen(dot)Denne(at)datamail(dot)co(dot)nz> writes:
> A simple update query, over roughly 17 million rows, populating a newly added column in a table, resulted in an out of memory error when the process memory usage reached 2GB. Could this be due to a poor choice of some configuration parameter, or is there a limit on how many rows I can update in a single statement?
Do you have any triggers or foreign keys on that table? For that
matter, let's see its whole schema definition.
regards, tom lane
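
(The question about triggers and foreign keys points at a likely cause: in PostgreSQL, foreign-key checks are implemented as AFTER triggers, and each row modified by a statement adds a pending trigger event held in backend memory, so a single 17-million-row UPDATE can exhaust the process's address space. A common workaround is to split the update into smaller batches, each in its own transaction. A minimal sketch, assuming a hypothetical table big_table with an integer primary key id and the newly added column new_col:

```sql
-- Hypothetical schema: big_table(id integer PRIMARY KEY, new_col integer).
-- Populate new_col in key-range batches so the per-transaction
-- pending-trigger event list stays small. Run each UPDATE in its
-- own transaction, advancing the id range until no rows remain.
UPDATE big_table
SET    new_col = 0
WHERE  new_col IS NULL
AND    id BETWEEN 1 AND 1000000;

-- next batch:
UPDATE big_table
SET    new_col = 0
WHERE  new_col IS NULL
AND    id BETWEEN 1000001 AND 2000000;
```

The `new_col IS NULL` predicate makes each batch idempotent, so an interrupted run can simply be restarted.)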