(Postgres 9.1 on CentOS)
I am performing an update to two columns on a table with 40 million rows, all
in one transaction.
The size of the table on disk (according to pg_relation_size) is 131 GB. My
question is: when an update touches all of these rows, how much disk space
should I provision?
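For reference, the size figure above came from something like the following
query (the table name here is just a placeholder):

```sql
-- Report the on-disk size of the table's main fork, human-readable.
-- 'my_big_table' is a stand-in for the actual table name.
SELECT pg_size_pretty(pg_relation_size('my_big_table'));
```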
It would also be nice to understand how Postgres physically handles a large
update like this. (Does it create a temporary or global temporary table, and
then drop it when the transaction commits?)
--
Steve Horn