Large update and disk usage

From: Steve Horn <steve(at)stevehorn(dot)cc>
To: pgsql-novice(at)postgresql(dot)org
Cc: mclark <mclark(at)n-focus(dot)com>, Greg Cordle <gcordle(at)n-focus(dot)com>
Subject: Large update and disk usage
Date: 2012-04-13 14:57:00
Message-ID: CAFLkBaXTcF=vi=MoCKf1_qn03QRjwCjvP1GBc_rX6wd2zOBCdg@mail.gmail.com
Lists: pgsql-novice

(Postgres 9.1 on CentOS)

I'm performing an update to two columns on a table with 40 million
records, all in one transaction.
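
For concreteness, the statement has this general shape (table and
column names here are made up for illustration):

    UPDATE big_table
    SET    col_a = col_a + 1,
           col_b = now();

There is no WHERE clause, so every one of the 40 million rows is
touched.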

The size of the table on disk (according to pg_relation_size) is 131GB.
My question: when all of these rows are updated, how much additional
disk space should I provision?
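
In case it matters, this is how I'm measuring (big_table again stands
in for the real name); pg_total_relation_size also counts indexes and
TOAST, which I assume need headroom too:

    SELECT pg_size_pretty(pg_relation_size('big_table'));        -- heap only: 131GB
    SELECT pg_size_pretty(pg_total_relation_size('big_table'));  -- heap + indexes + TOAST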

It would also be nice to understand how Postgres physically handles
large updates like this. (Does it create a temporary or global
temporary table and then drop it when the transaction is committed?)
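
If it's relevant, I can watch what happens via the statistics views;
my guess is that the old row versions stick around as dead tuples until
VACUUM rather than going into a temporary table. For example (made-up
table name again):

    SELECT relname, n_live_tup, n_dead_tup
    FROM   pg_stat_user_tables
    WHERE  relname = 'big_table';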

--
Steve Horn
