Re: error updating a very large table

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Brian Cox <brian(dot)cox(at)ca(dot)com>
Cc: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: error updating a very large table
Date: 2009-04-15 13:51:37
Message-ID: 9060.1239803497@sss.pgh.pa.us
Lists: pgsql-performance

Brian Cox <brian(dot)cox(at)ca(dot)com> writes:
> I changed the logic to update the table in 1M row batches. However,
> after 159M rows, I get:

> ERROR: could not extend relation 1663/16385/19505: wrote only 4096 of
> 8192 bytes at block 7621407

You're out of disk space.  A partial write like that (4096 of 8192 bytes)
while trying to extend the table by another 8K block is the classic
symptom of a full filesystem.
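
(As a side check: the last number in that path is the table's relfilenode,
so a lookup along these lines, run in the affected database, should confirm
which table the backend was trying to extend and how big it has gotten.
"your_db" here is a stand-in for the real database name.)

    psql -d your_db -c \
      "SELECT relname, pg_size_pretty(pg_relation_size(oid::regclass))
         FROM pg_class WHERE relfilenode = 19505;"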

> A df run on this machine shows plenty of space:

Per-user quota restriction, perhaps?
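
(Assuming standard Linux quota tools and a server running as the "postgres"
user, something like

    quota -s -u postgres

on the machine holding the data directory would show whether a quota is
being hit; the exact command depends on your platform.)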

I'm also wondering about temporary files, although I suppose 100G worth
of temp files is a bit much for this query. But you need to watch df
while the query is happening, rather than suppose that an after-the-fact
reading means anything.
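
A rough sketch of what I mean, assuming the data directory lives under
/var/lib/pgsql (adjust the paths to your actual layout):

    # sample free space and temp-file growth every 10 seconds while the UPDATE runs
    while true; do
        date
        df -h /var/lib/pgsql
        du -sh /var/lib/pgsql/data/base/*/pgsql_tmp 2>/dev/null
        sleep 10
    done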

regards, tom lane
