From: Simon Riggs <simon(at)2ndQuadrant(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Brian Cox <brian(dot)cox(at)ca(dot)com>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: error updating a very large table
Date: 2009-04-15 16:57:17
Message-ID: 1239814637.23905.44.camel@ebony.2ndQuadrant
Lists: pgsql-performance
On Wed, 2009-04-15 at 09:51 -0400, Tom Lane wrote:
> Brian Cox <brian(dot)cox(at)ca(dot)com> writes:
> > I changed the logic to update the table in 1M row batches. However,
> > after 159M rows, I get:
>
> > ERROR: could not extend relation 1663/16385/19505: wrote only 4096 of
> > 8192 bytes at block 7621407
>
> You're out of disk space.
>
> > A df run on this machine shows plenty of space:
>
> Per-user quota restriction, perhaps?
>
> I'm also wondering about temporary files, although I suppose 100G worth
> of temp files is a bit much for this query. But you need to watch df
> while the query is happening, rather than suppose that an after-the-fact
> reading means anything.
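Tom's advice to watch df while the query is running can be scripted rather than done by hand. A minimal sketch (the data-directory path and polling interval are illustrative assumptions, not from this thread):

```python
# Poll free space on the filesystem holding the PostgreSQL data
# directory while a long-running query executes, so an out-of-space
# failure can be correlated with actual disk usage at the time.
import os
import time


def free_bytes(path):
    """Return free bytes on the filesystem containing path (like df)."""
    st = os.statvfs(path)
    return st.f_bavail * st.f_frsize


def watch_free_space(path, interval=5.0, iterations=None):
    """Print free space every `interval` seconds; run alongside the query."""
    n = 0
    while iterations is None or n < iterations:
        mb_free = free_bytes(path) // (1024 * 1024)
        print("%s: %d MB free" % (path, mb_free))
        time.sleep(interval)
        n += 1
```

Run with something like `watch_free_space("/var/lib/pgsql", interval=5.0)` in a second terminal while the UPDATE executes, and keep the output alongside the server log.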
Any time we hit an out-of-space error we will be in the same situation.
When we get this error, we should report:
* a summary of current temp file usage
* df output (where the OS allows it)
Otherwise we'll always be left wondering what caused the error.
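The temp-file summary suggested above could be gathered by walking the server's temp directory and totalling file sizes. A hedged sketch (the pgsql_tmp location under the data directory is an assumption about the installation's layout):

```python
# Summarize temp file usage by walking a directory such as
# <datadir>/base/pgsql_tmp and totalling the sizes of the files found.
import os


def temp_file_usage(tmp_dir):
    """Return (file_count, total_bytes) for all files under tmp_dir."""
    count = 0
    total = 0
    for root, _dirs, files in os.walk(tmp_dir):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
                count += 1
            except OSError:
                pass  # a temp file may vanish mid-walk; skip it
    return count, total
```

Something like `temp_file_usage("/var/lib/pgsql/data/base/pgsql_tmp")`, captured at the moment the error fires, would answer the question directly.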
--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Training, Services and Support