Re: Problem w/ dumping huge table and no disk space

From: David Ford <david(at)blue-labs(dot)org>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Problem w/ dumping huge table and no disk space
Date: 2001-09-07 22:29:08
Message-ID: 3B994A34.9090100@blue-labs.org
Lists: pgsql-general

$ postgres --version
postgres (PostgreSQL) 7.1beta5

1) If I run pg_dump, it runs for about 20 minutes, then aborts abruptly
with an out-of-memory error; pg_dump is killed by the kernel and
postgres spews pipe errors until it reaches the end of the table or I
kill it. It starts with ~100 MB of regular RAM free and 300 MB of swap.

2) If I try to do a 'delete from ...' query, it runs for about 20
minutes, and then all of a sudden only 4 MB of disk space is free and pg
dies. It starts with ~500 MB of disk space free.
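One common workaround for the DELETE case is to remove rows in small batches, so no single transaction has to log millions of row deletions at once. A rough sketch, assuming a numeric key column; `bigtable`, `id`, and `mydb` are placeholder names, not from this thread:

```shell
# Delete in batches of 100k rows so each transaction's WAL footprint
# stays bounded (bigtable, id, and mydb are hypothetical names).
for start in $(seq 0 100000 10000000); do
    end=$((start + 100000))
    psql -c "DELETE FROM bigtable WHERE id >= $start AND id < $end" mydb
done
```

Note that DELETE alone doesn't return space to the filesystem; a VACUUM afterward (or between batches) is still needed to actually reclaim disk.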

So in either situation I'm kind of screwed. The new machine is running
7.2devel, so I doubt I could just copy the data directory across.
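Since the new machine is up, one option in the spirit of Tom's suggestion below is to stream the dump over the network so nothing large ever lands on the full disk. A sketch, where `newhost` and the database/table names are placeholders:

```shell
# Pipe the dump through gzip and ssh straight to the new machine;
# nothing is written to the local (nearly full) disk.
# bigtable, mydb, and newhost are hypothetical names.
pg_dump -t bigtable mydb | gzip -c | ssh newhost 'cat > /tmp/bigtable.sql.gz'
```

This only addresses the disk-space side of the problem, not the out-of-memory kill in (1).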

My WAL files setting is 8, and 8*16 is 128 MB, no?
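For the arithmetic: each WAL segment in 7.1 is 16 MB, so 8 segments should indeed cap out around 128 MB in the steady state; the fact that the log grows far beyond that is presumably why Tom points at the WAL fixes in 7.1.3.

```shell
# Steady-state WAL footprint: number of segments times 16 MB each.
wal_files=8
segment_mb=16
echo "$((wal_files * segment_mb)) MB"   # prints "128 MB"
```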

Tom Lane wrote:

>David Ford <david(at)blue-labs(dot)org> writes:
>
>>I have a 10million+ row table and I've only got a couple hundred megs
>>left. I can't delete any rows, pg runs out of disk space and crashes.
>>
>
>What is running out of disk space, exactly?
>
>If the problem is WAL log growth, an update to 7.1.3 might help
>(... you didn't say which version you're using).
>
>If the problem is lack of space for the pg_dump output file, I think you
>have little choice except to arrange for the dump to go to another
>device (maybe dump it across NFS, or to a tape, or something).
>
> regards, tom lane
>
