Re: When adding millions of rows at once, getting out of disk space errors

From: Mike Christensen <imaudi(at)comcast(dot)net>
To: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
Cc: Grzegorz Jaśkiewicz <gryzman(at)gmail(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: When adding millions of rows at once, getting out of disk space errors
Date: 2009-02-19 10:58:02
Message-ID: 499D3B3A.1080202@comcast.net
Lists: pgsql-general

I would assume that if the account the service runs under has limited
disk space, it won't really matter which OS you're on; Postgres will
throw an "out of disk space" error either way.

The problem for me is that I was in the middle of a transaction that
inserted about 50,000 rows into a table, and that transaction was still
pending in the transaction log. For some reason the service didn't quit
properly, and when it started back up it began a repair process. When
it got to the pending transactions (about 16 MB of data, I think), it
just hung, with no useful errors or any output to the log or stderr. I
think this needs to be fixed!

First off, when Postgres starts and sees that your database was not
closed properly, it should tell you there are pending transactions and
ask whether you want to dump them or try to process them (or maybe save
them for later). If you process them, there should be clear status
output so you know what's going on. It's very possible the service
would have /eventually/ started up for me had I waited long enough (I
gave it 5-6 hours with no logs, no output, and no change in memory
consumption, so I assumed it was dead).

Also, if there are errors processing the transaction log, rather than
just erroring out and exiting, it should at least tell you to run
pg_resetxlog.exe. Another idea: if I delete everything in the pg_xlog
directory, it should recover and start up cleanly rather than
complaining about missing checkpoint files.
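For anyone who hits the same wall, here's a rough sketch of the manual
reset (a dry run that only prints the command; the data directory path
is a placeholder, not my actual setup — on Windows it lives under the
PostgreSQL install directory):

```shell
#!/bin/sh
# Dry-run sketch of the manual WAL reset; DATADIR is a placeholder path.
# -f forces the reset even when pg_control looks damaged; it throws away
# the pending WAL, so any uncommitted transactions in it are lost.
DATADIR="/var/lib/postgresql/data"
RESET_CMD="pg_resetxlog -f $DATADIR"
# Printed rather than executed: run it only with the server stopped, and
# only when losing the pending transactions is acceptable.
echo "Would run: $RESET_CMD"
```

After the reset the server starts with a fresh transaction log, which
is exactly why it should only be a last resort.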

No big problem in the end. The good news is that Postgres never once
actually corrupted my data, and I didn't lose a single byte (well,
except for the pending transactions, which I didn't care about). More
good news: I learned a lot, got familiar with some of the source code
and debugging, and it finally made me get around to writing an
automatic backup script that runs every midnight.
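The backup script itself is just pg_dump on a schedule; a minimal
sketch, where the database name and paths are placeholders rather than
my actual setup:

```shell
#!/bin/sh
# Minimal nightly-backup sketch; DBNAME and BACKUP_DIR are placeholders.
DBNAME=mydb
BACKUP_DIR=${BACKUP_DIR:-/tmp/pg_backups}
STAMP=$(date +%Y-%m-%d)
OUTFILE="$BACKUP_DIR/$DBNAME-$STAMP.dump"
mkdir -p "$BACKUP_DIR"
if command -v pg_dump >/dev/null 2>&1 && pg_isready -q 2>/dev/null; then
    # -Fc writes a compressed custom-format archive restorable with pg_restore
    pg_dump -Fc "$DBNAME" -f "$OUTFILE"
else
    echo "server not reachable; would run: pg_dump -Fc $DBNAME -f $OUTFILE"
fi
```

Scheduled from cron with something like
"0 0 * * * /usr/local/bin/pg_backup.sh" (on Windows, Task Scheduler
does the same job).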

Mike

Scott Marlowe wrote:
> On Thu, Feb 19, 2009 at 3:11 AM, Mike Christensen <imaudi(at)comcast(dot)net> wrote:
>
>> Actually I'm writing emails on my Mac <g>
>>
>> However, the Postgres service is running on my Windows 2003 machine..
>>
>> The disk space issue turned out to be a disk quota which was easy to solve.
>> Unfortunately, the fact it crashed Postgres and with a massive transaction
>> log left the server in a state where it wouldn't boot anymore. I was
>> eventually able to fix it by resetting the transaction log manually. I'm
>> hoping future versions of Postgres will handle this scenario a lot better..
>>
>
> They're certainly supposed to. I've had no such problems running out
> of space on linux in the past. I wonder if it's a windows thing.
>
>
