From: Michael Richards <miker(at)scifair(dot)acadiau(dot)ca>
To: bugs(at)postgesql(dot)org, hackers(at)postgresql(dot)org
Cc: questions(at)postgresql(dot)org
Subject: Maybe a Vacuum bug in 6.3.2
Date: 1998-05-10 02:37:29
Message-ID: Pine.BSF.3.96.980509232524.14511B-100000@scifair.acadiau.ca
Lists: pgsql-admin, pgsql-hackers
Hi...
Gigantic table woes again... I get:
sc=> vacuum test_detail;
FATAL 1: palloc failure: memory exhausted
This is a very simple table too:

Column     | Type | Size
-----------|------|-----
word_id    | int4 | 4
url_id     | int4 | 4
word_count | int2 | 2
The failure happens while vacuuming a rather big table:
sc=> select count(*) from test_detail;
Field | Value
-- RECORD 0 --
count | 78444613
(1 row)
There is lots of free space on that drive:
/dev/sd1s1e 8854584 6547824 1598400 80% /scdb
The test_detail table is in a few files too...
-rw------- 1 postgres postgres 2147483648 May 9 23:28 test_detail
-rw------- 1 postgres postgres 2147483648 May 9 23:23 test_detail.1
-rw------- 1 postgres postgres 949608448 May 9 23:28 test_detail.2
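As a sanity check on the numbers above, the three file segments divided by the row count come out to roughly 67 bytes per row. (This is a hedged sketch using only the figures from this report; the idea that the extra ~57 bytes beyond the 10-byte payload are per-tuple header and page overhead is my assumption about old PostgreSQL storage, not something the post states.)

```python
# Sanity check: do the on-disk file sizes match the row count?
# Segment sizes and row count are taken verbatim from the report above.
segment_sizes = [2147483648, 2147483648, 949608448]  # test_detail, .1, .2
rows = 78444613

total_bytes = sum(segment_sizes)
bytes_per_row = total_bytes / rows
print(f"total: {total_bytes} bytes, {bytes_per_row:.1f} bytes/row")

# A 10-byte payload (int4 + int4 + int2) plus assumed tuple-header and
# page overhead landing near 67 bytes/row is unsurprising, so the files
# themselves look consistent with the reported row count.
```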
I am not running out of swap space either...
Under top, the backend just keeps growing:
492 postgres 85 0 16980K 19076K RUN 1:43 91.67% 91.48% postgres
When it hits about 20 megs, it craps out. Swap space is 0% used, and I am
not even convinced this is using all 128 megs of RAM either. Could
something like memory fragmentation be an issue?
Does anyone have any ideas other than buying a gig of ram?
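One back-of-the-envelope way to think about the growth: if vacuum keeps some in-memory bookkeeping entry per dead or moved tuple, its memory use scales with tuple count rather than with free disk or swap. The sketch below is purely illustrative; the 16-byte entry size is an assumed figure, not taken from the PostgreSQL 6.3 source, and the 20 MB limit is just the size the backend reached before failing.

```python
# If vacuum allocated a small record per dead tuple, how many tuples
# could it track before hitting the ~20 MB the backend reached?
# ENTRY_BYTES is an assumption for illustration only.
ENTRY_BYTES = 16                 # assumed per-tuple bookkeeping cost
LIMIT_BYTES = 20 * 1024 * 1024   # backend size at which palloc failed
rows = 78444613                  # row count from the report above

max_tracked = LIMIT_BYTES // ENTRY_BYTES
print(f"{max_tracked} tuples trackable, "
      f"{100 * max_tracked / rows:.1f}% of the table")

# Even a small fraction of 78 million rows needing per-tuple state
# would exhaust a budget of this size long before RAM runs out.
```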
| | From | Date | Subject |
|---|---|---|---|
| Next Message | The Web Administrator | 1998-05-11 14:32:27 | Trying to communicate to remote host via PG |
| Previous Message | Mario Filipe | 1998-05-04 09:34:07 | |
| | From | Date | Subject |
|---|---|---|---|
| Next Message | Michael Richards | 1998-05-10 04:33:10 | A possible postgres 6.3.2 bug |
| Previous Message | Thomas G. Lockhart | 1998-05-10 02:13:48 | Re: [HACKERS] Automatic type conversion |