From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Patrick Rotsaert <patrick(dot)rotsaert(at)arrowup(dot)be>
Cc: Stephan Szabo <sszabo(at)megazone(dot)bigpanda(dot)com>, pgsql-bugs(at)postgresql(dot)org
Subject: Re: BUG #2225: Backend crash -- BIG table
Date: 2006-02-03 18:50:31
Message-ID: 28099.1138992631@sss.pgh.pa.us
Lists: pgsql-bugs
Patrick Rotsaert <patrick(dot)rotsaert(at)arrowup(dot)be> writes:
> I did a vacuum analyze, now the explain gives different results.
> pointspp=# explain select trid, count(*) from pptran group by trid
> having count(*) > 1;
>                                    QUERY PLAN
> --------------------------------------------------------------------------------
>  GroupAggregate  (cost=9842885.29..10840821.57 rows=36288592 width=18)
>    Filter: (count(*) > 1)
>    ->  Sort  (cost=9842885.29..9933606.77 rows=36288592 width=18)
>          Sort Key: trid
>          ->  Seq Scan on pptran  (cost=0.00..1039725.92 rows=36288592 width=18)
> (5 rows)
OK, that looks more reasonable.
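[Editor's note: one knob not discussed in the thread that affects how much of this sort spills to temporary files is the session's sort memory. A hypothetical sketch; the setting name is standard PostgreSQL, the value is purely illustrative and must fit in available RAM:

```sql
-- Give the sort more memory per operation before it spills to temp files.
-- 36M rows will still spill with any realistic setting, but a larger
-- work_mem produces fewer, longer sorted runs and less temp-file traffic.
SET work_mem = '256MB';

EXPLAIN
SELECT trid, count(*)
FROM pptran
GROUP BY trid
HAVING count(*) > 1;
```

This only reduces temporary-file volume; it cannot eliminate the on-disk sort for a table this size.]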
> pointspp=# select trid, count(*) from pptran group by trid having
> count(*) > 1;
> ERROR:  could not write block 661572 of temporary file: No space left on device
> HINT:  Perhaps out of disk space?

> I have 5.1GB of free disk space. If this is the cause, I have a
> problem... or is there another way to extract (and remove) duplicate rows?
Time to buy more disk :-(
regards, tom lane
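[Editor's note: the numbers in the error bear this out. With PostgreSQL's default 8 kB block size, block 661572 of the temporary sort file sits at roughly 661572 x 8192 bytes, about 5.4 GB, which is already past the 5.1 GB reported free. As for extracting duplicates another way, a common approach (not proposed in the thread) is to deduplicate in place using the ctid system column, which identifies each physical row. A hypothetical sketch using the table and column names from the thread:

```sql
-- List the duplicated trid values (same query the poster ran; it needs
-- the big sort, so it only works once enough disk is available).
SELECT trid, count(*)
FROM pptran
GROUP BY trid
HAVING count(*) > 1;

-- Remove duplicates, keeping the row with the lowest ctid in each group.
-- Without an index on trid this is a very slow nested scan on 36M rows,
-- so build one first; the index itself also needs disk space.
CREATE INDEX pptran_trid_idx ON pptran (trid);

DELETE FROM pptran a
WHERE EXISTS (
    SELECT 1
    FROM pptran b
    WHERE b.trid = a.trid
      AND b.ctid < a.ctid
);
```

The DELETE trades the giant sort for index lookups, but on a table this size it is still a long-running operation, and a VACUUM afterwards is needed to reclaim the dead rows' space.]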