Re: BUG #2225: Backend crash -- BIG table

From: Bruno Wolff III <bruno(at)wolff(dot)to>
To: Patrick Rotsaert <patrick(dot)rotsaert(at)arrowup(dot)be>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Stephan Szabo <sszabo(at)megazone(dot)bigpanda(dot)com>, pgsql-bugs(at)postgresql(dot)org
Subject: Re: BUG #2225: Backend crash -- BIG table
Date: 2006-02-04 22:06:47
Message-ID: 20060204220647.GB15063@wolff.to
Lists: pgsql-bugs

On Fri, Feb 03, 2006 at 19:38:04 +0100,
Patrick Rotsaert <patrick(dot)rotsaert(at)arrowup(dot)be> wrote:
>
> I have 5.1GB of free disk space. If this is the cause, I have a
> problem... or is there another way to extract (and remove) duplicate rows?

How about processing a subset of the ids in each pass and making multiple
passes until all of the ids have been checked? As long as you don't have to
use chunks that are too small, this might work for you.
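
For example, something along these lines (just a sketch; "big_table" and "id"
are placeholders for your actual table and key column, and the id range would
be adjusted for each pass):

    -- find the duplicated ids within one chunk, then repeat with the next range
    SELECT id, count(*)
    FROM big_table
    WHERE id >= 0 AND id < 1000000
    GROUP BY id
    HAVING count(*) > 1;

Each pass only has to sort or hash one chunk, so the temporary files should
stay within your 5.1GB of free space if the chunks are sized accordingly.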
