From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
Cc: Ron Mayer <rm_pg(at)cheapcomplexdevices(dot)com>, Greg Stark <gsstark(at)mit(dot)edu>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: remove flatfiles.c
Date: 2009-09-02 02:56:04
Message-ID: 603c8f070909011956p62d0cc7ct55af7d1b16f2ddb6@mail.gmail.com
Lists: pgsql-hackers
On Tue, Sep 1, 2009 at 9:29 PM, Alvaro Herrera <alvherre(at)commandprompt(dot)com> wrote:
> Ron Mayer wrote:
>> Greg Stark wrote:
>> >
>> > That's what I want to believe. But picture if you have, say a
>> > 1-terabyte table which is 50% dead tuples and you don't have a spare
>> > 1-terabytes to rewrite the whole table.
>>
>> Could one hypothetically do
>> update bigtable set pk = pk where ctid in (select ctid from bigtable order by ctid desc limit 100);
>> vacuum;
>> and repeat until max(ctid) is small enough?
>
> I remember Hannu Krosing said they used something like that to shrink
> really bloated tables. Maybe we should try to explicitly support a
> mechanism that worked in that fashion. I think I tried it at some point
> and found that the problem with it was that ctid was too limited in what
> it was able to do.
I think a way to incrementally shrink large tables would be enormously
beneficial. Maybe vacuum could try to do a bit of that each time it
runs.
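
For concreteness, a minimal sketch of one pass of the batch-update-then-vacuum
loop Ron describes, reusing the table and column names from his quoted SQL
(bigtable, pk). It assumes earlier pages already have recorded free space for
the rewritten rows to land in; you would repeat the pass until the reported
size stops dropping:

    -- Rewrite the 100 highest-ctid rows so their new versions can land in
    -- free space earlier in the table.
    UPDATE bigtable SET pk = pk
    WHERE ctid IN (SELECT ctid FROM bigtable ORDER BY ctid DESC LIMIT 100);

    -- Reclaim the old versions; if the trailing pages are now empty,
    -- vacuum can truncate them off the end of the relation.
    VACUUM bigtable;

    -- Check progress between passes; stop once this stops shrinking.
    SELECT pg_relation_size('bigtable');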
...Robert