From: Karl Wright <kwright(at)metacarta(dot)com>
To: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
Cc: Francisco Reyes <lists(at)stringsutils(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Performance query about large tables, lots of concurrent access
Date: 2007-06-20 18:03:28
Message-ID: 46796BF0.8010302@metacarta.com
Lists: pgsql-performance
Alvaro Herrera wrote:
> Karl Wright wrote:
>> Alvaro Herrera wrote:
>>> Karl Wright wrote:
>>>
>>>> (b) the performance of individual queries had already degraded
>>>> significantly in the same manner as what I'd seen before.
>>> You didn't answer whether you had smaller, more frequently updated
>>> tables that need more vacuuming. This comment makes me think you do. I
>>> think what you should be looking at is whether you can forget vacuuming
>>> the whole database in one go, and make it more granular.
>> I am afraid that I did answer this. My largest tables are the ones
>> continually being updated. The smaller ones are updated only infrequently.
>
> Can you afford to vacuum them in parallel?
>
Hmm, interesting question. If VACUUM is disk-limited, then vacuuming in parallel probably wouldn't help unless I could somehow move the various tables onto different disks. Let me think about whether that might be possible.
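Alvaro's suggestion of vacuuming tables in parallel could be scripted by launching one `vacuumdb --table` invocation per table concurrently. A minimal sketch, assuming a hypothetical database `mydb` and placeholder table names (the `dry_run` flag makes it print the commands rather than execute them, so the plan can be reviewed first):

```python
# Sketch: one VACUUM per table, run concurrently. The database name
# "mydb" and the table names below are placeholder assumptions, not
# from the thread. Parallelism only helps if the per-table runs are
# not all bound by the same disk.
import subprocess
from concurrent.futures import ThreadPoolExecutor

TABLES = ["big_table_a", "big_table_b", "small_table"]

def vacuum(table, dry_run=True):
    cmd = ["vacuumdb", "--dbname=mydb", "--verbose", f"--table={table}"]
    if dry_run:
        # In this sketch, just report the command that would be run.
        return " ".join(cmd)
    # For real use: run the command and raise on failure.
    return subprocess.run(cmd, check=True)

with ThreadPoolExecutor(max_workers=len(TABLES)) as pool:
    for line in pool.map(vacuum, TABLES):
        print(line)
```

A thread pool is enough here because each worker just blocks on an external `vacuumdb` process; the actual vacuuming work happens in the server backends.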
Karl