From: Ed Loehr <eloehr(at)austin(dot)rr(dot)com>
To: pghackers <pgsql-hackers(at)postgresql(dot)org>
Subject: vacuum analyze feedback
Date: 2000-05-25 15:51:23
Message-ID: 392D4BFB.4ADE228C@austin.rr.com
Lists: pgsql-hackers
I know this topic has been rehashed a million times, but I just wanted to
add one data point. I have a database (150 tables, fewer than 20K tuples
in any one table) which I 'vacuum analyze' *HOURLY*, blocking all access,
and I still see frequent situations where my query times bloat by roughly
300% (i.e., about 4 times slower) in the intervening time between vacuums.
All this is to say that I think a more strategic implementation of the
functionality of vacuum analyze (specifically, non-batched, automated,
on-the-fly vacuuming/analyzing) would be a major "value add". I haven't
educated myself as to the history of it, but I do wonder why the
performance focus is not on this. I'd imagine it would be a performance
hit (which argues for making it optional), but I'd gladly take a 10%
performance hit over the current highly undesirable degradation. You
could do a whole lotta optimization on the planner/parser/executor and
not get close to the end-user-perceptible gains from fixing this
problem...
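[For context, the hourly batch vacuum described above would typically be driven by a cron entry along these lines; this is a minimal sketch, and the database name "mydb" and user "postgres" are placeholders, not taken from the original message:]

```shell
# crontab entry: run VACUUM ANALYZE at the top of every hour.
# "postgres" and "mydb" are assumed placeholder names.
0 * * * *  psql -U postgres -d mydb -c 'VACUUM ANALYZE;'
```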
Regards,
Ed Loehr