From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: The Hermit Hacker <scrappy(at)hub(dot)org>
Cc: hackers(at)postgreSQL(dot)org
Subject: Re: AW: [HACKERS] Really slow query on 6.4.2
Date: 1999-03-25 19:34:29
Message-ID: 15024.922390469@sss.pgh.pa.us
Lists: pgsql-hackers

The Hermit Hacker <scrappy(at)hub(dot)org> writes:
> I'm not sure what all is contained in the stats, but the easiest one, I
> think, to have done automagically is table sizes...add a tuple, update the
> table's number of rows automatically. If that number gets "off", at
> least it will be more reasonable than not doing anything...no?
The number of tuples is definitely the most important stat; updating it
automatically would make the optimizer work better. The stuff in
pg_statistics is not nearly as important.

The only objection I can think of to auto-updating reltuples is that
it'd mean additional computation (to access and rewrite the pg_class
entry) and additional disk I/O (to write back pg_class) for every INSERT
and DELETE. There's also a potential problem of multiple backends all
trying to write pg_class and being delayed or even deadlocked because of
it. (Perhaps the MVCC code will help here.)

I'm not convinced that accurate stats are worth that cost, but I don't
know how big the cost would be anyway. Anyone have a feel for it?
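For reference, the stat in question is the reltuples column of the table's
pg_class row, which today is refreshed only by VACUUM/ANALYZE rather than on
every INSERT or DELETE. A quick way to inspect it (the table name here is
just an illustrative placeholder):

```sql
-- reltuples is the optimizer's row-count estimate; it drifts between
-- VACUUM/ANALYZE runs precisely because INSERT and DELETE do not touch it.
SELECT relname, reltuples
FROM pg_class
WHERE relname = 'mytable';  -- 'mytable' is a placeholder table name
```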
regards, tom lane