From: Manfred Koizar <mkoi-pg(at)aon(dot)at>
To: "Mooney, Ryan" <ryan(dot)mooney(at)pnl(dot)gov>
Subject: Re: Large DB
On Wed, 31 Mar 2004 10:08:26 -0800, "Mooney, Ryan" <ryan(dot)mooney(at)pnl(dot)gov> wrote:
>I haven't run ANALYZE on this table in a while. After about 50-60M rows
>it didn't seem to change the query plan at all and since there were
>never any deletes/updates it seemed like it wasn't making much/any
>difference (should have been no pages to reclaim).
Reclaiming pages is not the job of ANALYSE; VACUUM does that.
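The division of labour can be sketched as follows (a hedged example; tp3 is the table from this thread, and the comments describe the commands' general behaviour, not anything specific to Ryan's setup):

```sql
-- VACUUM reclaims space held by dead (deleted/updated) tuples and
-- keeps relpages honest.  ANALYSE samples live rows and refreshes
-- the planner statistics (reltuples, column histograms).
-- Neither command implies the other.
VACUUM tp3;
ANALYZE tp3;
```

On an insert-only table VACUUM finds little to reclaim, but skipping ANALYSE still leaves the planner with stale row counts and value ranges.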
> That may be an
>invalid assumption though.
Might be a valid assumption as well -- if you're lucky. But do you want
to depend on luck? E.g., 75% of today's rows contain timestamps that
are greater than what the planner believes to be the maximum.
No VACUUM, no ANALYSE, no REINDEX. This explains why the planner thinks
there are only 4M pages, which gives 640 bytes/row if there were 50M
rows at that time. OTOH the EXPLAIN shows 290M rows for the seq scan.
Something doesn't fit together here.
Hackers, what could update reltuples, but not relpages?
Or, Ryan, is it possible that you already had 290M rows when you ran
ANALYSE and you have more than 1G rows today?
BTW, ANALYSE is basically a constant time operation.
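What the planner currently believes about the table can be read straight out of the system catalog (a sketch; tp3 is the table name from the thread):

```sql
-- relpages and reltuples are the estimates Manfred is reasoning
-- about above: pages on disk and live row count as of the last
-- VACUUM/ANALYSE.  Stale values here mean stale plans.
SELECT relname, relpages, reltuples
FROM pg_class
WHERE relname = 'tp3';
```

Comparing reltuples/relpages against the row counts EXPLAIN reports is how the "4M pages vs 290M rows" mismatch above was spotted.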
>Here is an explain on the query:
>=> explain select point, avg(pvalue) as avg from tp3 where host in
This tells us one half of the story.
EXPLAIN ANALYSE SELECT ...
would tell us the other half, too.
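The difference is that EXPLAIN ANALYSE actually executes the query and reports real row counts and timings next to the planner's estimates. A sketch, with the rest of the query elided as in the original message:

```sql
-- Plain EXPLAIN shows only the chosen plan and its estimates;
-- EXPLAIN ANALYZE runs the query, so estimated vs actual rows
-- can be compared node by node.
EXPLAIN ANALYZE
SELECT point, avg(pvalue) AS avg FROM tp3 WHERE host IN (...);
```

Large gaps between "rows=" (estimated) and "actual rows=" in the output are the usual sign that statistics are out of date.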