From: Dawid Kuroczko <qnex42(at)gmail(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: When to bump up statistics?
Date: 2004-11-19 13:59:48
Message-ID: 758d5e7f04111905594cf9c9f8@mail.gmail.com
Lists: pgsql-performance

ALTER TABLE foo ALTER COLUMN bar SET STATISTICS n; .....

I wonder what the implications of using this statement are.
I know that with, say, n=100, ANALYZE will take more time,
pg_statistic will be bigger and the planner will take longer,
but on the other hand it will make better decisions... Etc, etc.
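
To make the question concrete, here is roughly what I mean
(table and column names are made up, just to show the mechanics):

  ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 100;
  ANALYZE orders;   -- re-collects the sample with the new target

  -- pg_stats is the human-readable view over pg_statistic:
  SELECT n_distinct, most_common_vals, most_common_freqs, histogram_bounds
    FROM pg_stats
   WHERE tablename = 'orders' AND attname = 'customer_id';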

I wonder, however, when it is most useful to bump it up.
Please tell me what you think about it:

Is bumping up statistics only useful for indexed columns?

When is it most useful/beneficial to bump them up (a quick way
to check which case a given column falls into is sketched after
the list):

1) huge table with huge number of distinct values (_almost_
unique ;))

2) huge table with relatively equally distributed values
(like each value appearing in, say, 30-50 rows).

3) huge table with unequally distributed values (some
values are in 1-5 rows, some are in 1000-5000 rows).

4) huge table with a small number of values (around ~100
distinct values, equally or unequally distributed).

5) boolean column.
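
As mentioned above, a quick (if expensive) way to see which of
these cases a column falls into, reusing the placeholder names
foo/bar from the ALTER TABLE example:

  SELECT count(DISTINCT bar) AS distinct_vals, count(*) AS total_rows
    FROM foo;

  -- how skewed the distribution is:
  SELECT bar, count(*) AS rows_per_value
    FROM foo
   GROUP BY bar
   ORDER BY count(*) DESC
   LIMIT 20;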

I think SET STATISTICS 100 is very useful for the case with
unequally distributed values, but I wonder about the other
cases. And as a side note -- what are reasonable bounds for
the statistics target (between 10 and 100?)

What are the runtime implications of setting statistics
too large -- how much can it affect queries?
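
For what it's worth, a rough way I can think of to gauge that
(placeholder names again; \timing is psql's client-side timer,
so it includes the network round-trip):

  \timing
  ALTER TABLE foo ALTER COLUMN bar SET STATISTICS 10;
  ANALYZE foo;
  EXPLAIN SELECT * FROM foo WHERE bar = 42;   -- plans only, does not execute

  ALTER TABLE foo ALTER COLUMN bar SET STATISTICS 100;
  ANALYZE foo;
  EXPLAIN SELECT * FROM foo WHERE bar = 42;

comparing both the EXPLAIN timings (planning overhead) and the
row estimates (plan quality).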

And finally -- how do other RDBMSs and RDBMS-likes deal
with this issue? :)

Regards,
Dawid
