On Fri, 30 May 2003, Yusuf wrote:
> In the application that I'm working on, I have a query that'll be about
> 60% faster if I disable sequential scans, forcing it to use my index.
> Is it bad practice to disable sequential scans (set
> enable_seqscan=false), run my query, then re-enable sequential scans,
> whenever I'm running this query? Why?
Setting enable_seqscan to off is more of a troubleshooting tool than a
tuning tool, although sometimes it's the only tuning knob that works.
Once you've determined that the database is picking the wrong plan when
you turn seqscan back on, you need to figure out how to convince the
database to use the right plan more often.
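For example, a troubleshooting session might look like this (the table,
index, and query here are made up for illustration):

```sql
-- See what plan the planner picks on its own.
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;

-- Now see what plan it is forced into with seqscans disabled.
SET enable_seqscan = false;
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;

-- Turn seqscans back on before doing anything else.
SET enable_seqscan = true;
```

If the forced plan is genuinely faster, that tells you the cost estimates
are off, and the real fix is in the cost parameters below.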
The best parameters to change and see how they affect this are the
*cost* parameters and the effective cache size.
Running show all; will display them; the ones we're interested in are these:
NOTICE: effective_cache_size is 100000
NOTICE: random_page_cost is 1
NOTICE: cpu_tuple_cost is 0.01
NOTICE: cpu_index_tuple_cost is 0.0001
NOTICE: cpu_operator_cost is 0.0025
To change them for one session, just use the set command. To make a
change the permanent default, edit the $PGDATA/postgresql.conf file.
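As a sketch, trying a value for one session looks like this (the value 2
is just an example, not a recommendation for every workload):

```sql
-- Session-only change; lasts until you disconnect.
SET random_page_cost = 2;
```

Once you've settled on a value, put the equivalent line
(random_page_cost = 2) in $PGDATA/postgresql.conf to make it the default.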
effective_cache_size tells the planner approximately how big the kernel's
file-system cache is. On my machine it's about 800 meg. It's measured in
8k blocks, so 100,000 * 8k ~ 800 meg. The smaller this is, the more the
planner assumes the database will have to go to the hard drive, and
therefore the more likely it is to pick a seqscan if the other numbers
point that way.
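The arithmetic behind the setting above, assuming you know roughly how
much RAM your kernel uses for file caching:

```sql
-- effective_cache_size is measured in 8k blocks:
--   800 MB of kernel cache / 8 kB per block = 100000 blocks
SET effective_cache_size = 100000;
```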
random_page_cost tells the planner how much more a random page access
costs than a sequential one. The default is 4. Most systems seem to work
well with values from 1 to 2.
Lowering cpu_index_tuple_cost also favors index scans.
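Putting it together, a hedged starting point might look like this (the
values are illustrative; test against your own queries):

```sql
SET random_page_cost = 2;           -- down from the default of 4
SET cpu_index_tuple_cost = 0.0001;  -- lower values favor index scans
-- Re-run EXPLAIN ANALYZE on the problem query to see whether the
-- planner now picks the index scan without forcing it.
```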