From: Simon Riggs <simon(at)2ndquadrant(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: SeqScan costs
Date: 2008-08-12 20:54:43
Message-ID: 1218574483.5343.180.camel@ebony.2ndQuadrant
Lists: pgsql-hackers
On Tue, 2008-08-12 at 15:46 -0400, Tom Lane wrote:
> Simon Riggs <simon(at)2ndquadrant(dot)com> writes:
> > Proposal: Make the first block of a seq scan cost random_page_cost, then
> > after that every additional block costs seq_page_cost.
>
> This is only going to matter for a table of 1 block (or at least very
> few blocks), and for such a table it's highly likely that it's in RAM
> anyway. So I'm unconvinced that the proposed change represents a
> better model of reality.
The access cost should be the same for a 1 block table, whether it's on
disk or in memory.
> Perhaps more to the point, you haven't provided any actual evidence
> that this is a better approach. I'm disinclined to tinker with the
> fundamental cost models on the basis of handwaving.
I've written a simple test suite:

psql -f seq.sql -v numblocks=x -v pkval=y -v filler=z

to investigate various costs and elapsed times.
AFAICS the cost cross-over is much higher than the actual elapsed time
cross-over for both narrow and wide tables.
That's why using SET enable_seqscan=off helps performance in many cases,
and why people reduce random_page_cost to force index selection.
--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Training, Services and Support
Attachment: seq.sql (text/x-sql, 998 bytes)