Bruce Momjian <bruce(at)momjian(dot)us> writes:
> On Mon, Jan 14, 2013 at 12:56:37PM -0500, Tom Lane wrote:
>> Remember also that "enable_seqscan=off" merely adds 1e10 to the
>> estimated cost of seqscans. For sufficiently large tables this is not
>> exactly a hard disable, just a thumb on the scales. But I don't know
>> what your definition of "extremely large indexes" is.
> Wow, do we need to bump up that value based on larger modern hardware?

I'm disinclined to bump it up very much.  If it's more than about 1e16,
ordinary cost contributions would disappear into float8 roundoff error,
causing the planner to be making choices that are utterly random except
for minimizing the number of seqscans.  Even at 1e14 or so you'd be
losing a lot of finer-grain distinctions.  What we want is for the
behavior to be "minimize the number of seqscans but plan normally
otherwise", so those other cost contributions are still important.
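The roundoff argument is easy to check directly: float8 is an IEEE 754 double with a 53-bit significand (roughly 15-16 decimal digits), so at 1e16 the spacing between representable values is about 2.0, and a small per-tuple cost delta vanishes when added to the penalty:

```python
# float8 roundoff at candidate disable-cost magnitudes.
# A small cost distinction (illustrative value) between two plans:
delta = 0.01

# At the current disable_cost of 1e10, the distinction survives,
# so penalized plans are still ranked by their ordinary costs:
print(1e10 + delta > 1e10)  # True

# At 1e16, the addition rounds away entirely -- two plans differing
# only by delta would compare as exactly equal, and the planner's
# choice among them would be effectively arbitrary:
print(1e16 + delta > 1e16)  # False
```

At 1e14 the ulp is already about 0.016, which is why even that value would blur finer-grained cost distinctions without erasing them outright.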

Anyway, at this point we're merely speculating about what's behind
Robert's report --- I'd want to see some concrete real-world examples
before changing anything.

			regards, tom lane