Stephen Frost <sfrost(at)snowman(dot)net> writes:
> What I think is happening here is that PG is pushing down that filter
> (not typically a bad thing..), but with that condition, it's going to
> scan the index until it finds a match for that filter before returning
> back up only to have that result cut out due to the limit.
Yeah, it's spending quite a lot of time finding the first matching row
in each child table. I'm curious why that is though; are the child
tables not set up with nonoverlapping firstloadtime ranges?
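[For readers following along: a minimal sketch of what non-overlapping firstloadtime ranges on the child tables would look like. The table names, the parent table `loads`, and the date boundaries here are illustrative assumptions, not taken from the original poster's schema.]

```sql
-- Hypothetical child tables with non-overlapping CHECK ranges on
-- firstloadtime; with constraint_exclusion enabled, the planner can
-- skip any child whose range cannot satisfy the query's condition.
CREATE TABLE loads_2011_08 (
    CHECK (firstloadtime >= '2011-08-01' AND firstloadtime < '2011-09-01')
) INHERITS (loads);

CREATE TABLE loads_2011_09 (
    CHECK (firstloadtime >= '2011-09-01' AND firstloadtime < '2011-10-01')
) INHERITS (loads);

-- Default in contemporary releases; prunes children for queries on
-- inheritance/partitioned tables only.
SET constraint_exclusion = partition;
```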
> What would be great is if PG would realize that the CHECK constraints
> prevent earlier records from being in these earlier tables,
The explain shows that that isn't the case, because it *is* finding at
least one candidate row in each table. It's just running quite far into
the firstloadtime sequence to do it.
If you're stuck with this table arrangement, one thing that would help
is a two-column index on (host_guid, firstloadtime) on each child table.
That would match the search condition exactly, and so reduce the cost
to find the first matching row to nearly nil. Whether this query's
speed is important enough to justify maintaining such an index is a
question I can't answer for you.
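[For readers following along: a sketch of the two-column index being suggested. The index name and the child-table name `loads_2011_09` are assumptions for illustration; the column names come from the thread.]

```sql
-- One such index per child table; the leading host_guid column lets the
-- scan jump straight to the matching guid, and firstloadtime ordering
-- within it means the first row returned already satisfies the LIMIT.
CREATE INDEX loads_2011_09_guid_loadtime_idx
    ON loads_2011_09 (host_guid, firstloadtime);
```

A query shaped like `WHERE host_guid = ... ORDER BY firstloadtime LIMIT n` can then read its first matching row directly from the index rather than scanning into the firstloadtime sequence.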
regards, tom lane