From: | Greg Stark <gsstark(at)mit(dot)edu> |
---|---|
To: | Vincenzo Romano <vincenzo(dot)romano(at)notorand(dot)it> |
Cc: | Simon Riggs <simon(at)2ndquadrant(dot)com>, pgsql-hackers(at)postgresql(dot)org |
Subject: | Re: On Scalability |
Date: | 2010-10-08 16:10:07 |
Message-ID: | AANLkTimvi1ou4mSbNzz5j10Er7vO7gscNUq9aMetSqzL@mail.gmail.com |
Lists: | pgsql-hackers pgsql-performance |
On Fri, Oct 8, 2010 at 3:20 AM, Vincenzo Romano
<vincenzo(dot)romano(at)notorand(dot)it> wrote:
> Do the same conclusions apply to partial indexes?
> I mean, if I have a large number (n>=100 or n>=1000) of partial indexes
> on a single very large table (m>=10**12), how good is the planner to choose the
> right indexes to plan a query?
> Has also this algorithm superlinear complexity?
No, it's also linear. The planner needs to look at every partial index and
check whether it's a candidate for your query. Actually, that's
true for regular indexes as well, but for partial indexes there's the
extra step of proving that the index includes all the rows your query
needs, which is not a cheap step.
The size of the table isn't relevant, though, except inasmuch as the
savings when actually running the query will be larger for larger
tables, so it may be worth spending more time planning queries on large
tables.
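To illustrate the implication check (a minimal sketch; the table and
index names here are hypothetical, not from the thread): before the
planner can use a partial index, it must prove that the index's WHERE
predicate is implied by the query's WHERE clause.

```sql
-- Hypothetical partial index covering only recent rows.
CREATE INDEX orders_recent_idx ON orders (customer_id)
    WHERE created_at >= '2010-01-01';

-- Usable: the planner can prove that
-- created_at >= '2010-06-01' implies created_at >= '2010-01-01'.
SELECT *
FROM orders
WHERE customer_id = 42
  AND created_at >= '2010-06-01';

-- Not usable: nothing in this query implies the index predicate,
-- so the planner must consider other plans.
SELECT *
FROM orders
WHERE customer_id = 42;
```

This proof step runs once per partial index considered, which is why
the cost stays linear in the number of indexes but with a larger
per-index constant than for regular indexes.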
--
greg