From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
---|---|
To: | Alvaro Herrera <alvherre(at)commandprompt(dot)com> |
Cc: | Leonardo Francalanci <m_lists(at)yahoo(dot)it>, Boszormenyi Zoltan <zb(at)cybertec(dot)at>, pgsql-hackers Hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: plan time of MASSIVE partitioning ... |
Date: | 2010-10-29 19:37:39 |
Message-ID: | 6681.1288381059@sss.pgh.pa.us |
Lists: | pgsql-hackers |
Alvaro Herrera <alvherre(at)commandprompt(dot)com> writes:
> Excerpts from Tom Lane's message of vie oct 29 14:15:55 -0300 2010:
>> samples % symbol name
>> 447433 47.1553 get_tabstat_entry
> Is there a reason for keeping the pgstat info in plain lists?
Yeah: anything else loses for small numbers of tables per query, which
is the normal case. I'd guess you'd need ~100 tables touched in
a single transaction before a hashtable is even worth thinking about.
We could possibly adopt a solution similar to the planner's approach for
joinrels: start with a simple list, and switch over to hashing if the
list gets too long. But I'm really doubtful that it's worth the code
space. Even with Zoltan's 500-or-so-table case, this wasn't on the
radar screen.
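[Editor's note: the hybrid strategy described above — a linear list for the common small case, with a one-time switch to hashing once the list grows past a threshold — can be sketched as below. This is an illustrative sketch only, not PostgreSQL's actual pgstat or joinrel code; all names (`StatEntry`, `StatTable`, `get_stat_entry`) and the fixed-size arena are hypothetical, and the 100-entry threshold is taken from the guess in the email.]

```c
#include <assert.h>
#include <string.h>

/* Hypothetical per-table stats entry, loosely modeled on the idea of
 * per-transaction table counters; field names are illustrative. */
typedef struct StatEntry
{
    unsigned    table_oid;
    long        tuples_inserted;
    struct StatEntry *next;     /* hash-bucket chain, used after the switch */
} StatEntry;

#define LIST_THRESHOLD 100      /* switch point; the email guesses ~100 */
#define NBUCKETS 256
#define MAX_ENTRIES 1024        /* fixed arena, just to keep the sketch short */

typedef struct StatTable
{
    StatEntry   entries[MAX_ENTRIES];
    int         nentries;
    StatEntry  *buckets[NBUCKETS];  /* all NULL until we exceed the threshold */
    int         hashed;
} StatTable;

static unsigned
hash_oid(unsigned oid)
{
    return oid * 2654435761u % NBUCKETS;    /* Knuth multiplicative hash */
}

/*
 * Find-or-create an entry: linear scan while the table is small,
 * hashed lookup once it has grown past LIST_THRESHOLD.
 */
StatEntry *
get_stat_entry(StatTable *t, unsigned oid)
{
    if (!t->hashed)
    {
        for (int i = 0; i < t->nentries; i++)
            if (t->entries[i].table_oid == oid)
                return &t->entries[i];

        /* Not found; if adding would cross the threshold, build the
         * hash buckets over the existing entries and switch modes. */
        if (t->nentries + 1 > LIST_THRESHOLD)
        {
            for (int i = 0; i < t->nentries; i++)
            {
                unsigned    h = hash_oid(t->entries[i].table_oid);

                t->entries[i].next = t->buckets[h];
                t->buckets[h] = &t->entries[i];
            }
            t->hashed = 1;
        }
    }

    if (t->hashed)
    {
        unsigned    h = hash_oid(oid);
        StatEntry  *e;

        for (e = t->buckets[h]; e != NULL; e = e->next)
            if (e->table_oid == oid)
                return e;

        /* Not found: append to the arena and link into its bucket. */
        e = &t->entries[t->nentries++];
        e->table_oid = oid;
        e->tuples_inserted = 0;
        e->next = t->buckets[h];
        t->buckets[h] = e;
        return e;
    }

    /* Still in list mode: append a fresh entry. */
    StatEntry  *e = &t->entries[t->nentries++];

    e->table_oid = oid;
    e->tuples_inserted = 0;
    e->next = NULL;
    return e;
}
```

In list mode each lookup is an O(n) scan, which wins for the handful-of-tables case because there is no hashing overhead or setup cost; only when a transaction touches enough tables to make the scan expensive does the structure pay the one-time cost of building buckets. Tom's point is that this extra code may not be worth it when even ~500 tables did not make `get_tabstat_entry` a bottleneck before.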
regards, tom lane