From: Craig James <craig_james(at)emolecules(dot)com>
To: Brian Herlihy <btherl(at)yahoo(dot)com(dot)au>
Cc: Postgresql Performance <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Query plan issues - volatile tables
Date: 2009-06-04 16:04:12
Message-ID: 4A27F07C.3050904@emolecules.com
Lists: pgsql-performance
Brian Herlihy wrote:
> We have a problem with some of our query plans. One of our
> tables is quite volatile, but postgres always uses the last
> statistics snapshot from the last time it was analyzed for query
> planning. Is there a way to tell postgres that it should not
> trust the statistics for this table? Basically we want it to
> assume that there may be 0, 1 or 100,000 entries coming out from
> a query on that table at any time, and that it should not make
> any assumptions.
I had a similar problem, and just changed my application to do an analyze either just before the query, or just after a major update to the table. Analyze is very fast, almost always orders of magnitude faster than the time lost to a poor query plan.
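As a rough sketch of that approach (table, column, and file names here are hypothetical, not from the original thread):

```sql
-- Hypothetical example: refresh planner statistics right after a
-- major change to the volatile table, so subsequent queries are
-- planned against row counts close to reality.
BEGIN;
COPY volatile_table FROM '/path/to/load.csv' CSV;  -- big bulk load
COMMIT;

ANALYZE volatile_table;  -- fast; rebuilds the statistics snapshot

-- Queries issued after this point plan with fresh statistics:
SELECT * FROM volatile_table WHERE some_column = 42;
```

The same ANALYZE can instead be issued by the application immediately before a query whose plan is known to be sensitive to the table's current size.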
Craig