Decibel! <decibel(at)decibel(dot)org> writes:
> On Nov 25, 2008, at 7:06 PM, Gregory Stark wrote:
>>> The thought occurs to me that we're looking at this from the wrong side of
>>> coin. I've never, ever seen query plan time pose a problem with Postgres,
>>> without using prepared statements.
>> I certainly have seen plan times be a problem. I wonder if you have too and
>> just didn't realize it. With a default_stats_target of 1000 you'll have
>> hundreds of kilobytes of data to slog through to plan a moderately complex
>> query with a few text columns. Forget about prepared queries, I've seen plan
>> times be unusable for ad-hoc interactive queries before.
> Can you provide any examples?
At the time I couldn't understand what the problem was. In retrospect I'm
certain this was the problem. I had a situation where just running EXPLAIN
took 5-10 seconds. I suspect I had some very large toasted arrays which were
having to be detoasted each time. IIRC I actually reloaded the database with
pg_dump and the problem went away.
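FWIW, an easy way to see plan time in isolation (table and column names here are made up for illustration) is that EXPLAIN without ANALYZE plans the query but doesn't run it, so with \timing on in psql the reported time is essentially parse + plan, including the cost of reading the statistics:

```sql
-- \timing prints wall-clock time per statement in psql.
-- EXPLAIN plans but does not execute, so the time shown is
-- dominated by planning (including detoasting pg_statistic rows).
\timing on
EXPLAIN SELECT * FROM some_table WHERE some_text_col = 'foo';
```

If that alone takes seconds, the planner, not the executor, is where the time is going.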
> And no, I've never seen a system where a few milliseconds of plan time
> difference would pose a problem. I'm not saying they don't exist, only that I
> haven't seen them (including 2 years working as a consultant).
How many milliseconds does it take to read a few hundred kilobytes of toasted,
compressed data? That can easily be more data than the actual query itself is
going to read. Now ideally this will all be cached, but the larger the
statistics are the less likely that is.
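You can get a rough number for how much statistics data the planner has to slog through for a given table (table and column names below are hypothetical) by summing the stored size of its pg_statistic rows; and if that turns out to be the bottleneck, the per-column target can be dialed back:

```sql
-- Total stored (compressed/toasted) size of the planner statistics
-- for one table. pg_column_size() reports on-disk row size.
SELECT sum(pg_column_size(s.*)) AS stats_bytes
FROM pg_statistic s
WHERE s.starelid = 'my_big_table'::regclass;

-- Reduce the statistics target for an offending column, then re-analyze:
ALTER TABLE my_big_table ALTER COLUMN some_text_col SET STATISTICS 100;
ANALYZE my_big_table;
```

This is just a sketch; the sizes you see will depend heavily on the column types and the statistics target in effect when ANALYZE last ran.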
Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training!