
Re: Simple postgresql.conf wizard

From: Gregory Stark <stark(at)enterprisedb(dot)com>
To: Decibel! <decibel(at)decibel(dot)org>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "Dann Corbit" <DCorbit(at)connx(dot)com>, "Robert Haas" <robertmhaas(at)gmail(dot)com>, "Bruce Momjian" <bruce(at)momjian(dot)us>, "Mark Wong" <markwkm(at)gmail(dot)com>, "Heikki Linnakangas" <heikki(dot)linnakangas(at)enterprisedb(dot)com>, "Josh Berkus" <josh(at)agliodbs(dot)com>, "Greg Smith" <gsmith(at)gregsmith(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Simple postgresql.conf wizard
Date: 2008-11-26 19:47:05
Lists: pgsql-hackers
Decibel! <decibel(at)decibel(dot)org> writes:

> On Nov 25, 2008, at 7:06 PM, Gregory Stark wrote:
>>> The thought occurs to me that we're looking at this from the wrong side of
>>> the coin. I've never, ever seen query plan time pose a problem with Postgres,
>>> even without using prepared statements.
>> I certainly have seen plan times be a problem. I wonder if you have too and
>> just didn't realize it. With a default_stats_target of 1000 you'll have
>> hundreds of kilobytes of data to slog through to plan a moderately complex
>> query with a few text columns. Forget about prepared queries, I've seen plan
>> times be unusable for ad-hoc interactive queries before.
> Can you provide any examples?

At the time I couldn't understand what the problem was. In retrospect I'm
certain this was the problem. I had a situation where just running EXPLAIN
took 5-10 seconds. I suspect I had some very large toasted arrays which were
having to be detoasted each time. IIRC I actually reloaded the database with
pg_dump and the problem went away.
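For what it's worth, one way to check whether a table's statistics have ballooned like this is to look at the size of its pg_statistic entries directly. A rough sketch ("my_table" is a stand-in name; which anyarray slot holds the MCV list versus the histogram depends on the stakindN columns, so treat the labels below as approximate):

```sql
-- Approximate per-column size of the planner statistics for one table.
-- Large values here mean the planner must detoast (and possibly
-- decompress) this much data each time it estimates selectivity.
SELECT a.attname,
       s.stadistinct,
       pg_column_size(s.stavalues1)  AS slot1_values_bytes,
       pg_column_size(s.stanumbers1) AS slot1_numbers_bytes
FROM pg_statistic s
JOIN pg_attribute a
  ON a.attrelid = s.starelid AND a.attnum = s.staattnum
WHERE s.starelid = 'my_table'::regclass   -- hypothetical table name
ORDER BY slot1_values_bytes DESC NULLS LAST;
```

Text columns with a high statistics target tend to dominate, since the MCV and histogram arrays store actual column values.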

> And no, I've never seen a system where a few milliseconds of plan time
> difference would pose a problem. I'm not saying they don't exist, only that I
> haven't seen them (including 2 years working as a consultant).

How many milliseconds does it take to read a few hundred kilobytes of toasted,
compressed data? That can easily be more data than the actual query is going
to read.

Now, ideally this will all be cached, but the larger the data set, the less
likely that is.
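To put a number on plan time itself, the crude way is to time a bare EXPLAIN, which plans the query without executing it. A sketch using psql (\timing is a standard psql meta-command; the table and column names are stand-ins):

```sql
\timing on
EXPLAIN SELECT * FROM my_table WHERE some_text_col = 'x';
-- The elapsed time psql reports is almost entirely planning overhead,
-- including any detoasting of statistics, since nothing is executed.
```

If that time drops sharply on a second run, the statistics were probably being read cold the first time.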

  Gregory Stark
  Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training!

