
The science of optimization in practical terms?

From: "Joshua D(dot) Drake" <jd(at)commandprompt(dot)com>
To: pgsql-hackers(at)postgresql(dot)org
Subject: The science of optimization in practical terms?
Date: 2009-02-13 00:06:31
Lists: pgsql-hackers

I was helping a customer today with what is becoming a common theme in
a lot of the work we do. Basically, "It was working fine until recently."
Now, 90% of the time it is as simple as running an ANALYZE VERBOSE,
picking apart relations that aren't being maintained properly, and
adjusting autovacuum or vacuum appropriately. If it isn't that, it is
usually something simple like increasing effective_cache_size.
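The maintenance pass described above usually looks something like the following sketch. The table name and the autovacuum thresholds here are hypothetical, chosen only to illustrate the idea of tightening per-table settings:

```sql
-- Refresh planner statistics and report per-relation detail,
-- so stale or bloated relations stand out in the output.
ANALYZE VERBOSE;

-- Tighten autovacuum for a (hypothetical) heavily-churned table,
-- so dead tuples are reclaimed before bloat distorts plan estimates.
ALTER TABLE busy_table SET (
    autovacuum_vacuum_scale_factor  = 0.05,
    autovacuum_analyze_scale_factor = 0.02
);
```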

However, in recent times I have found that increasing cpu_tuple_cost,
cpu_operator_cost and cpu_index_tuple_cost can be very useful. This is
always in the scenario of "queries were running fine for months and
then all of a sudden they are not." It is also always on systems that
we are already maintaining and thus (in theory) are in good shape.

So my question is, what is the science, in practical terms, behind those
parameters? Normally I would just accept it as another PostgreSQL
idiosyncrasy, but the performance differences I am talking about are
large. After raising cpu_tuple_cost and cpu_operator_cost to 0.5 today,
two queries dropped from 10 seconds and 15 seconds to 2 seconds and
~900 ms respectively.
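For anyone wanting to reproduce this, the change can be tried per-session before touching postgresql.conf. The SELECT below is a stand-in for the affected query; the 0.5 values are the ones quoted above (the shipped defaults are cpu_tuple_cost = 0.01 and cpu_operator_cost = 0.0025):

```sql
-- Raise the per-tuple and per-operator CPU cost estimates for this
-- session only, then compare against a session using the defaults.
SET cpu_tuple_cost = 0.5;
SET cpu_operator_cost = 0.5;

-- Inspect the resulting plan and actual runtime.
EXPLAIN ANALYZE SELECT ...;  -- stand-in for the affected query
```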


Joshua D. Drake

PostgreSQL - XMPP: jdrake(at)jabber(dot)postgresql(dot)org
   Consulting, Development, Support, Training
   503-667-4564
   The PostgreSQL Company, serving since 1997


