Re: SQL With Dates

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Rafael Domiciano <rafael(dot)domiciano(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: SQL With Dates
Date: 2009-04-21 15:57:32
Message-ID: 603c8f070904210857h4e1c74pc7b77e1b9d384789@mail.gmail.com
Lists: pgsql-performance
On Mon, Apr 20, 2009 at 9:55 AM, Rafael Domiciano
<rafael(dot)domiciano(at)gmail(dot)com> wrote:
> Hello People,
>
> I have started work reviewing the SQL of our internal software.
> Many of the problems are about SQL logic, unnecessary joins,
> and so on.
> But the software has a lot of SQL using dates, doing things like:
> [..]
>   date >= '2009-04-01' AND
>   date <= '2009-04-15'
> [..]
>
> Rewriting the SQL with a fixed date (date = '2009-04-01'), the cost in
> EXPLAIN is always about 200 or less. But with a date range the cost is
> high, about 6000 or more.
>
> The select is using an index, and the date column is indexed too.
>
> Is there some way to query a date range with less cost?

If you have an actual performance problem (as opposed to a big number
in EXPLAIN), then it's possible that the planner isn't estimating the
number of rows that will be in that range very accurately.  In that
case, you might need to increase the statistics target for that
column, or your default_statistics_target.
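For the per-column route, the target can be raised with ALTER TABLE; a sketch, with hypothetical table and column names (the original report doesn't give them), and note that a fresh ANALYZE is needed before the planner sees the finer-grained statistics:

```sql
-- Hypothetical names: raise the statistics target for just the date column.
ALTER TABLE orders ALTER COLUMN order_date SET STATISTICS 100;

-- Re-gather statistics so the new, larger histogram is actually built.
ANALYZE orders;
```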

In 8.3, the default default_statistics_target = 10.  In 8.4, it will
be 100, so you might try that for a starting point.  But date columns
can sometimes have highly skewed data, so you might find that you need
an even higher value for that particular column.  I wouldn't recommend
raising the database-wide setting above 100, though - although I know
some people have used 200 or 400 without too much pain, especially on
Really Big Databases where longer planning time isn't a big deal
because the execution times are measured in minutes.  It doesn't sound
like that's your situation, though.
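The server-wide knob can be inspected and experimented with from psql; a minimal sketch, using a session-local SET so the cluster-wide configuration isn't touched:

```sql
-- Check the current value (10 by default on 8.3).
SHOW default_statistics_target;

-- Try a higher target for this session only.
SET default_statistics_target = 100;

-- Re-analyze so statistics are rebuilt at the new target.
ANALYZE;
```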

The first thing to do is see how fast the query actually runs.  Try
setting \timing in psql and running the query to see how long it
actually takes.  If it's fast enough, you're done.  If not, run
EXPLAIN ANALYZE and compare the estimated row counts to the actual
row counts.  If they're pretty close, you're out of luck - as others
have already said, TANSTAAFL.  If they're way off, then try the above.
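In psql that workflow looks roughly like this (the table, column, and query here are illustrative, not from the original report):

```sql
\timing on

-- First, just time the real query.
SELECT count(*) FROM orders
WHERE order_date >= '2009-04-01' AND order_date <= '2009-04-15';

-- If it's too slow, check the planner's estimates against reality:
EXPLAIN ANALYZE
SELECT count(*) FROM orders
WHERE order_date >= '2009-04-01' AND order_date <= '2009-04-15';
-- In the output, compare "rows=" (the estimate) with "actual ... rows=".
```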

...Robert
