Re: query cost too high, anyway to reduce it

From: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
To: nair331(at)gmail(dot)com
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: query cost too high, anyway to reduce it
Date: 2009-12-05 20:45:07
Message-ID: dcc563d10912051245l5336c7b3w8b1056c846e46b66@mail.gmail.com
Lists: pgsql-performance

On Fri, Dec 4, 2009 at 3:15 AM, nair rajiv <rajivnair(at)gnu(dot)org> wrote:

> We populated the table with data and used EXPLAIN
>
>
> dbpedia=# EXPLAIN   SELECT   nid,max(ssid) FROM gbobjects  where ssid<= 100000  group by nid  ;
>
>               QUERY PLAN
> --------------------------------------------------------------------------------------------------
>  GroupAggregate  (cost=20966.03..22944.49 rows=98923 width=27)
>    ->  Sort  (cost=20966.03..21213.34 rows=98923 width=27)
>          Sort Key: nid
>          ->  Index Scan using ssid_object on gbobjects  (cost=0.00..10388.88 rows=98923 width=27)
>                Index Cond: (ssid <= 100000)
>
>
> Total rows : 875459
>
>
> The cost is very high.

Compared to what?

> Is there a way to reduce the cost? We have kept the PostgreSQL
> configuration files as they are, i.e. the default configuration files.
> Can the cost be reduced by changing some parameters in the
> postgresql.conf file? If yes, which are those parameters?

Sure, you can change the numbers for random_page_cost and
seq_page_cost, but the query isn't going to run any faster. That
index scan is already pulling back nearly 100k rows from an 875k-row
table, and that's never going to be cheap.
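
If you want to see what those knobs actually do, try them in a single
session first. A minimal sketch (the values below are arbitrary
examples; the comments show the stock defaults):

-- Session-level experiment: lower the planner's page-cost estimates.
-- Stock defaults are seq_page_cost = 1.0 and random_page_cost = 4.0.
SET seq_page_cost = 0.5;
SET random_page_cost = 2.0;

-- Re-running EXPLAIN now prints smaller cost numbers, but the plan
-- still does the same amount of actual work.
EXPLAIN SELECT nid, max(ssid) FROM gbobjects
WHERE ssid <= 100000 GROUP BY nid;

Lowering the costs only changes the planner's arithmetic; it doesn't
make the I/O go away.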

Better is to run EXPLAIN ANALYZE and look at the actual times you're
getting for each step in the query plan.
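
For example (same query as above; I've omitted the output since it
depends on your data):

EXPLAIN ANALYZE
SELECT nid, max(ssid) FROM gbobjects
WHERE ssid <= 100000 GROUP BY nid;

Note that EXPLAIN ANALYZE really executes the query, so the reported
timings show what the GroupAggregate, Sort, and Index Scan steps each
cost in practice, next to the planner's estimates.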

