Re: Query optimization

From: Marc Cousin <cousinmarc(at)gmail(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: Query optimization
Date: 2008-11-30 19:37:39
Message-ID: 200811302037.39618.cousinmarc@gmail.com
Lists: pgsql-performance

On Sunday 30 November 2008 19:45:11, tmp wrote:
> I am struggling with the following query, which fetches a random subset
> of 200 questions that match certain tags within certain languages.
> However, the query takes forever to evaluate, even though I have a
> "limit 200" appended. Any ideas on how to optimize it?
>
> QUERY: ================================================
>
> SELECT distinct q.question_id
> FROM question_tags qt, questions q
> WHERE q.question_id = qt.question_id
> AND q.STATUS = 1
> AND not q.author_id = 105
> AND ((qt.language_id = 5 and qt.tag_id in
> (1101,917,734,550,367,183)) or (qt.language_id = 4 and qt.tag_id in
> (856,428)) or (qt.language_id =
> 3 and qt.tag_id in (1156,1028,899,771,642,514,385,257,128)) or
> (qt.language_id = 2 and qt.tag_id in
> (1193,1101,1009,917,826,734,642,550,458,367,275,183,91)))
> and q.question_id not in (413)
> LIMIT 200
>
> EXPLAIN ANALYZE: =========================================
>
> Limit (cost=1.50..1267.27 rows=200 width=4) (actual
> time=278.169..880.934 rows=200 loops=1)
> -> Unique (cost=1.50..317614.50 rows=50185 width=4) (actual
> time=278.165..880.843 rows=200 loops=1)
> -> Merge Join (cost=1.50..317489.04 rows=50185 width=4)
> (actual time=278.162..880.579 rows=441 loops=1)
> Merge Cond: (qt.question_id = q.question_id)
> -> Index Scan using question_tags_question_id on
> question_tags qt (cost=0.00..301256.96 rows=82051 width=4) (actual
> time=24.171..146.811 rows=6067 loops=1)
> Filter: (((language_id = 5) AND (tag_id = ANY
> ('{1101,917,734,550,367,183}'::integer[]))) OR ((language_id = 4) AND
> (tag_id = ANY ('{856,428}'::integer[]))) OR ((language_id = 3) AND
> (tag_id = ANY ('{1156,1028,899,771,642,514,385,257,128}'::integer[])))
> OR ((language_id = 2) AND (tag_id = ANY
> ('{1193,1101,1009,917,826,734,642,550,458,367,275,183,91}'::integer[]))))
> -> Index Scan using questions_pkey on questions q
> (cost=0.00..15464.12 rows=83488 width=4) (actual time=222.956..731.737
> rows=1000 loops=1)
> Filter: ((q.author_id <> 105) AND (q.question_id
> <> 413) AND (q.status = 1))
> Total runtime: 881.152 ms
> (9 rows)

First, because of the DISTINCT, the LIMIT 200 won't reduce the work to be done
by much: it still has to sort for the Unique, and only once everything is
sorted does it take the first 200 records. In any case there seem to be only
441 rows before the DISTINCT, so, at least for this query, it won't change
the timings much.
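One common rewrite (not suggested in the reply above, just a sketch) sidesteps the sort-for-unique entirely: move the tag match into an EXISTS so each question_id is emitted at most once, letting the LIMIT stop early without a Unique node. Whether the planner actually produces a cheaper plan depends on the data.

```sql
SELECT q.question_id
FROM questions q
WHERE q.status = 1
  AND q.author_id <> 105
  AND q.question_id <> 413
  AND EXISTS (
      SELECT 1
      FROM question_tags qt
      WHERE qt.question_id = q.question_id
        AND ((qt.language_id = 5 AND qt.tag_id IN (1101,917,734,550,367,183))
          OR (qt.language_id = 4 AND qt.tag_id IN (856,428))
          -- remaining (language_id, tag_id) branches as in the original query
          )
  )
LIMIT 200;
```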

Next, it seems to me that you could try creating composite indexes, because
there is a lot of filtering after the index scans (that is, if you want the
query to be faster).
Maybe (q.author_id, q.status).

For qt you could try (qt.language_id, qt.tag_id)...
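For example (a sketch only; table and column names are taken from the query above, the index names are made up, and column order is a guess that depends on data distribution):

```sql
-- Composite index so the status/author_id filter on questions can be
-- satisfied from an index instead of a per-row filter.
CREATE INDEX questions_status_author_idx
    ON questions (status, author_id);

-- Composite index matching the (language_id, tag_id) pairs tested in
-- the OR branches on question_tags.
CREATE INDEX question_tags_lang_tag_idx
    ON question_tags (language_id, tag_id);
```

After creating them, re-run EXPLAIN ANALYZE and check whether the Filter lines have moved into index conditions.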

Hope it helps

Cheers
