From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: Patrick B <patrickbakerbr(at)gmail(dot)com>
Cc: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: Help with slow query - Pgsql 9.2
Date: 2016-09-06 18:51:47
Message-ID: CAMkU=1yvYTwtuAFKq9nvVs-ti1TFXyYG=sBWAWNZ795xMMw2gw@mail.gmail.com
Lists: pgsql-general
On Mon, Sep 5, 2016 at 6:53 PM, Patrick B <patrickbakerbr(at)gmail(dot)com> wrote:
> Hi guys,
>
> I got this query:
>
>> SELECT id,jobid,description,serialised_data
>> FROM logtable
>> WHERE log_type = 45
>> AND clientid = 24011
>> ORDER BY gtime desc
>
>
What will really help you here is a multicolumn index on (clientid,
log_type) or (log_type, clientid).
It won't cost you much, because you can then drop whichever
single-column index is on the column you list first in the multicolumn
index.
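A minimal sketch of that suggestion, assuming you pick (clientid, log_type) as the column order; only "ix_client" and "logtable" appear in the thread, so the new index name here is hypothetical:

```sql
-- Multicolumn index covering both WHERE predicates; name is hypothetical.
-- CONCURRENTLY avoids blocking writes while the index builds.
CREATE INDEX CONCURRENTLY ix_client_logtype
    ON logtable (clientid, log_type);

-- The single-column index on the leading column (clientid) is now
-- redundant for most queries and can be dropped to save write overhead.
DROP INDEX ix_client;
```

If other queries filter on log_type alone, put log_type first instead and keep the clientid index.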
>
>
> Explain analyze: https://explain.depesz.com/s/XKtU
>
> So it seems the very slow part is into:
>
> -> Bitmap Index Scan on "ix_client" (cost=0.00..5517.96
>> rows=367593 width=0) (actual time=2668.246..2668.246 rows=356327 loops=1)
>> Index Cond: ("clientid" = 24011)
>
>
> Am I right? The query is already using an index on that table... how could
> I improve the performance in a query that is already using an index?
>
Right, that is the slow step. Probably the index is not already in memory
and had to be read from disk, slowly. You could turn track_io_timing on
and then run explain (analyze, buffers) to see if that is the case. But
once you build a multi-column index, it shouldn't really matter anymore.
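To check the cold-cache theory, something like this in a psql session would show read times per node (track_io_timing is available from 9.2 on; the query is the one quoted above):

```sql
-- Enable timing of data-file reads/writes for this session only.
SET track_io_timing = on;

-- BUFFERS reports shared hits vs. reads; with track_io_timing on,
-- each node also shows "I/O Timings" for time spent waiting on disk.
EXPLAIN (ANALYZE, BUFFERS)
SELECT id, jobid, description, serialised_data
FROM logtable
WHERE log_type = 45
  AND clientid = 24011
ORDER BY gtime DESC;
```

A large "read" count with high I/O timing on the Bitmap Index Scan would confirm the index had to come off disk.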
Cheers,
Jeff