From: Avinash Vallarapu <avinash(dot)vallarapu(at)gmail(dot)com>
To: Kenny Bachman <kenny(dot)bachman17(at)gmail(dot)com>
Cc: pgsql-admin <pgsql-admin(at)lists(dot)postgresql(dot)org>
Subject: Re: postgresql-14 slow query
Date: 2022-04-16 05:21:52
Message-ID: CAN0Tujf+Eu85D4Lc485ATCBiiDFs-d5rktXw7nr__+QkoCuwZA@mail.gmail.com
Lists: pgsql-admin
While I do not have much visibility into the schema or the rest of your
environment, you may want to consider my points inline below.
On Fri, Apr 15, 2022 at 4:59 PM Kenny Bachman <kenny(dot)bachman17(at)gmail(dot)com>
wrote:
> Hello Team,
>
> How can I tune this query? It got even slower when I created the index for
> (state_id, order_id desc). The following explain analyze output is
> without an index. It takes 13 seconds if I create that index. Could you
> help me?
>
> Thank you so much for your help.
>
> SELECT DISTINCT ON (order_history.order_id) order_id,
> order_history.creation_date AS c_date
> FROM work.order_history WHERE (order_history.state_id = ANY (ARRAY[30, 51,
> 63, 136, 195, 233, 348])) AND order_history.is_false = 0
> ORDER BY order_history.order_id DESC;
>
> EXPLAIN ANALYZE output:
>
> Unique (cost=672007.46..1519683.55 rows=206423 width=12) (actual
> time=1701.420..3439.095 rows=2049357 loops=1)
>
The planner's row estimate (rows=206423) is quite far from the actual row
count (rows=2049357). First, have you manually updated statistics and
verified whether the estimates come close to the actual counts?

VACUUM ANALYZE work.order_history;
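
If the estimates are still far off after analyzing, one thing worth trying
(just a sketch on my side; the statistics target of 1000 is only an example
value) is collecting a larger sample for the filtered column state_id and
re-analyzing:

-- check when statistics were last refreshed
SELECT last_analyze, last_autoanalyze
FROM pg_stat_all_tables
WHERE schemaname = 'work' AND relname = 'order_history';

-- raise the per-column statistics target (default is 100) and re-analyze
ALTER TABLE work.order_history ALTER COLUMN state_id SET STATISTICS 1000;
ANALYZE work.order_history;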
> -> Gather Merge (cost=672007.46..1502346.48 rows=6934827 width=12)
> (actual time=1701.419..2989.243 rows=6891551 loops=1)
> Workers Planned: 4
> Workers Launched: 4
> -> Sort (cost=671007.40..675341.67 rows=1733707 width=12)
> (actual time=1657.609..1799.723 rows=1378310 loops=5)
> Sort Key: order_id DESC
> Sort Method: external merge Disk: 38960kB
> Worker 0: Sort Method: external merge Disk: 31488kB
> Worker 1: Sort Method: external merge Disk: 36120kB
> Worker 2: Sort Method: external merge Disk: 31368kB
> Worker 3: Sort Method: external merge Disk: 36152kB
>
Again, I am not suggesting a query rewrite or a schema change (indexing,
etc.), as I do not have enough context to discuss that here. However, each
worker's sort is spilling roughly 31MB to 39MB to disk, so setting a
session-level work_mem of 40MB should show an immediate difference.
SET work_mem TO '40MB';
<Run EXPLAIN ANALYZE again and check>
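
If you would rather not change work_mem for the whole session, a
transaction-scoped setting works as well. A minimal sketch using your
original query (you may need a value somewhat above 40MB, since an
in-memory sort can take more space than what was written to disk):

BEGIN;
SET LOCAL work_mem TO '40MB';
EXPLAIN (ANALYZE, BUFFERS)
SELECT DISTINCT ON (order_history.order_id) order_id,
       order_history.creation_date AS c_date
FROM work.order_history
WHERE order_history.state_id = ANY (ARRAY[30, 51, 63, 136, 195, 233, 348])
  AND order_history.is_false = 0
ORDER BY order_history.order_id DESC;
COMMIT;

If the sorts fit in memory, the "Sort Method" lines should change from
"external merge" to an in-memory quicksort.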
> -> Parallel Seq Scan on order_history
> (cost=0.00..473993.00 rows=1733707 width=12) (actual time=0.041..1211.485
> rows=1378310 loops=5)
> Filter: ((is_false = 0) AND (state_id = ANY
> ('{30,51,63,136,195,233,348}'::integer[])))
> Rows Removed by Filter: 3268432
> Planning Time: 0.405 ms
> Execution Time: 3510.433 ms
>
--
Regards,
Avinash Vallarapu
CEO,
MigOps, Inc.
www.migops.com