From: Igor Neyman <ineyman(at)perceptron(dot)com>
To: zach cruise <zachc1980(at)gmail(dot)com>, Gavin Flower <GavinFlower(at)archidevsys(dot)co(dot)nz>
Cc: PostgreSQL <pgsql-general(at)postgresql(dot)org>
Subject: Re: how would you speed up this long query?
Date: 2015-04-01 13:32:09
Message-ID: A76B25F2823E954C9E45E32FA49D70ECCD410DE5@mail.corp.perceptron.com
Lists: pgsql-general
-----Original Message-----
From: pgsql-general-owner(at)postgresql(dot)org [mailto:pgsql-general-owner(at)postgresql(dot)org] On Behalf Of zach cruise
Sent: Tuesday, March 31, 2015 2:56 PM
To: Gavin Flower
Cc: PostgreSQL
Subject: Re: [GENERAL] how would you speed up this long query?
> Version of PostgreSQL?
9.3
> Operating system?
win
> Hardware configuration?
8 GB RAM. The query takes about 7000 ms to retrieve about 7000 rows.
max_connections = 200
shared_buffers = 512MB
effective_cache_size = 6GB
work_mem = 13107kB
maintenance_work_mem = 512MB
checkpoint_segments = 32
checkpoint_completion_target = 0.7
wal_buffers = 16MB
default_statistics_target = 100
> Indexes?
no
> Anything else that might be relevant?
no
> What have you already done to investigate?
moved a subquery from the "where" clause to the "from" clause, so it is evaluated once instead of once per row
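For illustration only (the table and column names below are borrowed from the plan but the actual query is not shown, so this is a hypothetical sketch of that kind of rewrite):

```sql
-- Before: a correlated subquery in WHERE may be re-evaluated per row.
SELECT d.id
FROM   detail d
WHERE  EXISTS (SELECT 1 FROM org o
               WHERE o.id = d.id AND o.open = 'Y');

-- After: the subquery moves into FROM as a derived table,
-- evaluated once and then joined.
SELECT d.id
FROM   detail d
JOIN   (SELECT o.id FROM org o WHERE o.open = 'Y') AS open_org
       ON open_org.id = d.id;
```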
> EXPLAIN ANALYZE output?
It would help if you could help us understand what's going on here:
"Group (cost=5520.89..6335.03 rows=18092 width=199) (actual time=3864.186..4402.447 rows=5512 loops=1)"
"  ->  Sort (cost=5520.89..5566.12 rows=18092 width=199) (actual time=3864.171..4146.725 rows=97141 loops=1)"
"        Sort Key: z0.zcg, proj.pid, proj.tit, proj.num, detail.cid, detail.id, det (...)"
"        Sort Method: external merge  Disk: 21648kB"
"        ->  Hash Join (cost=3541.48..4241.51 rows=18092 width=199) (actual time=254.216..432.629 rows=97141 loops=1)"
"              Hash Cond: (org.id = detail.id)"
"              ->  Hash Join (cost=752.72..1036.45 rows=4955 width=109) (actual time=64.492..86.822 rows=4977 loops=1)"
"                    Hash Cond: (org.pid = proj.pid)"
"                    ->  Seq Scan on org (cost=0.00..196.82 rows=4982 width=26) (actual time=0.024..6.199 rows=4982 loops=1)"
"                    ->  Hash (cost=702.97..702.97 rows=3980 width=91) (actual time=64.439..64.439 rows=3973 loops=1)"
"                          Buckets: 1024  Batches: 1  Memory Usage: 465kB"
"                          ->  Hash Join (cost=424.04..702.97 rows=3980 width=91) (actual time=20.994..52.773 rows=3973 loops=1)"
"                                Hash Cond: (org_1.pid = proj.pid)"
"                                ->  Seq Scan on org org_1 (cost=0.00..209.28 rows=3980 width=8) (actual time=0.016..10.815 rows=3980 loops=1)"
"                                      Filter: ((open)::text = 'Y'::text)"
"                                      Rows Removed by Filter: 1002"
"                                ->  Hash (cost=374.02..374.02 rows=4002 width=83) (actual time=20.950..20.950 rows=4002 loops=1)"
"                                      Buckets: 1024  Batches: 1  Memory Usage: 424kB"
"                                      ->  Seq Scan on proj (cost=0.00..374.02 rows=4002 width=83) (actual time=0.010..9.810 rows=4002 loops=1)"
"              ->  Hash (cost=2716.44..2716.44 rows=5786 width=98) (actual time=189.677..189.677 rows=4959 loops=1)"
"                    Buckets: 1024  Batches: 1  Memory Usage: 629kB"
"                    ->  Hash Join (cost=2369.71..2716.44 rows=5786 width=98) (actual time=169.635..182.956 rows=4959 loops=1)"
"                          Hash Cond: (org_2.id = detail.id)"
"                          ->  Seq Scan on org org_2 (cost=0.00..209.28 rows=3980 width=8) (actual time=0.015..4.194 rows=3980 loops=1)"
"                                Filter: ((open)::text = 'Y'::text)"
"                                Rows Removed by Filter: 1002"
"                          ->  Hash (cost=2340.92..2340.92 rows=2303 width=90) (actual time=169.596..169.596 rows=1964 loops=1)"
"                                Buckets: 1024  Batches: 1  Memory Usage: 224kB"
"                                ->  Hash Join (cost=2069.93..2340.92 rows=2303 width=90) (actual time=159.126..166.937 rows=1964 loops=1)"
"                                      Hash Cond: ((detail.z)::text = (z0.zcg)::text)"
"                                      ->  Seq Scan on detail (cost=0.00..199.03 rows=2303 width=52) (actual time=0.009..2.152 rows=2303 loops=1)"
"                                      ->  Hash (cost=1538.30..1538.30 rows=42530 width=38) (actual time=159.070..159.070 rows=42530 loops=1)"
"                                            Buckets: 8192  Batches: 1  Memory Usage: 2451kB"
"                                            ->  Seq Scan on z0 (cost=0.00..1538.30 rows=42530 width=38) (actual time=0.010..82.125 rows=42530 loops=1)"
"Total runtime: 4414.655 ms"
--
Didn't see replies to this message, so...
Your query spends most of its time on sorting:
"        Sort Method: external merge  Disk: 21648kB"
The sort doesn't fit in memory, so it spills to disk.
Try increasing work_mem to something around 50MB. You can do it for this particular connection only if you don't want to change it for the whole server.
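A per-connection setting can be applied like this (the 50MB value is the suggestion above, not a tuned number):

```sql
-- For the current session only:
SET work_mem = '50MB';

-- Or scoped to a single transaction:
BEGIN;
SET LOCAL work_mem = '50MB';
-- ... run the query ...
COMMIT;
```

With enough work_mem, the planner can switch the sort from "external merge" on disk to an in-memory quicksort, which EXPLAIN ANALYZE will report as "Sort Method: quicksort".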
Regards,
Igor Neyman