
Re: Postgres query completion status?

From: Richard Neill <rn214(at)cam(dot)ac(dot)uk>
To: Greg Williamson <gwilliamson39(at)yahoo(dot)com>
Cc: Richard Neill <rn214(at)cam(dot)ac(dot)uk>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Postgres query completion status?
Date: 2009-11-20 06:32:31
Message-ID:
Lists: pgsql-performance
Thanks for your help. This issue splits into 2 bits:

1. Fixing specific queries.

2. Finding out when a specific running query is going to complete.
(At the moment, this is the bit I really need to know).
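(As far as I know, Postgres doesn't report a percent-complete figure for a running query; the closest thing is watching elapsed time in pg_stat_activity. A sketch, using the 8.4-era column names:)

```sql
-- List running statements and how long each has been going.
-- procpid and current_query are the 8.4-era names; later releases
-- renamed them to pid and query.
SELECT procpid, now() - query_start AS runtime, current_query
FROM pg_stat_activity
WHERE current_query <> '<IDLE>'
ORDER BY runtime DESC;
```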

Greg Williamson wrote:
> Richard --
>  You might post the results of "EXPLAIN ANALYZE <your SQL here>;" ... be sure to run it in a transaction if you want to be able roll it back. Perhaps try "EXPLAIN <your SQL>;" first as it is faster, but EXPLAIN ANALYZE shows what the planner is doing.
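Spelled out, that transaction-wrapped form looks like this, with <your SQL here> as the placeholder:

```sql
BEGIN;
EXPLAIN ANALYZE <your SQL here>;  -- actually executes the statement, with timings
ROLLBACK;                         -- discards any changes it made
```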

The offending query (simplified to just do a select - which is the slow
bit) is:

SELECT ( core.demand.qty - viwcs.wave_end_demand.qty_remaining )
FROM core.demand, viwcs.previous_wave
LEFT OUTER JOIN viwcs.wave_end_demand USING ( wid )
WHERE core.demand.id = viwcs.wave_end_demand.demand_id;

Over the last few weeks, this has gradually slowed down from 6 minutes
to about 6.5; last night it took 25 minutes, and today it has already
been running for an hour and still hasn't completed. The system hasn't
been doing anything special in the last 2 days.

Here's the EXPLAIN output (EXPLAIN ANALYZE would take too long!):

 Nested Loop  (cost=885367.03..1123996.87 rows=8686 width=12)
   ->  Merge Join  (cost=885367.03..1115452.17 rows=8688 width=16)
         Merge Cond: ((core.demand.target_id = wave_genreorders_map.target_id) AND (core.demand.material_id =
         ->  Index Scan using demand_target_id_key on demand  (cost=0.00..186520.46 rows=3800715 width=24)
         ->  Sort  (cost=885364.61..893425.30 rows=3224275 width=24)
               Sort Key: wave_genreorders_map.target_id,
               ->  Hash Join  (cost=511934.12..536811.73 rows=3224275
                     Hash Cond: (core.material.tag =
                     ->  Append  (cost=0.00..10723.27 rows=689377 width=28)
                           ->  Seq Scan on material  (cost=0.00..5474.75 rows=397675 width=21)
                           ->  Seq Scan on container material  (cost=0.00..5248.52 rows=291702 width=37)
                     ->  Hash  (cost=506657.25..506657.25 rows=422149
                           ->  Hash Join  (cost=474610.85..506657.25 rows=422149 width=42)
                                 Hash Cond: ((wave_gol.sid)::text =
                                 ->  Merge Left Join  (cost=463919.35..487522.78 rows=422149 width=29)
                                       Merge Cond: (((wave_gol.wid)::text = (du_report_sku.wid)::text) AND ((wave_gol.storeorderid)::text = (du_report_sku.storeorderid)::text) AND ((wave_gol.genreorderid)::text = (du_report_sku.genreorderid)::text))
                                       Join Filter: ((wave_gol.sid)::text = (du_report_sku.sid)::text)
                                       ->  Merge Join  (cost=183717.70..197229.24 rows=422149 width=44)
                                             Merge Cond: (((wave_genreorders_map.wid)::text = (wave_gol.wid)::text) AND ((wave_genreorders_map.storeorderid)::text = (wave_gol.storeorderid)::text) AND ((wave_genreorders_map.genreorderid)::text = (wave_gol.genreorderid)::text))
                                             ->  Index Scan using "wave_genreorders_map_ERR_GENREORDERID_EXISTS" on wave_genreorders_map  (cost=0.00..4015.36 rows=116099 width=27)
                                             ->  Sort  (cost=183717.70..184818.90 rows=440483 width=47)
                                                   Sort Key: wave_gol.wid, wave_gol.storeorderid, wave_gol.genreorderid
                                                   ->  Nested Loop  (cost=9769.36..142425.22 rows=440483 width=47)
                                                         ->  Index Scan using "wave_rxw_ERR_WID_EXISTS" on wave_rxw  (cost=0.00..7.08 rows=1
                                                         ->  Bitmap Heap Scan on wave_gol  (cost=9769.36..136912.11 rows=440483 width=36)
                                                               Recheck Cond: ((wave_gol.wid)::text = (wave_rxw.wid)::text)
                                                               ->  Bitmap Index Scan on "wave_gol_ERR_SID_EXISTS"  (cost=0.00..9659.24 rows=440483 width=0)
                                                                     Index Cond: ((wave_gol.wid)::text = (wave_rxw.wid)::text)
                                       ->  Sort  (cost=280201.66..281923.16 rows=688602 width=300)
                                             Sort Key: du_report_sku.wid, du_report_sku.storeorderid, du_report_sku.genreorderid
                                             ->  HashAggregate  (cost=197936.75..206544.27 rows=688602 width=36)
                                                   ->  Seq Scan on du_report_sku  (cost=0.00..111861.61 rows=6886011 width=36)
                                 ->  Hash  (cost=5681.22..5681.22 rows=400822 width=13)
                                       ->  Seq Scan on product_info_sku  (cost=0.00..5681.22 rows=400822 width=13)
   ->  Index Scan using demand_pkey on demand  (cost=0.00..0.97 rows=1
         Index Cond: ( =
(37 rows)


> You wrote:
>> P.S. Sometimes, some queries seem to benefit from being cancelled and then immediately
>> re-started. As there are being run in a transaction, I can't see how this could make a difference.
>> Am I missing anything obvious?  Occasionally, a re-start of postgresql-8.4l itself seems to help.
> This may be the result of caching of the desired rows, either by PostgreSQL or by your operating system. The rollback wouldn't affect this -- the rows are already in memory and not on disk waiting to be grabbed -- much faster on all subsequent queries.

Yes... but the data should already be in RAM. We've got 20GB of it
(Postgres is given 5GB, and effective_cache_size is 10GB); the
dataset size for the relevant part should only be about 100MB at most.
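(Those figures can be sanity-checked from psql; a sketch, with core.demand standing in for the relevant tables:)

```sql
SHOW shared_buffers;        -- the 5GB given to Postgres
SHOW effective_cache_size;  -- the 10GB planner hint
-- approximate on-disk size of one of the tables involved
SELECT pg_size_pretty(pg_total_relation_size('core.demand'));
```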

Also we're using solid-state disks (Intel X25-E), and iotop shows that
the disk access rate isn't the problem; the CPU is pegged at 100% though.

It seems that the query planner is doing something radically wrong.


