Re: Slow query with 100% cpu

From: Clodoaldo <clodoaldo(dot)pinto(dot)neto(at)gmail(dot)com>
To: "PostgreSQL - General ML" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Slow query with 100% cpu
Date: 2008-02-27 12:11:51
Message-ID: a595de7a0802270411j6bca7b11g813e29d79659d84c@mail.gmail.com
Lists: pgsql-general

I just ANALYZEd the database and times are back to normal. Sorry for the noise.
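
For the archives, the fix amounts to refreshing the planner statistics. A minimal sketch, assuming the table names from the query quoted below (on 8.2, pg_stat_user_tables shows when statistics were last gathered):

```sql
-- Refresh planner statistics; a plain ANALYZE (no VACUUM needed)
-- is enough to correct bad row estimates like those in the plan below.
ANALYZE datas;
ANALYZE team_active_members_history;

-- Check when each table was last analyzed, manually or by autovacuum:
SELECT relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname IN ('datas', 'team_active_members_history');
```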

Regards, Clodoaldo Pinto Neto

2008/2/27, Clodoaldo <clodoaldo(dot)pinto(dot)neto(at)gmail(dot)com>:
> Postgresql 8.2.6, Fedora 8, 2 GB memory.
>
> A query that used to run in a few seconds is now taking 64 seconds
> at 100% CPU:
>
> fahstats=> explain analyze
> fahstats-> select
> fahstats-> donor::smallInt as new_members,
> fahstats-> active_members,
> fahstats-> d.data::date as day,
> fahstats-> isodow(d.data::date)::smallInt as dow
> fahstats-> from
> fahstats-> team_active_members_history as tam
> fahstats-> inner join
> fahstats-> datas as d on d.data_serial = tam.serial_date
> fahstats-> inner join
> fahstats-> select_new_members(13802, 8) as nm on nm.data::date = d.data::date
> fahstats-> where
> fahstats-> team_number = 13802
> fahstats-> and
> fahstats-> d.data_serial = (
> fahstats(> select data_serial
> fahstats(> from datas
> fahstats(> where date_trunc('day', data) = date_trunc('day', d.data)
> fahstats(> order by data desc
> fahstats(> limit 1
> fahstats(> )
> fahstats-> ;
>
> QUERY PLAN
> ------------------------------------------------------------------------------------------------------------------------------------------------------------------
>  Hash Join  (cost=46026.35..120281.84 rows=1020 width=16) (actual time=64110.099..64404.762 rows=56 loops=1)
>    Hash Cond: ((d.data)::date = (nm.data)::date)
>    ->  Hash Join  (cost=46001.35..120154.33 rows=204 width=12) (actual time=3378.246..63602.774 rows=812 loops=1)
>          Hash Cond: (d.data_serial = tam.serial_date)
>          ->  Index Scan using data_ndx on datas d  (cost=0.00..73990.18 rows=32 width=12) (actual time=19.474..63583.966 rows=879 loops=1)
>                Filter: (data_serial = (subplan))
>                SubPlan
>                  ->  Limit  (cost=0.00..11.69 rows=1 width=12) (actual time=10.076..10.078 rows=1 loops=6302)
>                        ->  Index Scan Backward using pk_data on datas  (cost=0.00..374.05 rows=32 width=12) (actual time=10.070..10.070 rows=1 loops=6302)
>                              Filter: (date_trunc('day'::text, data) = date_trunc('day'::text, $0))
>          ->  Hash  (cost=45499.27..45499.27 rows=40167 width=8) (actual time=10.398..10.398 rows=821 loops=1)
>                ->  Bitmap Heap Scan on team_active_members_history tam  (cost=664.25..45499.27 rows=40167 width=8) (actual time=0.428..9.110 rows=821 loops=1)
>                      Recheck Cond: (team_number = 13802)
>                      ->  Bitmap Index Scan on ndx_team_active_members_history  (cost=0.00..654.21 rows=40167 width=0) (actual time=0.314..0.314 rows=821 loops=1)
>                            Index Cond: (team_number = 13802)
>    ->  Hash  (cost=12.50..12.50 rows=1000 width=12) (actual time=794.528..794.528 rows=56 loops=1)
>          ->  Function Scan on select_new_members nm  (cost=0.00..12.50 rows=1000 width=12) (actual time=794.356..794.411 rows=56 loops=1)
>  Total runtime: 64405.040 ms
> (18 rows)
>
> I REINDEXed the database before this explain analyze. These are the
> non-commented lines of postgresql.conf:
>
> listen_addresses = '127.0.0.1,10.1.1.101' # what IP address(es) to listen on;
> max_connections = 150 # (change requires restart)
> shared_buffers = 64MB # min 128kB or max_connections*16kB
> max_prepared_transactions = 150 # can be 0 or more
> work_mem = 32MB # min 64kB
> maintenance_work_mem = 768MB # min 1MB
> max_stack_depth = 4MB # min 100kB
> max_fsm_pages = 1000000 # min max_fsm_relations*16, 6 bytes each
> vacuum_cost_delay = 1000 # 0-1000 milliseconds
> vacuum_cost_limit = 5000 # 0-10000 credits
> bgwriter_lru_maxpages = 0 # 0-1000 buffers max written/round
> bgwriter_all_maxpages = 0 # 0-1000 buffers max written/round
> fsync = off # turns forced synchronization on or off
> full_page_writes = off
> wal_buffers = 1024kB # min 32kB
> checkpoint_segments = 8 # in logfile segments, min 1, 16MB each
> checkpoint_timeout = 10min # range 30s-1h
> effective_cache_size = 768MB
> geqo_effort = 8 # range 1-10
> default_statistics_target = 50 # range 1-1000
> redirect_stderr = on # Enable capturing of stderr into log
> log_directory = 'pg_log' # Directory where log files are written
> log_truncate_on_rotation = on # If on, any existing log file of the same
> log_rotation_age = 1d # Automatic rotation of logfiles will
> log_rotation_size = 0 # Automatic rotation of logfiles will
> debug_pretty_print = on
> datestyle = 'iso, ymd'
> timezone = UTC # actually, defaults to TZ
> extra_float_digits = 2 # min -15, max 2
> lc_messages = 'en_US.UTF-8' # locale for system error message
> lc_monetary = 'en_US.UTF-8' # locale for monetary formatting
> lc_numeric = 'en_US.UTF-8' # locale for number formatting
> lc_time = 'en_US.UTF-8' # locale for time formatting
>
>
>
> Same query in another computer with 4 GB memory, 8.2.6, Fedora 8:
>
> fahstats=> explain analyze
> fahstats-> select
> fahstats-> donor::smallInt as new_members,
> fahstats-> active_members,
> fahstats-> d.data::date as day,
> fahstats-> isodow(d.data::date)::smallInt as dow
> fahstats-> from
> fahstats-> team_active_members_history as tam
> fahstats-> inner join
> fahstats-> datas as d on d.data_serial = tam.serial_date
> fahstats-> inner join
> fahstats-> select_new_members(13802, 8) as nm on nm.data::date = d.data::date
> fahstats-> where
> fahstats-> team_number = 8
> fahstats-> and
> fahstats-> d.data_serial = (
> fahstats(> select data_serial
> fahstats(> from datas
> fahstats(> where date_trunc('day', data) = date_trunc('day', d.data)
> fahstats(> order by data desc
> fahstats(> limit 1
> fahstats(> )
> fahstats-> ;
>
> QUERY PLAN
> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>  Nested Loop  (cost=0.00..12990.31 rows=10 width=16) (actual time=431.023..3448.132 rows=56 loops=1)
>    Join Filter: ((nm.data)::date = (d.data)::date)
>    ->  Nested Loop  (cost=0.00..12930.16 rows=2 width=12) (actual time=0.080..2977.293 rows=823 loops=1)
>          ->  Index Scan using ndx_team_active_members_history on team_active_members_history tam  (cost=0.00..1221.93 rows=449 width=8) (actual time=0.040..2.521 rows=832 loops=1)
>                Index Cond: (team_number = 8)
>          ->  Index Scan using data_ndx on datas d  (cost=0.00..26.06 rows=1 width=12) (actual time=3.574..3.574 rows=1 loops=832)
>                Index Cond: (d.data_serial = tam.serial_date)
>                Filter: (data_serial = (subplan))
>                SubPlan
>                  ->  Limit  (cost=0.00..24.38 rows=1 width=12) (actual time=3.589..3.589 rows=1 loops=823)
>                        ->  Index Scan Backward using pk_data on datas  (cost=0.00..780.05 rows=32 width=12) (actual time=3.588..3.588 rows=1 loops=823)
>                              Filter: (date_trunc('day'::text, data) = date_trunc('day'::text, $0))
>    ->  Function Scan on select_new_members nm  (cost=0.00..12.50 rows=1000 width=12) (actual time=0.524..0.534 rows=56 loops=823)
>  Total runtime: 3448.225 ms
> (14 rows)
>
> Regards, Clodoaldo Pinto Neto
>
