From: Kevin Kempter <kevink(at)consistentstate(dot)com>
To: pgsql-performance <pgsql-performance(at)postgresql(dot)org>
Subject: performance tuning queries
Date: 2008-11-27 04:21:04
Message-ID: 200811262121.05463.kevin@kevinkempterllc.com
Lists: pgsql-performance
Hi All;
I'm looking for tips / ideas for performance tuning some specific queries.
These are generally large tables on a highly active OLTP system
(100,000 - 200,000+ queries per day).
First off, any thoughts on tuning inserts into large tables? I have a large
table with an insert like this:
insert into public.bigtab1 (text_col1, text_col2, id) values ...
QUERY PLAN
------------------------------------------
Result (cost=0.00..0.01 rows=1 width=0)
(1 row)
The query cost is low, but this is one of the slowest statements according to pgFouine.
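For reference, a sketch of the batched form that is often suggested for cutting per-statement overhead (same columns as the statement above; the values shown are placeholders, not real data):

```sql
-- Sketch only: batching many rows into one INSERT (or loading with COPY)
-- amortizes the parse/plan/commit cost across all the rows instead of
-- paying it once per single-row statement.
BEGIN;
INSERT INTO public.bigtab1 (text_col1, text_col2, id)
VALUES ('val1a', 'val2a', 1),
       ('val1b', 'val2b', 2),
       ('val1c', 'val2c', 3);
COMMIT;
```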
Next we have a select count(*) that is also one of the top offenders:
select count(*) from public.tab3 where user_id=31
and state='A'
and amount>0;
QUERY PLAN
-----------------------------------------------------------------------------------------------------
 Aggregate  (cost=3836.53..3836.54 rows=1 width=0)
   ->  Index Scan using order_user_indx on tab3  (cost=0.00..3834.29 rows=897 width=0)
         Index Cond: (idx_user_id = 31406948::numeric)
         Filter: ((state = 'A'::bpchar) AND (amount > 0::numeric))
(4 rows)
We have an index on user_id but not on state or amount. Should we add an
index on amount?
Thoughts?
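For illustration, a sketch of the kind of index I'm wondering about (index names are hypothetical; this assumes the count(*) query shape above is typical):

```sql
-- Hypothetical composite index: covers all three predicates, so the
-- state and amount conditions could become index conditions instead of
-- a heap filter.
CREATE INDEX tab3_user_state_amount_indx
    ON public.tab3 (user_id, state, amount);

-- Alternatively, a partial index, assuming state = 'A' and amount > 0
-- are the common predicates for this query:
CREATE INDEX tab3_user_active_indx
    ON public.tab3 (user_id)
    WHERE state = 'A' AND amount > 0;
```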