Performance issues when the number of records is around 10 Million

From: venu madhav <venutaurus539(at)gmail(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Performance issues when the number of records are around 10 Million
Date: 2010-05-11 08:47:57
Message-ID: AANLkTin1f-9M0iNmIEtU7YlvEsVAt6TmHE1vDk3iUmQq@mail.gmail.com
Lists: pgsql-performance

Hi all,
        In my database application, I have a table whose record count can reach 10 million, with insertions arriving at up to 100 rows per second at peak times. I have configured PostgreSQL to run autovacuum on an hourly basis. A front-end GUI application, written as a CGI program, displays the data from the database. When I try to fetch the last twenty records, the operation takes around 10-15 minutes to complete. This is the query that is used:

SELECT e.cid, timestamp, s.sig_class, s.sig_priority, s.sig_name,
       e.sniff_ip, e.sniff_channel, s.sig_config, e.wifi_addr_1,
       e.wifi_addr_2, e.view_status, bssid
FROM event e, signature s
WHERE s.sig_id = e.signature
  AND e.timestamp >= '1270449180'
  AND e.timestamp < '1273473180'
ORDER BY e.cid DESC, e.cid DESC
LIMIT 21 OFFSET 10539780;
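For scale, OFFSET 10539780 forces the server to produce and discard roughly 10.5 million sorted rows on every request before returning 21. A keyset-pagination sketch of the same page fetch is shown below; it assumes an index exists on event.cid, and last_seen_cid is a placeholder the client would remember from the previous page (it is not part of the original query):

```sql
-- Sketch only: assumes an index on event.cid and that the client
-- tracks the smallest cid from the previous page (last_seen_cid)
-- instead of using an OFFSET.
SELECT e.cid, e.timestamp, s.sig_class, s.sig_priority, s.sig_name,
       e.sniff_ip, e.sniff_channel, s.sig_config, e.wifi_addr_1,
       e.wifi_addr_2, e.view_status, e.bssid
FROM event e
JOIN signature s ON s.sig_id = e.signature
WHERE e.timestamp >= '1270449180'
  AND e.timestamp < '1273473180'
  AND e.cid < :last_seen_cid        -- keyset predicate replaces OFFSET
ORDER BY e.cid DESC
LIMIT 21;
```

Each page then starts from an index probe on cid rather than re-sorting and skipping the whole result set.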
Can anyone suggest a better way to improve the performance of this query?

Please let me know if you have any further queries.

Thank you,
Venu
