Postgresql performance question

From: "Mark Jones" <mlist(at)hackerjones(dot)org>
To: pgsql-hackers(at)postgresql(dot)org, pgsql-general(at)postgresql(dot)org
Subject: Postgresql performance question
Date: 2003-03-02 23:52:37
Message-ID: sthaj-ov7.ln1@news.hackerjones.org
Lists: pgsql-general pgsql-hackers

Hello

I am working on a project that acquires real-time data from an external
device, which I need to store and be able to search through and retrieve
quickly. My application receives packets of data ranging in size from 300 to
5000 bytes every 50 milliseconds, for a minimum duration of 24 hours before
the data is purged or archived off disk. There are several fields in the
data that I would like to be able to search on to retrieve the data at a
later time. Using a SQL database such as PostgreSQL or MySQL seems like it
would make this task much easier. My questions are: is a SQL database such
as PostgreSQL able to handle this kind of activity, saving a record of 5000
bytes at a rate of 20 times a second? And how well will it perform at
searching through a database containing nearly two million records, at a
size of about 8 - 9 gigabytes of data, assuming that I have adequate
computing hardware? I am trying to determine whether a SQL database would
work well for this or whether I need to write my own custom database for
this project. If anyone has experience doing anything similar with
PostgreSQL, I would love to hear about your findings.
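For a sense of scale, the figures in the post can be sanity-checked, and the write load sketched. The following is a minimal illustration (not from the original post); the table name `packets` and the field names `ts`, `channel`, and `payload` are assumptions made up for the example, and the placeholder style is the `%s` form used by common PostgreSQL client libraries:

```python
# Sanity-check the numbers in the post: 20 packets/sec for 24 hours.
ROWS_PER_DAY = 20 * 60 * 60 * 24          # 1,728,000 -- "nearly two million records"
MAX_BYTES_PER_DAY = ROWS_PER_DAY * 5000   # 8.64e9 -- the "8 - 9 gigabytes" estimate

def make_batch_insert(rows):
    """Build one multi-row INSERT for a batch of (ts, channel, payload) tuples.

    Batching matters at this rate: one multi-row INSERT per second is far
    cheaper than 20 single-row INSERTs, each of which would otherwise pay
    its own round trip and its own transaction commit.
    """
    placeholders = ", ".join("(%s, %s, %s)" for _ in rows)
    sql = "INSERT INTO packets (ts, channel, payload) VALUES " + placeholders
    params = [value for row in rows for value in row]
    return sql, params

# Example: one second's worth of packets (20 arrivals, 50 ms apart).
batch = [(i * 0.05, 1, b"\x00" * 300) for i in range(20)]
sql, params = make_batch_insert(batch)
print(sql.count("(%s, %s, %s)"))  # 20 value groups, one per packet
print(len(params))                # 60 bound parameters
```

For sustained bulk loading, PostgreSQL's COPY command is typically faster still than batched INSERTs; the sketch above only shows the shape of the grouping, not a definitive ingest design.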

Thanks
Mark
