Postgres performance with large tables.

From: Wim <wdh(at)belbone(dot)be>
To: pgsql-novice(at)postgresql(dot)org
Subject: Postgres performance with large tables.
Date: 2003-02-05 15:28:36
Message-ID: 3E412DA4.6090902@belbone.be
Lists: pgsql-novice

Hello All,

I have a database with the inventory of my backbone routers. The 30
routers are stored in a table.
Every 5 minutes, I insert the interface counters for all the routers
into a table.
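
Schematically, the table and the periodic insert look something like
this (the table layout and column names here are hypothetical, just to
illustrate the setup):

    -- Hypothetical layout of the counters table; the real columns may differ.
    CREATE TABLE counters_table (
        router_id   integer     NOT NULL,
        ifindex     integer     NOT NULL,
        sampled_at  timestamptz NOT NULL,
        in_octets   bigint,
        out_octets  bigint
    );

    -- One insert per router interface, every 5 minutes:
    INSERT INTO counters_table (router_id, ifindex, sampled_at, in_octets, out_octets)
    VALUES (1, 3, now(), 123456789, 987654321);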
When my table (counters_table) has fewer than, let's say, 100,000
records, the inserts for all the routers are done within 50 seconds.
I noticed that when my table contains more than 500,000 records, the
inserts take about 230 seconds...

My DB runs on a Pentium III with 512 MB RAM and a single 1.13 GHz CPU.

I used EXPLAIN to test my SELECT queries, but everything seems fine
(right use of indexes...).
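
For example, I checked plans along these lines (again with the
hypothetical column names from above):

    EXPLAIN SELECT * FROM counters_table
     WHERE router_id = 1
       AND sampled_at > now() - interval '1 day';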

How can I speed up the inserts (and improve the performance)?

Thanx!

Wim
