Is this way of testing a bad idea?

From: "Fredrik Israelsson" <fredrik(dot)israelsson(at)eu(dot)biotage(dot)com>
To: <pgsql-performance(at)postgresql(dot)org>
Subject: Is this way of testing a bad idea?
Date: 2006-08-24 09:04:28
Message-ID: B6D0C6EF9C7B5C48AB71BDB7334C9FFABD5522@seuppms101.eu.companyb.com
Lists: pgsql-performance

I am evaluating PostgreSQL as a candidate to cooperate with a Java
application.

Performance test set up:
Only one table in the database schema.
The table contains a bytea column plus some other columns.
The PostgreSQL server runs on Linux.

Test execution:
The Java application connects through TCP/IP (JDBC) and performs 50000
inserts.
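For reference, the insert loop looks roughly like the sketch below. The table name, column names, and batch size are made up for illustration (the actual schema only needs to match the description above: one bytea column plus some others); the pattern of batching statements and committing periodically, rather than autocommitting every row, is the relevant part.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class InsertTest {
    // Hypothetical schema: one table with an integer id, a text label,
    // and a bytea payload column.
    static final String INSERT_SQL =
        "INSERT INTO test_table (id, label, payload) VALUES (?, ?, ?)";

    public static void main(String[] args) throws Exception {
        if (args.length == 0) {
            // No JDBC URL supplied: just show the statement the loop executes.
            System.out.println(INSERT_SQL);
            return;
        }
        // args[0] is a JDBC URL such as jdbc:postgresql://host/db?user=...
        try (Connection conn = DriverManager.getConnection(args[0])) {
            conn.setAutoCommit(false); // commit in batches, not per row
            try (PreparedStatement ps = conn.prepareStatement(INSERT_SQL)) {
                byte[] payload = new byte[1024]; // dummy bytea content
                for (int i = 0; i < 50000; i++) {
                    ps.setInt(1, i);
                    ps.setString(2, "row-" + i);
                    ps.setBytes(3, payload);
                    ps.addBatch();
                    if (i % 1000 == 999) { // flush every 1000 rows
                        ps.executeBatch();
                        conn.commit();
                    }
                }
                ps.executeBatch(); // flush any remaining rows
                conn.commit();
            }
        }
    }
}
```

Run with a JDBC URL as the first argument to perform the inserts; run with no arguments to print the statement text only.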

Result:
Monitoring the processes using top reveals that the total amount of
memory used slowly increases during the test. When reaching insert
number 40000, or somewhere around that, memory is exhausted, and the
system begins to swap. Each of the postmaster processes seems to use a
constant amount of memory, but the total memory usage increases all the
same.

Questions:
Is this way of testing the performance a bad idea? Actual database usage
will be a mixture of inserts and queries. Maybe the test should behave
like that instead, but I wanted to keep things simple.
Why is the memory usage slowly increasing during the whole test?
Is there a way of keeping PostgreSQL from exhausting memory during the
test? I have looked for some fitting parameters to use, but I am
probably too much of a novice to understand which to choose.

Thanks in advance,
Fredrik Israelsson

Responses

Tom Lane 2006-08-24 12:51:16 Re: Is this way of testing a bad idea?