
Is this way of testing a bad idea?

From: "Fredrik Israelsson" <fredrik(dot)israelsson(at)eu(dot)biotage(dot)com>
To: <pgsql-performance(at)postgresql(dot)org>
Subject: Is this way of testing a bad idea?
Date: 2006-08-24 09:04:28
Message-ID: B6D0C6EF9C7B5C48AB71BDB7334C9FFABD5522@seuppms101.eu.companyb.com
Lists: pgsql-performance
I am evaluating PostgreSQL as a candidate to cooperate with a Java
application.

Performance test set up:
Only one table in the database schema.
The table contains a bytea column plus some other columns.
The PostgreSQL server runs on Linux.

Test execution:
The Java application connects through TCP/IP (JDBC) and performs 50000
inserts.
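For reference, a minimal sketch of the kind of insert loop described above. The connection URL, credentials, and the table and column names are assumptions (the actual schema is not shown in this post), and the batching is one common way to bound client-side memory; this requires the PostgreSQL JDBC driver and a running server to execute:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class InsertTest {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details for this sketch.
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/testdb", "test", "test");
        conn.setAutoCommit(false);

        // Hypothetical table: an integer id plus the bytea column.
        PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO test_table (id, payload) VALUES (?, ?)");
        byte[] payload = new byte[1024];  // dummy bytea content

        for (int i = 0; i < 50000; i++) {
            ps.setInt(1, i);
            ps.setBytes(2, payload);
            ps.addBatch();
            if (i % 1000 == 999) {
                ps.executeBatch();  // flush periodically rather than per row
                conn.commit();
            }
        }
        ps.executeBatch();  // flush any remaining rows
        conn.commit();
        ps.close();
        conn.close();
    }
}
```

Committing every 50000 rows in a single transaction, or autocommitting every row, would both behave differently from the batched variant above, so it is worth stating which pattern the test uses.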

Result:
Monitoring the processes using top reveals that the total amount of
memory used slowly increases during the test. When reaching insert
number 40000, or somewhere around that, memory is exhausted, and the
system begins to swap. Each of the postmaster processes seems to use a
constant amount of memory, but the total memory usage increases all the
same.

Questions:
Is this way of testing the performance a bad idea? Actual database usage
will be a mixture of inserts and queries. Maybe the test should behave
like that instead, but I wanted to keep things simple.
Why is the memory usage slowly increasing during the whole test?
Is there a way of keeping PostgreSQL from exhausting memory during the
test? I have looked for some fitting parameters to use, but I am
probably too much of a novice to understand which to choose.
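As a starting point, these are the memory-related settings in postgresql.conf that are usually worth checking first. The values below are illustrative only, not recommendations, and the accepted units vary by server version (8.x releases of that era took shared_buffers in 8 kB pages and work_mem in kB):

shared_buffers = 10000        # shared buffer cache, allocated once at startup
work_mem = 4096               # per sort/hash operation, per backend
maintenance_work_mem = 65536  # used by VACUUM, CREATE INDEX, etc.
max_connections = 100         # each backend adds its own overhead

Note that shared_buffers is allocated once and shared by all backends, which can make per-process memory figures in top misleading when summed.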

Thanks in advance,
Fredrik Israelsson


pgsql-performance by date

Next: From: Tom Lane, Date: 2006-08-24 12:51:16
Subject: Re: Is this way of testing a bad idea?
Previous: From: Jason Minion, Date: 2006-08-24 05:30:56
Subject: Re: [PERFORM] Query tuning
