I am evaluating PostgreSQL as a candidate to cooperate with a Java application.
Performance test setup:
Only one table in the database schema.
The table contains a bytea column plus some other columns.
The PostgreSQL server runs on Linux.
The Java application connects through TCP/IP (JDBC) and performs 50000 inserts.
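The insert loop is essentially the standard JDBC batch pattern. The sketch below is illustrative only: the table name "testdata", its columns, and the batch size are hypothetical, and the actual JDBC calls are shown as comments so the sketch compiles and runs without a live PostgreSQL server.

```java
// Sketch of a batched-insert loop for the test. Table "testdata" and
// batchSize are hypothetical placeholders; the real JDBC (java.sql)
// calls are commented out so this runs standalone.
public class InsertLoopSketch {

    // Number of full batches (and commits) for a given total and batch size.
    static int commitCount(int totalInserts, int batchSize) {
        return totalInserts / batchSize;
    }

    public static void main(String[] args) {
        final int totalInserts = 50000; // as in the test run
        final int batchSize = 1000;     // hypothetical tuning knob

        String sql = "INSERT INTO testdata (id, payload) VALUES (?, ?)";
        // Connection con = DriverManager.getConnection(
        //         "jdbc:postgresql://host/db", "user", "password");
        // con.setAutoCommit(false); // avoid one transaction per insert
        // PreparedStatement ps = con.prepareStatement(sql);

        for (int i = 1; i <= totalInserts; i++) {
            // ps.setInt(1, i);
            // ps.setBytes(2, payload); // the bytea column
            // ps.addBatch();
            if (i % batchSize == 0) {
                // ps.executeBatch();
                // con.commit();
            }
        }
        // con.commit(); // flush any partial final batch

        System.out.println("batches committed: "
                + commitCount(totalInserts, batchSize));
    }
}
```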
Monitoring the processes using top reveals that the total amount of
memory used slowly increases during the test. When reaching insert
number 40000, or somewhere around that, memory is exhausted and the
system begins to swap. Each of the postmaster processes seems to use a
constant amount of memory, but the total memory usage increases all the time.
Is this way of testing the performance a bad idea? Actual database usage
will be a mixture of inserts and queries. Maybe the test should behave
like that instead, but I wanted to keep things simple.
Why is the memory usage slowly increasing during the whole test?
Is there a way of keeping PostgreSQL from exhausting memory during the
test? I have looked for some fitting parameters to use, but I am
probably too much of a novice to understand which to choose.
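The memory-related parameters I have found so far live in postgresql.conf; the values shown below are only placeholders, not settings I have tested:

```
# postgresql.conf -- memory-related parameters (placeholder values,
# not tested settings for this workload)
shared_buffers = 32MB        # shared cache; appears in each backend's size in top
work_mem = 4MB               # per sort/hash operation, per backend
maintenance_work_mem = 64MB  # used by VACUUM, CREATE INDEX
```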
Thanks in advance,
pgsql-performance by date

Next: From: Tom Lane, Date: 2006-08-24 12:51:16, Subject: Re: Is this way of testing a bad idea?
Previous: From: Jason Minion, Date: 2006-08-24 05:30:56, Subject: Re: [PERFORM] Query tuning