Re: Parallel queries for a web-application |performance testing

From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Balkrishna Sharma" <b_ki(at)hotmail(dot)com>, <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Parallel queries for a web-application |performance testing
Date: 2010-06-16 21:19:06
Message-ID: 4C18F97A02000025000324CD@gw.wicourts.gov
Lists: pgsql-performance

Balkrishna Sharma <b_ki(at)hotmail(dot)com> wrote:

> I wish to do performance testing of 1000 simultaneous read/write
> to the database.

You should definitely be using a connection pool of some sort. Both
your throughput and response time will be better that way. You'll
want to test with different pool sizes, but I've found that a pool
size which keeps the number of active queries in PostgreSQL
somewhere around (number_of_cores * 2) + effective_spindle_count
tends to be near optimal.
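The heuristic above can be sketched as a one-line calculation; the core and spindle counts below are purely illustrative, not a recommendation from the post:

```python
def recommended_pool_size(cores: int, spindles: int) -> int:
    """Kevin's heuristic: (number_of_cores * 2) + effective_spindle_count."""
    return cores * 2 + spindles

# e.g. an 8-core server with a 4-disk array (illustrative numbers):
print(recommended_pool_size(8, 4))  # prints 20
```

Note that "effective spindle count" is a judgment call on hardware with caching controllers or SSDs, which is part of why testing several pool sizes is still advised.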

> My question is: Am I losing something by firing these queries
> directly off the server, and should I look at firing the queries
> from different IP addresses (as it would happen in a web
> application)?

If you run the client side of your test on the database server, the
CPU time used by the client will probably distort your results. I
would try using one separate machine to generate the requests, but
monitor to make sure that the client machine isn't hitting some
bottleneck (like CPU time). If the client is the limiting factor,
you may need to use more than one client machine. No need to use
1000 different client machines. :-)
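As a toy illustration of driving concurrent load from one client process: the sketch below spins up worker threads that each issue "requests" in a loop. Everything here is invented for illustration; a real test would replace the sleep with an actual query sent through a PostgreSQL driver (e.g. psycopg2, an assumption, not something the post prescribes), and you would watch this process's CPU usage to know when a second client machine is needed.

```python
import threading
import time

def run_load(n_clients: int, duration_s: float) -> int:
    """Toy load generator: n_clients threads issue stub requests
    for duration_s seconds; returns the number completed."""
    completed = []
    lock = threading.Lock()
    deadline = time.monotonic() + duration_s

    def client():
        while time.monotonic() < deadline:
            time.sleep(0.001)  # stand-in for one read/write query
            with lock:
                completed.append(1)

    threads = [threading.Thread(target=client) for _ in range(n_clients)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return len(completed)

# Illustrative run: 10 concurrent "clients" for 0.2 seconds.
print(run_load(n_clients=10, duration_s=0.2))
```

If one such process saturates its machine's CPU before the database does, the measured throughput reflects the client's limit, not the server's, which is the distortion warned about above.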

-Kevin
