Re: Question on pgbench output

From: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
To: David Kerr <dmk(at)mr-paradox(dot)net>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Question on pgbench output
Date: 2009-04-03 21:43:13
Message-ID: dcc563d10904031443p228df80ey12e8ef7c106e2941@mail.gmail.com
Lists: pgsql-performance

On Fri, Apr 3, 2009 at 1:53 PM, David Kerr <dmk(at)mr-paradox(dot)net> wrote:
> Here is my transaction file:
> \setrandom iid 1 50000
> BEGIN;
> SELECT content FROM test WHERE item_id = :iid;
> END;
>
> and then I executed:
> pgbench -c 400 -t 50 -f trans.sql -l
>
> The results have actually surprised me; the database isn't really tuned
> and I'm not working on great hardware. But still I'm getting:
>
> scaling factor: 1
> number of clients: 400
> number of transactions per client: 50
> number of transactions actually processed: 20000/20000
> tps = 51.086001 (including connections establishing)
> tps = 51.395364 (excluding connections establishing)

Not bad. With an average record size of 1.2 MB you're reading ~60 MB
per second (plus overhead) off of your drive(s).
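
For a rough sanity check (assuming the 1.2 MB content column accounts
for essentially the whole row):

    51 transactions/sec * 1.2 MB per SELECT ≈ 61 MB/sec returned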

> So the question is - Can anyone see a flaw in my test so far?
> (considering that i'm just focused on the performance of pulling
> the 1.2M record from the table) and if so any suggestions to further
> nail it down?

You can either get more memory (enough to hold your whole dataset in
RAM), get faster drives and aggregate them with RAID-10, or look into
something like memcached servers, which can cache DB query results for
your app layer.
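
If you want to know how much memory "enough" would be, a quick way to
check (assuming the table really is called test, as in your transaction
file) is:

    SELECT pg_size_pretty(pg_total_relation_size('test'));

That counts the table plus its indexes and TOAST data, which is roughly
what would have to fit in shared_buffers plus the OS cache before the
drives stop being the bottleneck.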
