Re: Question on pgbench output

From: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
To: David Kerr <dmk(at)mr-paradox(dot)net>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Question on pgbench output
Date: 2009-04-03 21:43:13
Message-ID: dcc563d10904031443p228df80ey12e8ef7c106e2941@mail.gmail.com
Lists: pgsql-performance
On Fri, Apr 3, 2009 at 1:53 PM, David Kerr <dmk(at)mr-paradox(dot)net> wrote:
> Here is my transaction file:
> \setrandom iid 1 50000
> BEGIN;
> SELECT content FROM test WHERE item_id = :iid;
> END;
>
> and then i executed:
> pgbench -c 400 -t 50 -f trans.sql -l
>
> The results actually have surprised me, the database isn't really tuned
> and i'm not working on great hardware. But still I'm getting:
>
> scaling factor: 1
> number of clients: 400
> number of transactions per client: 50
> number of transactions actually processed: 20000/20000
> tps = 51.086001 (including connections establishing)
> tps = 51.395364 (excluding connections establishing)

Not bad.  With an average record size of 1.2Meg you're reading ~60 Meg
per second (plus overhead) off of your drive(s).
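The ~60 MB/s figure follows directly from the pgbench output and the stated row size; a quick back-of-the-envelope check (the numbers below are copied from the thread, nothing is measured here):

```python
# Sanity-check the throughput estimate: tps from the pgbench run
# times the average row size gives the raw read rate from disk.
tps = 51.086001      # transactions/sec, including connection setup
row_size_mb = 1.2    # approximate size of the `content` column, per the thread

throughput_mb_s = tps * row_size_mb
print(f"~{throughput_mb_s:.0f} MB/s read (plus overhead)")
```

That works out to roughly 61 MB/s, consistent with the ~60 MB/s estimate above.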

> So the question is - Can anyone see a flaw in my test so far?
> (considering that i'm just focused on the performance of pulling
> the 1.2M record from the table) and if so any suggestions to further
> nail it down?

You can either get more memory (enough to hold your whole dataset in
ram), get faster drives and aggregate them with RAID-10, or look into
something like memcached servers, which can cache db queries for your
app layer.
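The memcached suggestion amounts to a cache-aside pattern at the app layer. A minimal sketch of the idea, where a plain dict stands in for the memcached client and `fetch_from_db()` is a hypothetical placeholder for the SELECT in the test script (neither name comes from the thread):

```python
cache = {}  # stand-in for a memcached client (e.g. pylibmc/pymemcache in practice)

def fetch_from_db(item_id):
    # Placeholder for: SELECT content FROM test WHERE item_id = :iid
    return f"content-for-{item_id}"

def get_content(item_id):
    key = f"test:content:{item_id}"
    value = cache.get(key)              # 1) try the cache first
    if value is None:
        value = fetch_from_db(item_id)  # 2) miss: go to the database
        cache[key] = value              # 3) populate so repeat reads skip the DB
    return value
```

Repeat reads of the same hot item_id then come out of memory instead of pulling 1.2 MB off disk each time.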
