
Re: pgbench could not send data to client: Broken pipe

From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Greg Smith" <greg(at)2ndquadrant(dot)com>
Cc: "David Kerr" <dmk(at)mr-paradox(dot)net>, <pgsql-performance(at)postgresql(dot)org>, "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Subject: Re: pgbench could not send data to client: Broken pipe
Date: 2010-09-09 17:24:17
Lists: pgsql-performance
Greg Smith <greg(at)2ndquadrant(dot)com> wrote:
> Kevin Grittner wrote:
>> Of course, the only way to really know some of these numbers is
>> to test your actual application on the real hardware under
>> realistic load; but sometimes you can get a reasonable
>> approximation from early tests or "gut feel" based on experience
>> with similar applications.
> And that latter part only works if your gut is as accurate as
> Kevin's.  For most people, even a rough direct measurement is much
> more useful than any estimate.
:-)  Indeed, when I talk about "'gut feel' based on experience with
similar applications" I'm thinking of something like, "When I had a
query with the same number of joins against tables about this size
with the same number and types of key columns, metrics showed that
it took n ms and was CPU bound, and this new CPU and RAM hardware
benchmarks twice as fast, so I'll ballpark this at 2/3 the runtime
as a gut feel, and follow up with measurements as soon as
practical."  That may not have been entirely clear....
> So the incoming query in this not completely contrived case (I
> just picked the numbers to make the math even) takes the same
> amount of time to deliver a result either way.
I'm gonna quibble with you here.  Even if it gets done with the last
request at the same time either way (which discounts the very real
contention and context switch costs), if you release the thundering
herd of requests all at once they will all finish at about the same
time as that last request, while a queue allows a stream of
responses throughout.  Since results start coming back almost
immediately, and stream through evenly, your *average response time*
is nearly cut in half with the queue.  And that's without figuring
the network congestion issues of having all those requests complete
at the same time.
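A toy arithmetic model makes the point concrete (my own sketch, not
from the thread; the request count and service time are made up):

```python
# Toy model: N identical requests, each needing S seconds of service,
# on a server that can do one request's worth of work at a time.
N = 100   # simultaneous requests
S = 0.01  # service time per request, seconds

# Thundering herd: all N share the server via timeslicing, so
# (ignoring context-switch cost) every request finishes at N*S.
herd_avg = N * S

# Queue: requests are served one at a time; the k-th response arrives
# at k*S, so responses stream back evenly throughout the run.
queue_avg = sum(k * S for k in range(1, N + 1)) / N  # = (N + 1) / 2 * S

print(f"herd  average response: {herd_avg:.3f}s")   # 1.000s
print(f"queue average response: {queue_avg:.3f}s")  # 0.505s
```

Both approaches finish the last request at the same time, but the
queue's average response time comes out just over half the herd's,
exactly as described above.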
In my experience, the response-time benefit of reducing your
connection pool to match available resources is more noticeable than
the throughput improvement.  This directly contradicts many people's
intuition, revealing the downside of "gut feel".
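The same sort of back-of-the-envelope model (again my own sketch; the
core count and timings are invented, and it ignores contention
overhead, which would only favor the capped pool further) shows a
capped pool cutting average response time while throughput stays flat:

```python
M, C, S = 100, 4, 0.05  # requests, CPU cores, CPU-seconds per request

# Pool capped at the core count: requests run in back-to-back batches
# of C, and batch k (1-based) completes at k*S.
batches = M // C                   # assumes M divides evenly by C
capped_avg = sum(k * S for k in range(1, batches + 1)) / batches
capped_wall = batches * S          # wall time to finish all M

# Oversized pool: all M requests timeslice across the C cores and
# all finish together at M*S/C.
open_avg = M * S / C
open_wall = M * S / C

print(f"capped pool: avg {capped_avg:.2f}s, "
      f"throughput {M / capped_wall:.0f} req/s")   # avg 0.65s, 80 req/s
print(f"open pool:   avg {open_avg:.2f}s, "
      f"throughput {M / open_wall:.0f} req/s")     # avg 1.25s, 80 req/s
```

Throughput is identical either way; only the average response time
moves, which is why the latency benefit is the one you notice.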

