Re: Proposal of tunable fix for scalability of 8.4

From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>, "Scott Carey" <scott(at)richrelevance(dot)com>, "Jignesh K(dot) Shah" <J(dot)K(dot)Shah(at)Sun(dot)COM>
Subject: Re: Proposal of tunable fix for scalability of 8.4
Date: 2009-03-12 15:13:24
Message-ID: 49B8E044.EE98.0025.0@wicourts.gov
Lists: pgsql-performance

>>> Scott Carey <scott(at)richrelevance(dot)com> wrote:
> "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov> wrote:
>
>> I'm a lot more interested in what's happening between 60 and 180
>> than over 1000, personally. If there was a RAID involved, I'd put
>> it down to better use of the numerous spindles, but when it's all
>> in RAM it makes no sense.
>
> If there is enough lock contention and a common lock case is a short
> lived shared lock, it makes perfect sense. Fewer readers are
> blocked waiting on writers at any given time. Readers can 'cut' in
> line ahead of writers within a certain scope (only up to the number
> waiting at the time a shared lock is at the head of the queue).
> Essentially this clumps up shared and exclusive locks into larger
> streaks, and allows for higher shared lock throughput.
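[Editor's note: the queue behavior Scott describes can be sketched as follows. This is a hypothetical illustration, not PostgreSQL's actual LWLock implementation: when the lock is released and a shared request is at the head of the wait queue, every shared waiter already queued at that moment is granted together, clumping readers into one streak ahead of later writers.]

```python
def wake_batch(wait_queue):
    """Return the waiters granted the lock when it is released.

    wait_queue is a list of 'S' (shared) / 'X' (exclusive) requests in
    arrival order; granted entries are removed from the queue in place.
    """
    if not wait_queue:
        return []
    if wait_queue[0] == 'X':
        # An exclusive waiter at the head runs alone.
        return [wait_queue.pop(0)]
    # Head is a reader: grant every reader queued *right now* -- they
    # "cut in line" ahead of intervening writers, but only up to the
    # number waiting at this moment; later-arriving readers must queue.
    granted = [w for w in wait_queue if w == 'S']
    wait_queue[:] = [w for w in wait_queue if w != 'S']
    return granted

queue = ['S', 'X', 'S', 'S', 'X']
print(wake_batch(queue))  # ['S', 'S', 'S'] -- three readers run together
print(wake_batch(queue))  # ['X'] -- then one writer
print(queue)              # ['X'] -- remaining writer still waits
```

The larger the reader streaks, the higher the shared-lock throughput, which is consistent with the observed scaling under contention.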

You misunderstood me. I wasn't addressing the effects of his change,
but rather the fact that his test shows a linear improvement in TPS up
to 1000 connections for a 64 thread machine which is dealing entirely
with RAM -- no disk access. Where's the bottleneck that allows this
to happen? Without understanding that, his results are meaningless.

-Kevin

  From Date Subject
Next Message Grzegorz Jaśkiewicz 2009-03-12 15:31:28 Re: Proposal of tunable fix for scalability of 8.4
Previous Message Jignesh K. Shah 2009-03-12 14:57:04 Re: Proposal of tunable fix for scalability of 8.4