Re: Getting rid of AtEOXact Buffers (was Re: [Testperf-general]

From: Jan Wieck <JanWieck(at)Yahoo(dot)com>
To: simon(at)2ndquadrant(dot)com
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, josh(at)agliodbs(dot)com, pgsql-hackers(at)postgresql(dot)org, pgsql-performance(at)postgresql(dot)org
Subject: Re: Getting rid of AtEOXact Buffers (was Re: [Testperf-general]
Date: 2004-10-18 20:36:19
Message-ID: 41742943.4050603@Yahoo.com
Lists: pgsql-hackers pgsql-performance

On 10/17/2004 3:40 PM, simon(at)2ndquadrant(dot)com wrote:

> Seeing as I've missed the last N messages... I'll just reply to this
> one, rather than each of them in turn...
>
> Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote on 16.10.2004, 18:54:17:
>> I wrote:
>> > Josh Berkus writes:
>> >> First off, two test runs with OProfile are available at:
>> >> http://khack.osdl.org/stp/298124/
>> >> http://khack.osdl.org/stp/298121/
>>
>> > Hmm. The stuff above 1% in the first of these is
>>
>> > Counted CPU_CLK_UNHALTED events (clocks processor is not halted) with a unit mask of 0x00 (No unit mask) count 100000
>> > samples % app name symbol name
>> > ...
>> > 920369 2.1332 postgres AtEOXact_Buffers
>> > ...
>>
>> > In the second test AtEOXact_Buffers is much lower (down around 0.57
>> > percent) but the other suspects are similar. Since the only difference
>> > in parameters is shared_buffers (36000 vs 9000), it does look like we
>> > are approaching the point where AtEOXact_Buffers is a problem, but so
>> > far it's only a 2% drag.
>
> Yes... as soon as you first mentioned AtEOXact_Buffers, I realised I'd
> seen it near the top of the oprofile results on previous tests.
>
> Although you don't say this, I presume you're acting on the thought that
> a 2% drag would soon become a much larger contention point with more
> users and/or smaller transactions - since these things are highly
> non-linear.
>
>>
>> It occurs to me that given the 8.0 resource manager mechanism, we could
>> in fact dispense with AtEOXact_Buffers, or perhaps better turn it into a
>> no-op unless #ifdef USE_ASSERT_CHECKING. We'd just get rid of the
>> special case for transaction termination in resowner.c and let the
>> resource owner be responsible for releasing locked buffers always. The
>> OSDL results suggest that this won't matter much at the level of 10000
>> or so shared buffers, but for 100000 or more buffers the linear scan in
>> AtEOXact_Buffers is going to become a problem.
>
> If the resource owner is always responsible for releasing locked
> buffers, who releases the locks if the backend crashes? Do we need some
> additional code in bgwriter (or?) to clean up buffer locks?

If the backend crashes, the postmaster (which must assume possibly
corrupted shared memory) restarts the whole lot ... so why bother?
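
For illustration, the assert-only version Tom describes might look
about like this (just a sketch; PrivateRefCount is the per-backend pin
count array in bufmgr.c, the rest is assumed, not actual 8.0 source):

    #ifdef USE_ASSERT_CHECKING
    void
    AtEOXact_Buffers(bool isCommit)
    {
        int     i;

        /*
         * With the resource owner responsible for releasing all buffer
         * pins, every local pin count must already be zero at transaction
         * end.  Keep the O(NBuffers) scan only as a debugging cross-check.
         */
        for (i = 0; i < NBuffers; i++)
            Assert(PrivateRefCount[i] == 0);
    }
    #endif   /* USE_ASSERT_CHECKING */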

>
>>
>> We could also get rid of the linear search in UnlockBuffers(). The only
>> thing it's for anymore is to release a BM_PIN_COUNT_WAITER flag, and
>> since a backend could not be doing more than one of those at a time,
>> we don't really need an array of flags for that, only a single variable.
>> This does not show in the OSDL results, which I presume means that their
>> test case is not exercising transaction aborts; but I think we need to
>> zap both routines to make the world safe for large shared_buffers
>> values. (See also
>> http://archives.postgresql.org/pgsql-performance/2004-10/msg00218.php)
>
> Yes, that's important.
>
>> Any objection to doing this for 8.0?
>>
>
> As you say, if these issues are definitely kicking in at 100000
> shared_buffers - there's a good few people out there with 800MB
> shared_buffers already.
>
> Could I also suggest that we adopt your earlier suggestion of raising
> the bgwriter parameters as a permanent measure - i.e. changing the
> defaults in postgresql.conf. That way, StrategyDirtyBufferList won't
> immediately show itself as a problem when using the default parameter
> set. It would be a shame to remove one obstacle only to leave another
> one following so close behind. [...and that also argues against an
> earlier thought to introduce more fine grained values for the
> bgwriter's parameters, ISTM]
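
For reference, that proposal amounts to shipping postgresql.conf with
more aggressive background writer settings, something along these
lines (8.0 parameter names; the values are purely illustrative, not
tested recommendations):

    bgwriter_delay = 200        # ms between bgwriter rounds
    bgwriter_percent = 10       # % of dirty buffers written per round
    bgwriter_maxpages = 1000    # upper bound on pages per round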

I realized that StrategyDirtyBufferList currently wastes a lot of time by
first scanning over all the buffers that haven't even been hit since its
last call and weren't dirty the last time either (and which therefore sit
at the beginning of the list and can't be dirty now). If we had a way to
give it a smart "point in the list to start scanning" ...
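
Something like the following, perhaps (a hypothetical sketch only; the
real signature and list handling in freelist.c differ, and the helper
names here are made up):

    /*
     * Remember how far into the LRU list the previous scan got before
     * finding the first dirty buffer.  A buffer in front of that point
     * can only become dirty by being hit, which moves it toward the
     * MRU end of the list, so the next scan can start at the hint.
     */
    static int  StrategyScanHint = 0;

    int
    StrategyDirtyBufferList(BufferTag *buffers, int max_buffers)
    {
        int     num_dirty = 0;
        int     i;

        for (i = StrategyScanHint; i < NBuffers; i++)
        {
            if (num_dirty >= max_buffers)
                break;
            if (buffer_is_dirty(i))             /* placeholder test */
            {
                if (num_dirty == 0)
                    StrategyScanHint = i;       /* new start point */
                buffers[num_dirty++] = buffer_tag_of(i);  /* placeholder */
            }
        }
        if (num_dirty == 0)
            StrategyScanHint = 0;               /* nothing dirty; reset */
        return num_dirty;
    }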

>
> Also, what will Vacuum delay do to the O(N) effect of
> FlushRelationBuffers when called by VACUUM? Will the locks be held for
> longer?

Vacuum only naps at the points where it checks for interrupts, and at
that time it isn't supposed to hold any critical locks.
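
That pattern looks roughly like this (simplified from memory; close to,
but not necessarily identical to, the 8.0 vacuum_delay_point() in
vacuum.c):

    void
    vacuum_delay_point(void)
    {
        /* Nap only here, where no buffer locks are held */
        if (VacuumCostActive && !InterruptPending &&
            VacuumCostBalance >= VacuumCostLimit)
        {
            pg_usleep(VacuumCostDelay * 1000L);
            VacuumCostBalance = 0;

            /* we might have received a cancel while sleeping */
            CHECK_FOR_INTERRUPTS();
        }
    }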

Jan

--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#================================================== JanWieck(at)Yahoo(dot)com #
