Re: Speed up Clog Access by increasing CLOG buffers

From: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
To: Simon Riggs <simon(at)2ndquadrant(dot)com>
Cc: Jesper Pedersen <jesper(dot)pedersen(at)redhat(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Speed up Clog Access by increasing CLOG buffers
Date: 2015-11-17 11:48:20
Message-ID: CAA4eK1+pgGLNuumZo6swNZGd1_=Sfve0fuT58JQ-KpYKF4064A@mail.gmail.com
Lists: pgsql-hackers

On Tue, Nov 17, 2015 at 5:04 PM, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:

> On 17 November 2015 at 11:27, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
>
>>> We are trying to speed up real cases, not just benchmarks.
>>>
>>> So +1 for the concept, patch is going in right direction though lets do
>>> the full press-up.
>>>
>>>
>> I have mentioned above the reason for not doing it for sub transactions,
>> if
>> you think it is viable to reserve space in shared memory for this
>> purpose, then
>> I can include the optimization for subtransactions as well.
>>
>
> The number of subxids is unbounded, so as you say, reserving shmem isn't
> viable.
>
> I'm interested in real world cases, so allocating 65 xids per process
> isn't needed, but what we can say is that the optimization shouldn't break
> down abruptly in the presence of a small/reasonable number of
> subtransactions.
>
>
I think in that case what we can do is: if the total number of
subtransactions is less than or equal to 64 (we can detect that via the
overflowed flag in PGXact), then apply this optimisation; otherwise, fall
back to the existing flow to update the transaction status. I think for
that we don't even need to reserve any additional memory. Does that sound
sensible to you?

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
