Re: cost based vacuum (parallel)

From: Dilip Kumar <dilipbalaut(at)gmail(dot)com>
To: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Cc: Masahiko Sawada <masahiko(dot)sawada(at)2ndquadrant(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, Sawada Masahiko <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: cost based vacuum (parallel)
Date: 2019-11-12 09:33:26
Message-ID: CAFiTN-uGuP9nFEP8z32RApahMaBrODB9n+yiM+06K71j83Z2Zg@mail.gmail.com
Lists: pgsql-hackers

On Tue, Nov 12, 2019 at 10:47 AM Dilip Kumar <dilipbalaut(at)gmail(dot)com> wrote:
>
> On Mon, Nov 11, 2019 at 4:23 PM Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> >
> > On Mon, Nov 11, 2019 at 12:59 PM Dilip Kumar <dilipbalaut(at)gmail(dot)com> wrote:
> > >
> > > On Mon, Nov 11, 2019 at 9:43 AM Dilip Kumar <dilipbalaut(at)gmail(dot)com> wrote:
> > > >
> > > > On Fri, Nov 8, 2019 at 11:49 AM Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> > > > >
> > > > >
> > > > > Yeah, I think it is difficult to get the exact balance, but we can try
> > > > > to be as close as possible. We can play with the threshold, and
> > > > > another possibility is to sleep in proportion to the amount of
> > > > > I/O done by the worker.
> > > > I have done another experiment in which I made two more changes on
> > > > top of patch3:
> > > > a) Only reduce the local balance from the total shared balance
> > > > whenever a worker is applying the delay.
> > > > b) Compute the delay based on the local balance.
> > > >
> > > > patch4:
> > > > worker 0 delay=84.130000 total I/O=17931 hit=17891 miss=0 dirty=2
> > > > worker 1 delay=89.230000 total I/O=17931 hit=17891 miss=0 dirty=2
> > > > worker 2 delay=88.680000 total I/O=17931 hit=17891 miss=0 dirty=2
> > > > worker 3 delay=80.790000 total I/O=16378 hit=4318 miss=0 dirty=603
> > > >
> > > > I think with this approach the delay is divided among the workers
> > > > quite well compared to the other approaches.
> > > >
> > > > >
> > ..
> > > I have tested the same with another workload (test file attached).
> > > I can see the same behaviour with this workload as well: with
> > > patch 4 the distribution of the delay is better compared to the
> > > other patches, i.e. workers with more I/O have more delay and
> > > workers with equal I/O have almost equal delay. The only difference
> > > is that the total delay with patch 4 is slightly less than with the
> > > other patches.
> > >
> >
> > I see one problem with the formula you have used in the patch; maybe
> > that is what is causing the total delay to go down.
> >
> > - if (new_balance >= VacuumCostLimit)
> > + VacuumCostBalanceLocal += VacuumCostBalance;
> > + if ((new_balance >= VacuumCostLimit) &&
> > + (VacuumCostBalanceLocal > VacuumCostLimit/(0.5 * nworker)))
> >
> > As per discussion, the second part of the condition should be
> > "VacuumCostBalanceLocal > (0.5) * VacuumCostLimit/nworker". I think
> > you can change this and try again. Also, please try with different
> > values of the threshold (0.3, 0.5, 0.7, etc.).
> >
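
In other words, the intended check compares the worker's local balance
against half of its fair share of the cost limit, whereas the patch
divides the limit by a scaled-down worker count, which makes the
threshold four times larger and so lets workers skip delays they should
take. A minimal sketch of the two conditions side by side (variable
names as in the discussion above, not the exact patch code):

    /* as in the patch: the threshold works out to
     * 2 * VacuumCostLimit / nworker -- too high */
    if (new_balance >= VacuumCostLimit &&
        VacuumCostBalanceLocal > VacuumCostLimit / (0.5 * nworker))
        ...

    /* as intended: delay once this worker holds more than half of
     * its fair share, 0.5 * VacuumCostLimit / nworker */
    if (new_balance >= VacuumCostLimit &&
        VacuumCostBalanceLocal > 0.5 * VacuumCostLimit / nworker)
        ...
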
> I have modified patch4 and ran it with different threshold values,
> but I don't see much difference in the results. In fact, I removed
> the condition for the local balance check completely and the delays
> are still the same. I think this is because with patch4 each worker
> only reduces its own balance and delays in proportion to its local
> balance, so the second condition will not have much impact.
>
> Patch4 (test.sh)
> threshold = 0
> worker 0 delay=82.380000 total io=17931 hit=17891 miss=0 dirty=2
> worker 1 delay=89.370000 total io=17931 hit=17891 miss=0 dirty=2
> worker 2 delay=89.645000 total io=17931 hit=17891 miss=0 dirty=2
> worker 3 delay=79.150000 total io=16378 hit=4318 miss=0 dirty=603
>
> threshold = 0.1
> worker 0 delay=89.295000 total io=17931 hit=17891 miss=0 dirty=2
> worker 1 delay=89.230000 total io=17931 hit=17891 miss=0 dirty=2
> worker 2 delay=89.675000 total io=17931 hit=17891 miss=0 dirty=2
> worker 3 delay=81.840000 total io=16378 hit=4318 miss=0 dirty=603
>
> threshold = 0.3
> worker 0 delay=85.915000 total io=17931 hit=17891 miss=0 dirty=2
> worker 1 delay=85.180000 total io=17931 hit=17891 miss=0 dirty=2
> worker 2 delay=88.760000 total io=17931 hit=17891 miss=0 dirty=2
> worker 3 delay=81.975000 total io=16378 hit=4318 miss=0 dirty=603
>
> threshold = 0.5
> worker 0 delay=81.635000 total io=17931 hit=17891 miss=0 dirty=2
> worker 1 delay=87.490000 total io=17931 hit=17891 miss=0 dirty=2
> worker 2 delay=89.425000 total io=17931 hit=17891 miss=0 dirty=2
> worker 3 delay=82.050000 total io=16378 hit=4318 miss=0 dirty=603
>
> threshold = 0.7
> worker 0 delay=85.185000 total io=17931 hit=17891 miss=0 dirty=2
> worker 1 delay=88.835000 total io=17931 hit=17891 miss=0 dirty=2
> worker 2 delay=86.005000 total io=17931 hit=17891 miss=0 dirty=2
> worker 3 delay=76.160000 total io=16378 hit=4318 miss=0 dirty=603
>
> Patch4 (test1.sh)
> threshold = 0
> worker 0 delay=179.005000 total io=35828 hit=35788 miss=0 dirty=2
> worker 1 delay=179.010000 total io=35828 hit=35788 miss=0 dirty=2
> worker 2 delay=179.010000 total io=35828 hit=35788 miss=0 dirty=2
> worker 3 delay=221.900000 total io=44322 hit=8352 miss=1199 dirty=1199
>
> threshold = 0.1
> worker 0 delay=177.840000 total io=35828 hit=35788 miss=0 dirty=2
> worker 1 delay=179.465000 total io=35828 hit=35788 miss=0 dirty=2
> worker 2 delay=179.255000 total io=35828 hit=35788 miss=0 dirty=2
> worker 3 delay=222.695000 total io=44322 hit=8352 miss=1199 dirty=1199
>
> threshold = 0.3
> worker 0 delay=178.295000 total io=35828 hit=35788 miss=0 dirty=2
> worker 1 delay=178.720000 total io=35828 hit=35788 miss=0 dirty=2
> worker 2 delay=178.270000 total io=35828 hit=35788 miss=0 dirty=2
> worker 3 delay=220.420000 total io=44322 hit=8352 miss=1199 dirty=1199
>
> threshold = 0.5
> worker 0 delay=178.415000 total io=35828 hit=35788 miss=0 dirty=2
> worker 1 delay=178.385000 total io=35828 hit=35788 miss=0 dirty=2
> worker 2 delay=173.805000 total io=35828 hit=35788 miss=0 dirty=2
> worker 3 delay=221.605000 total io=44322 hit=8352 miss=1199 dirty=1199
>
> threshold = 0.7
> worker 0 delay=175.330000 total io=35828 hit=35788 miss=0 dirty=2
> worker 1 delay=177.890000 total io=35828 hit=35788 miss=0 dirty=2
> worker 2 delay=167.540000 total io=35828 hit=35788 miss=0 dirty=2
> worker 3 delay=216.725000 total io=44322 hit=8352 miss=1199 dirty=1199
>
I have revised patch4 so that it doesn't depend upon a fixed number of
workers; instead, the worker count is updated dynamically.
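
To illustrate the approach, here is a rough sketch of the delay logic
(simplified; VacuumSharedCostBalance and VacuumActiveNWorkers are
illustrative stand-ins for the shared-memory counters, and the actual
code in the attached patch may differ):

    #include "postgres.h"
    #include "port/atomics.h"

    /* illustrative declarations; in the patch these live elsewhere */
    extern pg_atomic_uint32 *VacuumSharedCostBalance;
    extern pg_atomic_uint32 *VacuumActiveNWorkers;
    extern int      VacuumCostBalance;
    extern int      VacuumCostBalanceLocal;
    extern int      VacuumCostLimit;
    extern double   VacuumCostDelay;

    static void
    vacuum_delay_point_sketch(void)
    {
        double      msec = 0;
        uint32      new_balance;
        uint32      nworkers;

        /* add this worker's recent cost to the shared balance */
        new_balance = pg_atomic_add_fetch_u32(VacuumSharedCostBalance,
                                              VacuumCostBalance);

        /* remember how much this worker itself has contributed */
        VacuumCostBalanceLocal += VacuumCostBalance;
        VacuumCostBalance = 0;

        /* the worker count is read dynamically, not fixed at start */
        nworkers = pg_atomic_read_u32(VacuumActiveNWorkers);
        if (nworkers == 0)
            return;             /* no active workers; nothing to balance */

        if (new_balance >= VacuumCostLimit &&
            VacuumCostBalanceLocal > 0.5 * VacuumCostLimit / nworkers)
        {
            /* (a) subtract only the local balance from the shared one */
            pg_atomic_sub_fetch_u32(VacuumSharedCostBalance,
                                    VacuumCostBalanceLocal);

            /* (b) sleep in proportion to this worker's own I/O */
            msec = VacuumCostDelay *
                ((double) VacuumCostBalanceLocal / VacuumCostLimit);
            VacuumCostBalanceLocal = 0;
        }

        if (msec > 0)
            pg_usleep((long) (msec * 1000));
    }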

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com

Attachment: shared_costing_plus_patch4_v1.patch (application/octet-stream, 6.1 KB)
