From: Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>
To: Simon Riggs <simon(at)2ndquadrant(dot)com>
Cc: Pavan Deolasee <pavan(dot)deolasee(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Claudio Freire <klaussfreire(at)gmail(dot)com>, Greg Stark <stark(at)mit(dot)edu>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, PostgreSQL-Dev <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Vacuum: allow usage of more than 1GB of work mem
Date: 2016-09-15 15:19:05
Message-ID: CAD21AoC+Ooju5stkEBCvF5xbOZ=wa1kk-CbYr=KyE-ruwgxpcA@mail.gmail.com
Lists: pgsql-hackers
On Thu, Sep 15, 2016 at 2:40 AM, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:
> On 14 September 2016 at 11:19, Pavan Deolasee <pavan(dot)deolasee(at)gmail(dot)com> wrote:
>
>>> In
>>> theory we could even start with the list of TIDs and switch to the
>>> bitmap if the TID list becomes larger than the bitmap would have been,
>>> but I don't know if it's worth the effort.
>>>
>>
>> Yes, that works too. Or may be even better because we already know the
>> bitmap size requirements, definitely for the tuples collected so far. We
>> might need to maintain some more stats to further optimise the
>> representation, but that seems like unnecessary detailing at this point.
>
> That sounds best to me... build the simple representation, but as we
> do maintain stats to show to what extent that set of tuples is
> compressible.
>
> When we hit the limit on memory we can then selectively compress
> chunks to stay within memory, starting with the most compressible
> chunks.
>
> I think we should use the chunking approach Robert suggests, though
> mainly because that allows us to consider how parallel VACUUM should
> work - writing the chunks to shmem. That would also allow us to apply
> a single global limit for vacuum memory rather than an allocation per
> VACUUM.
> We can then scan multiple indexes at once in parallel, all accessing
> the shmem data structure.
>
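For concreteness: a flat TID array costs 6 bytes per dead tuple, while a per-page bitmap costs a fixed MaxHeapTuplesPerPage bits per heap page, so the break-even point the quoted mail describes is easy to compute. A minimal sketch of that switch decision, assuming 8 kB pages; the helper names are hypothetical, not from any patch:

```c
#include <stdbool.h>
#include <stddef.h>

/* Constants written out for illustration. */
#define TID_BYTES            6    /* sizeof(ItemPointerData) */
#define MAX_TUPLES_PER_PAGE  291  /* MaxHeapTuplesPerPage at 8 kB BLCKSZ */

/* Bytes for a per-page bitmap covering npages heap pages. */
static size_t
bitmap_bytes(size_t npages)
{
    return npages * ((MAX_TUPLES_PER_PAGE + 7) / 8);
}

/* Bytes for a flat TID array holding ntids dead tuples. */
static size_t
tid_array_bytes(size_t ntids)
{
    return ntids * TID_BYTES;
}

/*
 * The switch described above: start with the flat array, and convert
 * to the bitmap once the array would be the larger of the two.
 */
static bool
should_switch_to_bitmap(size_t ntids, size_t npages)
{
    return tid_array_bytes(ntids) > bitmap_bytes(npages);
}
```

With these constants a bitmap costs 37 bytes per page, so the flat array stays smaller as long as fewer than roughly six tuples per page are dead.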
Yeah, the chunking approach Robert suggested seems like a good idea,
but with parallel vacuum in mind I think the implementation would
become more complicated.
IMO it's better for each of the multiple processes to simply allocate
memory space for itself than for a single process to allocate a huge
memory space in a complicated way.
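A minimal sketch of that simpler per-process scheme, assuming the overall memory budget is just divided evenly among workers; all names here are hypothetical:

```c
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

#define TID_BYTES 6             /* sizeof(ItemPointerData) */

/*
 * Each parallel vacuum worker takes an even share of the budget and
 * appends dead TIDs to its own flat, private array; there is no
 * shared or compressed structure to coordinate over.
 */
typedef struct WorkerDeadTids
{
    char   *buf;        /* worker-private TID storage */
    size_t  cap_bytes;  /* this worker's share of the budget */
    size_t  used_bytes;
} WorkerDeadTids;

static bool
worker_init(WorkerDeadTids *w, size_t total_budget, int nworkers)
{
    w->cap_bytes = total_budget / (size_t) nworkers;
    w->used_bytes = 0;
    w->buf = malloc(w->cap_bytes);
    return w->buf != NULL;
}

/* Returns false when this worker's share is full. */
static bool
worker_add_tid(WorkerDeadTids *w, const char tid[TID_BYTES])
{
    if (w->used_bytes + TID_BYTES > w->cap_bytes)
        return false;
    memcpy(w->buf + w->used_bytes, tid, TID_BYTES);
    w->used_bytes += TID_BYTES;
    return true;
}
```

When worker_add_tid() returns false, that worker has hit its own limit and can proceed to the index-cleanup pass independently of the others.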
Regards,
--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center