Re: shared memory stats: high level design decisions: consistency, dropping

From: Hannu Krosing <hannuk(at)google(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>, Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>
Subject: Re: shared memory stats: high level design decisions: consistency, dropping
Date: 2021-03-20 00:31:13
Message-ID: CAMT0RQQbu9UdBkugBXarfhtEmxeMwTUsGTUNybi0j=JG_EbTMg@mail.gmail.com

On Sat, Mar 20, 2021 at 1:21 AM Andres Freund <andres(at)anarazel(dot)de> wrote:
>
> Hi,
>
> On 2021-03-20 01:16:31 +0100, Hannu Krosing wrote:
> > > But now we could instead schedule stats to be removed at commit
> > > time. That's not trivial of course, as we'd need to handle cases where
> > > the commit fails after the commit record, but before processing the
> > > dropped stats.
> >
> > We likely can not remove them at commit time, but only after the
> > oldest open snapshot moves past that commit ?
>
> I don't see why? A dropped table is dropped, and cannot be accessed
> anymore. Snapshots don't play a role here - the underlying data is gone
> (minus a placeholder file to avoid reusing the oid, until the next
> checkpoint). If you run a vacuum on some unrelated table in the same
> database, the stats for a dropped table will already be removed long
> before there's no transaction that could theoretically open the table.
>
> Note that table level locking would prevent a table from being dropped
> if a long-running transaction has already accessed it.

Yeah, just checked. DROP TABLE waits until the reading transaction finishes.

>
> > Would an approach where we keep stats in a structure logically similar
> > to MVCC we use for normal tables be completely unfeasible ?
>
> Yes, pretty unfeasible. Stats should work on standbys too...

I did not mean actually using MVCC and real transaction ids, but
rather a similar approach, where (potentially) different stats rows
are kept for each backend.

This of course is only a win in case multiple backends can use the
same stats row. Otherwise it is easier to copy the backend's version
into backend-local memory, along the lines of the sketch below.
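
To illustrate, a minimal sketch of that copy-out approach, in plain C
with a pthread mutex standing in for an LWLock; the type and function
names (TableStats, SharedStatsEntry, stats_snapshot) are hypothetical:

#include <pthread.h>
#include <stdint.h>

typedef struct TableStats
{
    uint64_t tuples_inserted;
    uint64_t tuples_deleted;
} TableStats;

typedef struct SharedStatsEntry
{
    pthread_mutex_t lock;   /* stand-in for an LWLock */
    TableStats      stats;  /* the shared row */
} SharedStatsEntry;

/* Copy one entry into backend-local memory as a consistent snapshot. */
static void
stats_snapshot(SharedStatsEntry *shared, TableStats *local)
{
    pthread_mutex_lock(&shared->lock);
    *local = shared->stats; /* all fields copied under one lock */
    pthread_mutex_unlock(&shared->lock);
}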

But I myself do not see any problem with stats rows changing all the time.

The only worry would be parts of the same row being read out of sync.
That can of course be solved by locking, but for a large number of
backends with tiny transactions the locking itself could become a
problem. Here alternating between two or more versions of each row
could help, and then it also starts to make sense to keep the copies
in shared memory; a sketch of that follows.
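
To make the alternating-versions idea concrete, here is a seqlock-style
sketch (plain C again, hypothetical names, and assuming a single writer
per entry): the writer fills the slot readers are not currently using
and then bumps an atomic version counter; readers copy the slot the
counter selects and retry if the counter moved underneath them, so
neither side ever blocks:

#include <stdatomic.h>
#include <stdint.h>

typedef struct TableStats
{
    uint64_t tuples_inserted;
    uint64_t tuples_deleted;
} TableStats;

typedef struct StatsEntry
{
    _Atomic uint64_t version;   /* low bit selects the active slot */
    TableStats       slots[2];
} StatsEntry;

/* Single writer: fill the inactive slot, then flip the version. */
static void
stats_update(StatsEntry *e, const TableStats *newval)
{
    uint64_t v = atomic_load_explicit(&e->version, memory_order_relaxed);

    e->slots[(v + 1) & 1] = *newval;
    atomic_store_explicit(&e->version, v + 1, memory_order_release);
}

/* Reader: copy the active slot; retry if a flip raced with the copy.
 * (A production version would need more care: the slot copy is
 * formally a data race in C11 terms, as in any seqlock.) */
static TableStats
stats_read(StatsEntry *e)
{
    TableStats copy;
    uint64_t   v;

    do
    {
        v = atomic_load_explicit(&e->version, memory_order_acquire);
        copy = e->slots[v & 1];
        atomic_thread_fence(memory_order_acquire);
    } while (atomic_load_explicit(&e->version, memory_order_relaxed) != v);

    return copy;
}

With only two slots a fast writer can lap a slow reader and force
retries, so in practice a third slot or a bounded spin before falling
back to a lock might be wanted.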

> Regards,
>
> Andres
