Re: shared-memory based stats collector

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>
Cc: "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: shared-memory based stats collector
Date: 2018-07-02 18:25:58
Message-ID: CA+TgmoYQhr30eAcgJCi1v0FhA+3RP1FZVnXqSTLe=6fHy9e5oA@mail.gmail.com
Lists: pgsql-hackers

On Fri, Jun 29, 2018 at 4:34 AM, Kyotaro HORIGUCHI
<horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp> wrote:
> Nowadays PostgreSQL has a dynamic shared hash table (dshash), so we
> can use it as the main storage for statistics and share the data
> without much stress.
>
> A previously posted PoC tried to use a "locally copied" dshash, but
> that didn't look good, so I steered in a different direction.
>
> With this patch, dshash can create a local copy based on dynahash.

Copying the whole hash table kind of sucks, partly because of the
time it will take to copy it, but also because it means that memory
usage is still O(nbackends * ntables). Without looking at the patch,
I'm guessing that you're doing that because we need a way to show each
transaction a consistent snapshot of the data, and I admit that I
don't see another obvious way to tackle that problem. Still, it would
be nice if we had a better idea.
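
To make the shape of that concern concrete, here is a minimal sketch of
what the local copy amounts to: scan the shared dshash and materialize
every entry into a backend-local dynahash. The PgStatEntry layout and
the dshash_seq_* scan functions below are assumptions based on the
patch description (neither is in core as of this thread), but the
structure of the copy, and why it costs O(ntables) work and memory per
backend, should be clear:

#include "postgres.h"
#include "lib/dshash.h"
#include "utils/hsearch.h"

/* Hypothetical per-table stats entry stored in the shared dshash. */
typedef struct PgStatEntry
{
	Oid			tableid;		/* hash key */
	int64		tuples_inserted;
	int64		tuples_updated;
} PgStatEntry;

/*
 * Build a backend-local snapshot of the shared stats by copying every
 * dshash entry into a local dynahash.  This is the O(ntables)-per-backend
 * copy being discussed above.
 */
static HTAB *
snapshot_stats(dshash_table *shared_stats)
{
	HASHCTL		ctl;
	HTAB	   *local;
	dshash_seq_status scan;
	PgStatEntry *shent;

	memset(&ctl, 0, sizeof(ctl));
	ctl.keysize = sizeof(Oid);
	ctl.entrysize = sizeof(PgStatEntry);
	local = hash_create("local stats snapshot", 256, &ctl,
						HASH_ELEM | HASH_BLOBS);

	/* dshash_seq_* is the scan interface the patch would have to provide */
	dshash_seq_init(&scan, shared_stats, false);
	while ((shent = dshash_seq_next(&scan)) != NULL)
	{
		bool		found;
		PgStatEntry *loent;

		loent = hash_search(local, &shent->tableid, HASH_ENTER, &found);
		memcpy(loent, shent, sizeof(PgStatEntry));
	}
	dshash_seq_term(&scan);

	return local;
}

Each backend that wants a consistent view would pay that copy for its
transaction, which is where the O(nbackends * ntables) memory figure
comes from.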

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
