Re: Postgres is not able to handle more than 4k tables!?

From: Laurenz Albe <laurenz(dot)albe(at)cybertec(dot)at>
To: Stephen Frost <sfrost(at)snowman(dot)net>, Konstantin Knizhnik <k(dot)knizhnik(at)postgrespro(dot)ru>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Postgres is not able to handle more than 4k tables!?
Date: 2020-07-10 07:24:25
Message-ID: e36261772d03026c6adb24ea3c864ff267cbadcf.camel@cybertec.at
Lists: pgsql-hackers

On Thu, 2020-07-09 at 12:47 -0400, Stephen Frost wrote:
> I realize this is likely to go over like a lead balloon, but the churn
> in pg_class from updating reltuples/relpages has never seemed all that
> great to me when just about everything else is so rarely changed, and
> only through some user DDL action- and I agree that it seems like those
> particular columns are more 'statistics' type of info and less info
> about the definition of the relation. Other columns that do get changed
> regularly are relfrozenxid and relminmxid. I wonder if it's possible to
> move all of those elsewhere- perhaps some to the statistics tables as
> you seem to be alluding to, and the others to $somewhereelse that is
> dedicated to tracking that information which VACUUM is primarily
> concerned with.
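
For reference, those are exactly the pg_class columns that change outside of any user DDL; a minimal query to watch them for a single relation (the table name is purely illustrative):

SELECT relname, reltuples, relpages, relfrozenxid, relminmxid
FROM pg_class
WHERE relname = 'my_table';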

Perhaps we could create pg_class with a fillfactor less than 100
so that we get HOT updates there.
That would be less invasive.
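
For illustration, a minimal sketch of what that could look like on an existing cluster (requires superuser; allow_system_table_mods has been superuser-settable since v12, the value 90 is an arbitrary example, and the new fillfactor only applies to pages filled after the change):

SET allow_system_table_mods = on;
ALTER TABLE pg_class SET (fillfactor = 90);

-- whether subsequent updates are HOT can be observed in the statistics:
SELECT n_tup_upd, n_tup_hot_upd
FROM pg_stat_all_tables
WHERE relname = 'pg_class';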

Yours,
Laurenz Albe
