Re: Freeze avoidance of very large table.

From: Andres Freund <andres(at)anarazel(dot)de>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Bruce Momjian <bruce(at)momjian(dot)us>, Sawada Masahiko <sawada(dot)mshk(at)gmail(dot)com>, Greg Stark <stark(at)mit(dot)edu>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>, Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
Subject: Re: Freeze avoidance of very large table.
Date: 2015-04-21 20:27:58
Message-ID: 20150421202758.GN14483@alap3.anarazel.de
Lists: pgsql-hackers

On 2015-04-21 16:21:47 -0400, Robert Haas wrote:
> All that having been said, I don't think adding a new fork is a good
> approach. We already have problems pretty commonly where our
> customers complain about running out of inodes. Adding another fork
> for every table would exacerbate that problem considerably.

Really? These days? There are good arguments against another fork
(more fsyncs, more stat calls, more open file handles, more WAL
logging, ...), but the number of inodes itself seems like something any
halfway recent filesystem should handle.
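For context, each relation fork lives in its own file under the database
directory, so every new fork means at least one more file (and inode) per
relation, plus one more per extra 1 GB segment. The snippet below is only a
rough sketch of that naming convention, not PostgreSQL's actual relpath()
code; the OIDs 12345 and 16385 are made-up example values.

    /*
     * Sketch (not PostgreSQL source): each fork of a relation is a
     * separate file, hence a separate inode. The main fork is named
     * after the relfilenode alone; other forks append a suffix.
     */
    #include <stdio.h>

    int main(void)
    {
        const unsigned int relfilenode = 16385;   /* hypothetical table */
        const char *fork_suffix[] = { "", "_fsm", "_vm" };
        const char *fork_name[]   = { "main", "free space map",
                                      "visibility map" };

        for (int i = 0; i < 3; i++)
            printf("base/12345/%u%s\t(%s fork, one file/inode per segment)\n",
                   relfilenode, fork_suffix[i], fork_name[i]);

        return 0;
    }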

Greetings,

Andres Freund
