Re: Optimizing around retained tuples

From: Brad DeJong <Brad(dot)Dejong(at)infor(dot)com>
To: Claudio Freire <klaussfreire(at)gmail(dot)com>, James Parks <james(dot)parks(at)meraki(dot)net>
Cc: postgres performance list <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Optimizing around retained tuples
Date: 2017-03-24 16:00:03
Lists: pgsql-performance

On Tue, Mar 21, 2017 at 4:24 PM, James Parks <james(dot)parks(at)meraki(dot)net> wrote:
> What can I do to keep running long maintenance operations on large
> tables (SELECTing significant fractions of B, DELETEing significant
> fractions of B, running VACUUM FULL on B) without denying other
> Postgresql backends their ability to efficiently query table A?
> Anything is on the table for implementation:
> - moving tables to a different database / cluster / completely different DBMS system
> - designing an extension to tune either sets of queries
> - partitioning tables
> - etc

The PostgreSQL 9.6 old_snapshot_threshold feature may be useful for this situation.

From the patch proposal e-mail: "... Basically, this patch aims to limit bloat
when there are snapshots that are kept registered for prolonged periods. ..."

I think that matches your description.

PGCon 2016 presentation -
CommitFest entry -

On Tue, Mar 21, 2017 at 10:56 PM, Claudio Freire <klaussfreire(at)gmail(dot)com> wrote:
> You're experiencing bloat because the transaction on B is preventing
> the xid horizon from moving forward, thus dead tuples from A cannot be
> reclaimed in case the transaction on B decides to query them.

Setting old_snapshot_threshold to a positive value changes that behavior.

Instead of holding on to the "dead" tuples in A so that the transaction
on B can query them in the future, the tuples are vacuumed away, and the
transaction on B gets a "snapshot too old" error if it tries to read a
page in A where a tuple was vacuumed.
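As a sketch, enabling the feature is a single setting in postgresql.conf; the
threshold value below is purely illustrative, not a recommendation:

```ini
# postgresql.conf (PostgreSQL 9.6+)
# Snapshots older than this threshold may raise "snapshot too old" if they
# try to read a page whose dead tuples have since been vacuumed away.
# Default is -1 (feature disabled); changing it requires a server restart.
old_snapshot_threshold = 60min    # illustrative value only
```

Note that this is a cluster-wide setting, so any long-lived snapshot in the
database (not just the maintenance transaction on B) becomes subject to the
"snapshot too old" error once it crosses the threshold.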

There are also discussions on pgsql-hackers ("pluggable storage" and "UNDO
and in-place update") regarding alternate table formats that might work
better in this situation. But it doesn't look like either of those will
make it into PostgreSQL 10.
