Re: block-level incremental backup

From: Michael Paquier <michael(at)paquier(dot)xyz>
To: Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: block-level incremental backup
Date: 2019-04-11 04:22:28
Message-ID: 20190411042228.GO2728@paquier.xyz
Lists: pgsql-hackers

On Wed, Apr 10, 2019 at 09:42:47PM +0200, Peter Eisentraut wrote:
> That is a great analysis. Seems like block-level is the preferred way
> forward.

Every incremental backup solution I have seen from the community
tends to prefer block-level backups, because of the filtering which
becomes possible based on the LSN in the page header.  Holes in the
middle of a page are also easier to handle, which reduces the size of
an incremental page in the actual backup.  My preference tends toward
a block-level approach if we were to do something in this area,
though I fear that performance will be bad if we begin to scan all
the relation files to fetch the set of blocks modified since a past
LSN.  Hence we need some kind of LSN map so that it is possible to
skip a block or a group of blocks at once (say one LSN every 8/16
blocks for example) for a given relation if the relation is mostly
read-only.
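
For illustration only, here is a minimal sketch of what such an LSN
map could look like, with one LSN tracked per group of 16 blocks.
All the names and the layout are hypothetical, not actual backend
code:

    #include <stdbool.h>
    #include <stdint.h>

    typedef uint64_t XLogRecPtr;     /* LSN, 64-bit as in PostgreSQL */

    #define BLOCKS_PER_GROUP 16

    typedef struct LsnMap
    {
        uint32_t    ngroups;         /* number of block groups */
        XLogRecPtr  group_lsn[];     /* highest page LSN seen per group */
    } LsnMap;

    /* Record that a block was modified at the given LSN. */
    static void
    lsnmap_update(LsnMap *map, uint32_t blkno, XLogRecPtr lsn)
    {
        uint32_t    group = blkno / BLOCKS_PER_GROUP;

        if (lsn > map->group_lsn[group])
            map->group_lsn[group] = lsn;
    }

    /*
     * Can an incremental backup skip this whole group?  True when no
     * block in the group has been touched since the previous backup.
     */
    static bool
    lsnmap_group_unchanged(const LsnMap *map, uint32_t group,
                           XLogRecPtr since_lsn)
    {
        return map->group_lsn[group] <= since_lsn;
    }

With something like lsnmap_group_unchanged(), the backup would not
need to read the 16 underlying blocks at all for a group whose LSN is
older than the LSN of the previous backup, which is where the gain
comes from for mostly read-only relations.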
--
Michael
