From: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
To: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>
Cc: Marco Nenciarini <marco(dot)nenciarini(at)2ndquadrant(dot)it>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [PATCH] Incremental backup: add backup profile to base backup
Date: 2014-08-18 16:33:37
Message-ID: 20140818163337.GA6817@eldon.alvh.no-ip.org
Lists: pgsql-hackers

Heikki Linnakangas wrote:
> On 08/18/2014 08:05 AM, Alvaro Herrera wrote:
> >We already have the FNV checksum implementation in the backend -- can't
> >we use that one for this and avoid messing with MD5?
> >
> >(I don't think we're looking for a cryptographic hash here. Am I wrong?)
>
> Hmm. Any user that can update a table can craft such an update that
> its checksum matches an older backup. That may seem like an onerous
> task: to correctly calculate the checksum of a file in a previous
> backup, you need to know the LSNs and the exact data, including
> deleted data, on every block in the table, and then construct a
> suitable INSERT or UPDATE that modifies the table such that you get
> a collision. But for some tables it could be trivial; you might know
> that a table was bulk-loaded with a particular LSN and there are no
> dead tuples.
What would anybody obtain by doing that? The only benefit is that the
file you so carefully crafted is not included in the next incremental
backup. How is this of any interest?
--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services