From: Stephen Frost <sfrost(at)snowman(dot)net>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Bruce Momjian <bruce(at)momjian(dot)us>, Magnus Hagander <magnus(at)hagander(dot)net>, Noah Misch <noah(at)leadboat(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: where should I stick that backup?
* Robert Haas (robertmhaas(at)gmail(dot)com) wrote:
> On Thu, Apr 9, 2020 at 6:44 PM Bruce Momjian <bruce(at)momjian(dot)us> wrote:
> > Good point, but if there are multiple APIs, it makes shell script
> > flexibility even more useful.
> This is really the key point for me. There are so many existing tools
> that store a file someplace that we really can't ever hope to support
> them all in core, or even to have well-written extensions that support
> them all available on PGXN or wherever. We need to integrate with the
> tools that other people have created, not try to reinvent them all in
> core.
So, this goes to what I was just mentioning to Bruce independently- you
could have made the same argument about FDWs, but it just doesn't
actually hold any water. Sure, some of the FDWs aren't great, but
there's certainly no shortage of them, and the ones that are
particularly important (like postgres_fdw) are well written and in core.
> Now what I understand Stephen to be saying is that a lot of those
> tools actually suck, and I think that's a completely valid point. But
> I also think that it's unwise to decide that such problems are our
> problems rather than problems with those tools. That's a hole with no
> bottom.
I don't really think 'bzip2' sucks as a tool, or that bash does. They
weren't designed or intended to meet the expectations that we have for
data durability though, which is why relying on them for exactly that
ends up being a bad recipe.
> One thing I do think would be realistic would be to invent a set of
> tools that perform certain local filesystem operations in a
> "hardened" way. Maybe a single tool with subcommands and options. So
> you could say, e.g. 'pgfile cp SOURCE TARGET' and it would create a
> temporary file in the target directory, write the contents of the
> source into that file, fsync the file, rename it into place, and do
> more fsyncs to make sure it's all durable in case of a crash. You
> could have a variant of this that instead of using the temporary file
> and rename in place approach, does the thing where you open the target
> file with O_CREAT|O_EXCL, writes the bytes, and then closes and fsyncs
> it. And you could have other things too, like 'pgfile mkdir DIR' to
> create a directory and fsync it for durability. A toolset like this
> would probably help people write better archive commands - it would
> certainly have been an improvement over what we have now, anyway, and it
> could also be used with the feature that I proposed upthread.
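The durable-copy sequence described above (write to a temporary file in the target directory, fsync it, rename it into place, then fsync the directory so the rename itself survives a crash) can be sketched in C roughly as follows. Since 'pgfile' is only a proposal in this thread, `durable_copy` and its error handling are illustrative, not an existing implementation:

```c
/* Sketch of a hardened copy: temp file -> write -> fsync -> rename ->
 * fsync of the containing directory.  Illustrative only; a real tool
 * would need better temp-file naming and error reporting. */
#include <fcntl.h>
#include <libgen.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static int
durable_copy(const char *src_path, const char *dst_path)
{
	char		tmp_path[4096];
	char		dir_path[4096];
	char		buf[8192];
	ssize_t		nread;
	int			srcfd, tmpfd, dirfd;

	/* temporary file in the *target* directory, so rename() is atomic */
	snprintf(tmp_path, sizeof(tmp_path), "%s.tmp", dst_path);

	if ((srcfd = open(src_path, O_RDONLY)) < 0)
		return -1;
	if ((tmpfd = open(tmp_path, O_WRONLY | O_CREAT | O_TRUNC, 0600)) < 0)
	{
		close(srcfd);
		return -1;
	}
	while ((nread = read(srcfd, buf, sizeof(buf))) > 0)
		if (write(tmpfd, buf, nread) != nread)
			goto fail;
	if (nread < 0)
		goto fail;

	/* flush the file contents to stable storage before renaming */
	if (fsync(tmpfd) != 0)
		goto fail;
	close(srcfd);
	close(tmpfd);

	if (rename(tmp_path, dst_path) != 0)
	{
		unlink(tmp_path);
		return -1;
	}

	/* fsync the containing directory so the rename itself is durable */
	strncpy(dir_path, dst_path, sizeof(dir_path) - 1);
	dir_path[sizeof(dir_path) - 1] = '\0';
	if ((dirfd = open(dirname(dir_path), O_RDONLY)) < 0)
		return -1;
	if (fsync(dirfd) != 0)
	{
		close(dirfd);
		return -1;
	}
	close(dirfd);
	return 0;

fail:
	close(srcfd);
	close(tmpfd);
	unlink(tmp_path);
	return -1;
}
```

The O_CREAT|O_EXCL variant mentioned above would differ only in opening dst_path directly with those flags (refusing to overwrite) and skipping the rename step.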
This argument leads in a direction to justify anything as being sensible
to implement using shell scripts. If we're open to writing the shell
level tools that would be needed, we could reimplement all of our
indexes that way, or FDWs, or TDE, or just about anything else.
What we would end up with, though, is more difficulty changing those
interfaces: people will be using those tools, those tools may not get
updated at the same time as PG does, and critical changes may need to be
made in back branches, which we can't really do with these interfaces.
> It is of course not impossible to teach pg_basebackup to do all of
> that stuff internally, but I have a really difficult time imagining us
> ever getting it done. There are just too many possibilities, and new
> ones arise all the time.
I agree that it's certainly a fair bit of work, but it can be
accomplished incrementally and, with a good design, allow for adding in
new options in the future with relative ease. Now is the time to
discuss what that design looks like, think about how we can implement it
in a way that all of the tools we have are able to work together, and
have them all support and be tested together with these different
options.
The concern that there are too many possibilities, with new ones coming
up all the time, could be applied equally to FDWs, but rather than
ending up with a dearth of options and external solutions there, what
we've actually seen is an explosion of options and externally written
FDWs covering a large variety of systems.
> A 'pgfile' utility wouldn't help at all for people who are storing to
> S3 or whatever. They could use 'aws s3' as a target for --pipe-output,
> but if it turns out that said tool is insufficiently robust in terms
> of overwriting files or doing fsyncs or whatever, then they might have
> problems. Now, Stephen or anyone else could choose to provide
> alternative tools with more robust behavior, and that would be great.
> But even if he didn't, people could take their chances with what's
> already out there. To me, that's a good thing. Yeah, maybe they'll do
> dumb things that don't work, but realistically, they can do dumb stuff
> without the proposed option too.
How does this solution give them a good way to do the right thing
though? In a way that will work with large databases and complex
requirements? The answer seems to be "well, everyone will have to write
their own tool to do that", which means that, at best, we're only
providing half of a solution and expecting all of our users to provide
the other half, and to always do so correctly and in a well-written way.
Acknowledging that most users aren't actually going to do that, and will
instead implement half measures that aren't reliable, shouldn't be seen
as an endorsement of this approach.