Re: pg_amcheck contrib application

From: Mark Dilger <mark(dot)dilger(at)enterprisedb(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>, Peter Geoghegan <pg(at)bowt(dot)ie>
Cc: Thomas Munro <thomas(dot)munro(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "Andrey M(dot) Borodin" <x4mmm(at)yandex-team(dot)ru>, Stephen Frost <sfrost(at)snowman(dot)net>, Michael Paquier <michael(at)paquier(dot)xyz>, Amul Sul <sulamul(at)gmail(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: pg_amcheck contrib application
Date: 2021-03-10 16:10:02
Message-ID: BC7728FA-C21D-4E25-A266-E94873637354@enterprisedb.com
Lists: pgsql-hackers


Robert, Peter, in response to your review comments spanning multiple emails:

> On Mar 4, 2021, at 7:29 AM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>
> Most of these changes sound good. I'll go through the whole patch
> again today, or as much of it as I can. But before I do that, I want
> to comment on this point specifically.
>
> On Thu, Mar 4, 2021 at 1:25 AM Mark Dilger <mark(dot)dilger(at)enterprisedb(dot)com> wrote:
>> I think this is fixed up now. There is an interaction with amcheck's verify_heapam(), where that function raises an error if the startblock or endblock arguments are out of bounds for the relation in question. Rather than aborting the entire pg_amcheck run, it avoids passing inappropriate block ranges to verify_heapam() and outputs a warning, so:
>>
>> % pg_amcheck mark.dilger -t foo -t pg_class --progress -v --startblock=35 --endblock=77
>> pg_amcheck: in database "mark.dilger": using amcheck version "1.3" in schema "public"
>> 0/6 relations (0%) 0/55 pages (0%)
>> pg_amcheck: checking table "mark.dilger"."public"."foo" (oid 16385) (10/45 pages)
>> pg_amcheck: warning: ignoring endblock option 77 beyond end of table "mark.dilger"."public"."foo"
>> pg_amcheck: checking btree index "mark.dilger"."public"."foo_idx" (oid 16388) (30/30 pages)
>> pg_amcheck: checking table "mark.dilger"."pg_catalog"."pg_class" (oid 1259) (0/13 pages)
>> pg_amcheck: warning: ignoring startblock option 35 beyond end of table "mark.dilger"."pg_catalog"."pg_class"
>> pg_amcheck: warning: ignoring endblock option 77 beyond end of table "mark.dilger"."pg_catalog"."pg_class"
>> pg_amcheck: checking btree index "mark.dilger"."pg_catalog"."pg_class_relname_nsp_index" (oid 2663) (6/6 pages)
>> pg_amcheck: checking btree index "mark.dilger"."pg_catalog"."pg_class_tblspc_relfilenode_index" (oid 3455) (5/5 pages)
>> pg_amcheck: checking btree index "mark.dilger"."pg_catalog"."pg_class_oid_index" (oid 2662) (4/4 pages)
>> 6/6 relations (100%) 55/55 pages (100%)
>>
>> The way the (x/y pages) is printed takes into account that the [startblock..endblock] range may reduce the number of pages to check (x) to something less than the number of pages in the relation (y), but the reporting is a bit of a lie when the startblock is beyond the end of the table, as it doesn't get passed to verify_heapam and so the number of blocks checked may be more than the zero blocks reported. I think I might need to fix this up tomorrow, but I want to get what I have in this patch set posted tonight, so it's not fixed here. Also, there are multiple ways of addressing this, and I'm having trouble deciding which way is best. I can exclude the relation from being checked at all, or realize earlier that I'm not going to honor the startblock argument and compute the blocks to check correctly. Thoughts?
>
> I think this whole approach is pretty suspect because the number of
> blocks in the relation can increase (by relation extension) or
> decrease (by VACUUM or TRUNCATE) between the time when we query for
> the list of target relations and the time we get around to executing
> any queries against them. I think it's OK to use the number of
> relation pages for progress reporting because progress reporting is
> only approximate anyway,

Fair point.

> but I wouldn't print them out in the progress
> messages,

Removed.

> and I wouldn't try to fix up the startblock and endblock
> arguments on the basis of how long you think that relation is going to
> be.

Yeah, in light of a new day, that seems like a bad idea to me, too. Removed.

> You seem to view the fact that the server reported the error as
> the reason for the problem, but I don't agree. I think having the
> server report the error here is right, and the problem is that the
> error reporting sucked because it was long-winded and didn't
> necessarily tell you which table had the problem.

No, I was thinking about a different usage pattern, but I've answered that already elsewhere on this thread.

> There are a LOT of things that can go wrong when we go try to run
> verify_heapam on a table. The table might have been dropped; in fact,
> on a busy production system, such cases are likely to occur routinely
> if DDL is common, which for many users it is. The system catalog
> entries might be screwed up, so that the relation can't be opened.
> There might be an unreadable page in the relation, either because the
> OS reports an I/O error or something like that, or because checksum
> verification fails. There are various other possibilities. We
> shouldn't view such errors as low-level things that occur only in
> fringe cases; this is a corruption-checking tool, and we should expect
> that running it against messed-up databases will be common. We
> shouldn't try to interpret the errors we get or make any big decisions
> about them, but we should have a clear way of reporting them so that
> the user can decide what to do.

Once again, I think you are right and have removed the objectionable behavior, but....

The --startblock and --endblock options make the most sense when the user is only checking one table, like

pg_amcheck --startblock=17 --endblock=19 --table=my_schema.my_corrupt_table

because the user likely has some knowledge about that table, perhaps from a prior run of pg_amcheck. The --startblock and --endblock arguments are a bit strange when used globally, as relations don't all have the same number of blocks, so

pg_amcheck --startblock=17 --endblock=19 mydb

will very likely emit lots of error messages for tables which don't have blocks in that range. That's not entirely pg_amcheck's fault, as it just did what the user asked, but it also doesn't seem super helpful. I'm not going to do anything about it in this release.

> Just as an experiment, I suggest creating a database with 100 tables
> in it, each with 1 index, and then deleting a single pg_attribute
> entry for 10 of the tables, and then running pg_amcheck. I think you
> will get 20 errors - one for each messed-up table and one for the
> corresponding index. Maybe you'll get errors for the TOAST tables
> checks too, if the tables have TOAST tables, although that seems like
> it should be avoidable. Now, no matter what you do, the tool is going
> to produce a lot of output here, because you have a lot of problems,
> and that's OK. But how understandable is that output, and how concise
> is it? If it says something like:
>
> pg_amcheck: could not check "SCHEMA_NAME"."TABLE_NAME": ERROR: some
> attributes are missing or something
>
> ...and that line is repeated 20 times, maybe with a context or detail
> line for each one or something like that, then you have got a good UI.
> If it's not clear which tables have the problem, you have got a bad
> UI. If it dumps out 300 lines of output instead of 20 or 40, you have
> a UI that is so verbose that usability is going to be somewhat
> impaired, which is why I suggested only showing the query in verbose
> mode.

After running 'make installcheck', if I delete all entries from pg_class where relnamespace = 'pg_toast'::regnamespace and then run 'pg_amcheck regression', I get lines that look like this:

heap relation "regression"."public"."quad_poly_tbl":
ERROR: could not open relation with OID 17177
heap relation "regression"."public"."gin_test_tbl":
ERROR: could not open relation with OID 24793
heap relation "regression"."pg_catalog"."pg_depend":
ERROR: could not open relation with OID 8888
heap relation "regression"."public"."spgist_text_tbl":
ERROR: could not open relation with OID 25624

which seems ok.

If instead I delete pg_attribute entries, as you suggest above, I get rows like this:

heap relation "regression"."regress_rls_schema"."rls_tbl":
ERROR: catalog is missing 1 attribute(s) for relid 26467
heap relation "regression"."regress_rls_schema"."rls_tbl_force":
ERROR: catalog is missing 1 attribute(s) for relid 26474

which also seems ok.

If instead I manually corrupt relation files belonging to the regression database, I get lines that look like this for corrupt heap relations:

relation "regression"."public"."functional_dependencies", block 28, offset 54, attribute 0
attribute 0 with length 4294967295 ends at offset 50 beyond total tuple length 43
relation "regression"."public"."functional_dependencies", block 28, offset 55
multitransaction ID is invalid
relation "regression"."public"."functional_dependencies", block 28, offset 57
multitransaction ID is invalid

and for corrupt btree relations:

btree relation "regression"."public"."tenk1_unique1":
ERROR: high key invariant violated for index "tenk1_unique1"
DETAIL: Index tid=(1,38) points to heap tid=(70,26) page lsn=0/33A96D0.
btree relation "regression"."public"."tenk1_unique2":
ERROR: index tuple size does not equal lp_len in index "tenk1_unique2"
DETAIL: Index tid=(1,35) tuple size=4913 lp_len=16 page lsn=0/33DFD98.
HINT: This could be a torn page problem.
btree relation "regression"."public"."tenk1_thous_tenthous":
ERROR: index tuple size does not equal lp_len in index "tenk1_thous_tenthous"
DETAIL: Index tid=(1,36) tuple size=4402 lp_len=16 page lsn=0/34C0770.
HINT: This could be a torn page problem.

which likewise seems ok.

> BTW, another thing that might be interesting is to call
> PQsetErrorVerbosity(conn, PQERRORS_VERBOSE) in verbose mode. It's
> probably possible to contrive a case where the server error message is
> something generic like "cache lookup failed for relation %u" which
> occurs in a whole bunch of places in the source code, and being able to
> get the file and line number information can be really useful when
> trying to track such things down.

Good idea. I decided to also honor the --quiet flag:

if (opts.verbose)
    PQsetErrorVerbosity(free_slot->connection, PQERRORS_VERBOSE);
else if (opts.quiet)
    PQsetErrorVerbosity(free_slot->connection, PQERRORS_TERSE);

> On Mar 4, 2021, at 2:04 PM, Peter Geoghegan <pg(at)bowt(dot)ie> wrote:
>
> I don't think that the struct AmcheckOptions block fields (e.g.,
> startblock) should be of type 'long' -- that doesn't work well on
> Windows, where 'long' is only 32-bit. To be fair we already do the
> same thing elsewhere, but there is no reason to repeat those mistakes.
> (I'm rather suspicious of 'long' in general.)
>
> I think that you could use BlockNumber + strtoul() without breaking Windows.

Thanks for reviewing!

Good points. I decided to use int64 instead of BlockNumber. The option processing needs to give a sensible error message if the user gives a negative number for the argument, so unsigned types are a bad fit.
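
For illustration, the check has roughly this shape (a minimal sketch with hypothetical names and plain libc calls, not the patch's actual code):

#include <errno.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/*
 * Parse a --startblock/--endblock argument into an int64 so that a
 * negative value can be diagnosed cleanly rather than wrapping around
 * in an unsigned BlockNumber.
 */
static bool
parse_block_arg(const char *optname, const char *arg, int64_t *result)
{
    char       *endptr;
    long long   val;

    errno = 0;
    val = strtoll(arg, &endptr, 10);
    if (errno != 0 || *endptr != '\0')
    {
        fprintf(stderr, "invalid %s argument \"%s\"\n", optname, arg);
        return false;
    }
    if (val < 0 || val > UINT32_MAX)    /* must fit in a BlockNumber */
    {
        fprintf(stderr, "%s must be between 0 and %u\n",
                optname, (unsigned int) UINT32_MAX);
        return false;
    }
    *result = (int64_t) val;
    return true;
}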

> On Thu, Mar 4, 2021 at 10:29 AM Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>> Most of these changes sound good. I'll go through the whole patch
>> again today, or as much of it as I can. But before I do that, I want
>> to comment on this point specifically.
>
> Just a thought - I don't feel strongly about this - but you may want
> to consider storing your list of patterns in an array that gets
> resized as necessary rather than a list. Then the pattern ID would
> just be pattern_ptr - pattern_array, and finding the pattern by ID
> would just be pattern_ptr = &pattern_array[pattern_id]. I don't think
> there's a real efficiency issue here because the list of patterns is
> almost always going to be short, and even if somebody decides to
> provide a very long list of patterns (e.g. by using xargs) it's
> probably still not that big a deal. A sufficiently obstinate user
> running an operating system where argument lists can be extremely long
> could probably make this the dominant cost by providing a gigantic
> number of patterns that don't match anything, but such a person is
> trying to prove a point, rather than accomplish anything useful, so I
> don't care. But, the code might be more elegant the other way.

Done. I was not too motivated by the efficiency argument, but the code to look up patterns is cleaner when the pattern_id is an index into an array than when it is a field in a struct that has to be searched for in a list.
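
A rough sketch of the shape this takes (hypothetical names; the real code uses PostgreSQL's frontend memory routines rather than bare realloc/strdup, and error handling is omitted here):

#include <stdlib.h>
#include <string.h>

typedef struct PatternInfo
{
    char       *pattern;        /* the raw pattern from the command line */
    char       *regex;          /* the derived regex, filled in later */
} PatternInfo;

static PatternInfo *patterns = NULL;
static size_t npatterns = 0;
static size_t maxpatterns = 0;

/* Append a pattern, doubling the array as needed, and return its ID,
 * which is simply its index in the array. */
static size_t
append_pattern(const char *raw)
{
    if (npatterns >= maxpatterns)
    {
        maxpatterns = (maxpatterns == 0) ? 16 : maxpatterns * 2;
        patterns = realloc(patterns, maxpatterns * sizeof(PatternInfo));
    }
    patterns[npatterns].pattern = strdup(raw);
    patterns[npatterns].regex = NULL;
    return npatterns++;
}

Looking up a pattern by ID is then just &patterns[pattern_id], with no list traversal.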

> This patch increases the number of cases where we use ^ to assert that
> exactly one of two things is true from 4 to 5. I think it might be
> better to just write out (a && !b) || (b && !a), but there is some
> precedent for the way you did it so perhaps it's fine.

Your formulation takes longer for me to read and understand (by, perhaps, some milliseconds), but checking what C compilers guarantee to store in

bool a = (i == j);
bool b = (k == l);

I found it hard to be sure that some compiler wouldn't do weird things with that. Two "true" values a and b could pass the (a ^ b) test if they represent "true" in two different bit patterns. I don't really think there is a risk here in practice, but looking up the relevant C standards isn't quick for future readers of this code, so I went with your formulation.
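
That is, the assertions now read along these lines (sketch only):

#include <assert.h>
#include <stdbool.h>

/* Exclusive-or spelled out explicitly rather than assert(a ^ b); this
 * is correct regardless of which bit patterns represent "true". */
static void
assert_exactly_one(bool a, bool b)
{
    assert((a && !b) || (b && !a));
}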

> The name prepare_table_command() is oddly non-parallel with
> verify_heapam_slot_handler(). Seems better to call it either a table
> throughout, or a heapam throughout. Actually I think I would prefer
> "heap" to either of those, but I definitely think we shouldn't switch
> terminology. Note that prepare_btree_command() doesn't have this
> issue, since it matches verify_btree_slot_handler(). On a related
> note, "c.relam = 2" is really a test for is_heap, not is_table. We
> might have other table AMs in the future, but only one of those AMs
> will be called heap, and only one will have OID 2.

Changed to use "heap" in many places where "table" was used previously, and to use "btree" in many places where "index" was used previously. The term "heapam" now only occurs as part of "verify_heapam", a function defined in contrib/amcheck and not changed here.

> You've got some weird round-tripping stuff where you sent literal
> values to the server so that you can turn around and get them back
> from the server. For example, you've got prepare_table_command()
> selecting rel->nspname and rel->relname back from the server as literals,
> which seems silly because we have to already have that information or
> we couldn't ask the server to give it to us ... and if we already have
> it, then why do we need to get it again? The reason it's like this
> seems to be that after calling prepare_table_command(), we use
> ParallelSlotSetHandler() to set verify_heapam_slot_handler() as the
> callback, and we pass sql.data as its context, so we don't have access
> to the RelationInfo object when we're handling the slot result. But
> that's easy to fix: just store the sql as a field inside the
> RelationInfo, and then pass a pointer to the whole RelationInfo to the
> slot handler. Then you don't need to round-trip the table and schema
> names; and you have the values available even if an error happens.

Changed. I was doing that mostly so that people examining the server logs would have something more than the OID in the SQL to suggest which table or index is being checked.
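
Schematically, the arrangement is now along these lines (the field and handler details are illustrative; see the patch for the real definitions):

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

typedef struct RelationInfo
{
    /* ... other fields ... */
    char       *nspname;        /* schema name, known before the query runs */
    char       *relname;        /* relation name, likewise */
    char       *sql;            /* the query submitted for this relation */
} RelationInfo;

/*
 * The slot handler receives the whole RelationInfo, so the schema and
 * relation names are available even when the query fails, and the query
 * string can be freed as soon as its result has been processed.
 */
static bool
verify_heapam_slot_handler(PGresult *res, PGconn *conn, void *context)
{
    RelationInfo *rel = (RelationInfo *) context;

    if (PQresultStatus(res) != PGRES_TUPLES_OK)
        fprintf(stderr, "heap relation \"%s\".\"%s\":\n%s",
                rel->nspname, rel->relname, PQerrorMessage(conn));

    free(rel->sql);             /* see the memory discussion below */
    rel->sql = NULL;
    return true;
}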

> On a somewhat related note, I think it might make sense to have the
> slot handlers try to free memory. It seems hard to make pg_amcheck
> leak enough memory to matter, but I guess it's not entirely
> implausible that someone could be checking let's say 10 million
> relations. Freeing the query strings could probably prevent half a
> GB or so of accumulated memory usage under those circumstances. I
> suppose freeing nspname and relname would save a bit more, but it's
> hardly worth doing since they are a lot shorter and you've got to have
> all that information in memory at once at some point anyway; similarly
> with the RelationInfo structures, which have the further complexity of
> being part of a linked list you might not want to corrupt. But you
> don't need to have every query string in memory at the same time, just
> as many as are running at any one time.

Changed.

> Also, maybe compile_relation_list_one_db() should keep the result set
> around so that you don't need to pstrdup() the nspname and relname in
> the first place. Right now, just before compile_relation_list_one_db()
> calls PQclear() you have two copies of every nspname and relname
> allocated. If you just kept the result sets around forever, the peak
> memory usage would be lower than it is currently. If you really wanted
> to get fancy you could arrange to free each result set when you've
> finished that database, but that seems annoying to code and I'm pretty
> sure it doesn't matter.

Hmm. When compile_relation_list_one_db() is processing the ith database out of N databases, all (nspname,relname) pairs are allocated for databases in [0..i], and additionally the result set for database i is in memory. The result sets for [0..i-1] have already been freed. Keeping around the result sets for all N databases seems more expensive, considering how much stuff is in struct pg_result, if N is large and the relations are spread across the databases rather than clumped together in the last one.

I think your proposal might be a win for some users and a loss for others. Given that it is not a clear win, I don't care to implement it that way, as it takes more effort to remember which object owns which bit of memory.

I have added pfree()s to the handlers to free the nspname and relname when finished. This does little to reduce the peak memory usage, though.

> The CTEs called "include_raw" and "exclude_raw" are used as part
> of the query to construct a list of tables. The regexes are fished
> through there, and the pattern IDs, which makes sense, but the raw
> patterns are also fished through, and I don't see a reason for that.
> We don't seem to need that for anything. The same seems to apply to
> the query used to resolve database patterns.

Changed.

Both queries are changed to no longer have a "pat" column, and the "id" field (renamed as "pattern_id" for clarity) is used instead.
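
For illustration, the query construction now emits something shaped like this, reusing the PatternInfo sketch from above and assuming the regex strings have already been escaped as SQL literals (again, hypothetical code, not the patch's):

#include "pqexpbuffer.h"

/* Build the CTE that feeds patterns to the server: one row per pattern,
 * carrying only the pattern_id and the derived regex, so the raw
 * pattern text no longer round-trips through the server. */
static void
append_pattern_cte(PQExpBuffer sql)
{
    appendPQExpBufferStr(sql,
        "WITH include_pat (pattern_id, rel_regex) AS (\nVALUES");
    for (size_t i = 0; i < npatterns; i++)
        appendPQExpBuffer(sql, "%s\n(%d, %s)",
                          (i > 0) ? "," : "",
                          (int) i,
                          patterns[i].regex);
    appendPQExpBufferStr(sql, "\n)\n");
}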

> I see that most of the queries have now been adjusted to be spread
> across fewer lines, which is good, but please make sure to do that
> everywhere. In particular, I notice that the bt_index_check calls are
> still too spread out.

When running `pg_amcheck --echo`, the queries for a table and index now print as:

SELECT blkno, offnum, attnum, msg FROM "public".verify_heapam(
relation := 33024, on_error_stop := false, check_toast := true, skip := 'none')
SELECT * FROM "public".bt_index_check(index := '33029'::regclass, heapallindexed := false)

That is two lines per heap table, and just one line per btree index.

> On Thu, Mar 4, 2021 at 12:27 PM Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>> More in a bit, need to grab some lunch.
>
> Moving on to the tests, in 003_check.pl, I think it would be slightly
> better if relation_toast were to select ct.oid::regclass and then just
> have the caller use that value directly. We'd certainly want to do
> that if the name could contain any characters that might require
> quoting. Here that's not possible, but I think we might as well use
> the same technique anyway.

I'm now using c.reltoastrelid::regclass, which is basically the same idea.

> I'm not sure how far to go with it, but I think that you might want to
> try to enhance the logging in some of the cases where the TAP tests
> might fail. In particular, if either of these trip in the buildfarm,
> it doesn't seem like it will be too easy to figure out why they
> failed:
>
> + fail('Xid thresholds not as expected');
> + fail('Page layout differs from our expectations');

OK, I've extended these messages with the extra debugging information. I have also changed them to use 'plan skip_all', since what we are really talking about here is the test being unable to properly exercise pg_amcheck, not an actual failure of pg_amcheck to function correctly. This should save us some grief if the test isn't portable to all platforms in the buildfarm, though we'll have to check whether the skip messages are showing up on any farm animals.

> You might want to rephrase the message to incorporate the values that
> triggered the failure, e.g. "datfrozenxid $datfrozenxid is not between
> 3 and $relfrozenxid", "expected (a,b) = (12345678,abcdefg) but got
> ($x,$y)", so that if the buildfarm happens to fail there's a shred of
> hope that we might be able to guess the reason from the message.

Added to the skip_all message.

> You
> could also give some thought to whether there are any tests that can
> be improved in similar ways. Test::More is nice in that when you run a
> test with eq() or like() and it fails it will tell you about the input
> values in the diagnostic, but if you do something like is($x < 4, ...)
> instead of cmp_ok($x, '<', 4, ...) then you lose that. I'm not saying
> you're doing that exact thing, just saying that looking through the
> test code with an eye to finding things where you could output a
> little more info about a potential failure might be a worthwhile
> activity.

I'm mostly using command_checks_all and command_fails_like. The main annoyance is that when a pattern fails to match, you get a rather long error message. I'm not sure that it's lacking information, though.

> If it were me, I would get rid of ROWCOUNT and have a list of
> closures, and then loop over the list and call each one e.g. my
> @corruption = ( sub { ... }, sub { ... }, sub { ... }) or maybe
> something like what I did with @scenario in
> src/bin/pg_verifybackup/t/003_corruption.pl, but this is ultimately a
> style preference and I think the way you actually did it is also
> reasonable, and some people might find it more readable than the other
> way.

Unchanged. I think the closure idea is ok, but I am using the ROWCOUNT constant elsewhere (specifically, when inserting rows into the table) and using a constant for this helps keep the number of rows of data and the number of corruptions synchronized.

> The name int4_fickle_ops is positively delightful and I love having a
> test case like this.

I know you know this already, but for others reading this thread, the test using int4_fickle_ops is testing the kind of index corruption that might happen if you changed the sort order underlying an index, such as by updating collation definitions. It was simpler to not muck around with collations in the test itself, but to achieve the sort order breakage this way.

> On the whole, I think these tests look quite solid. I am a little
> concerned, as you may gather from the comment above, that they will
> not survive contact with the buildfarm, because they will turn out to
> be platform or OS-dependent in some way. However, I can see that
> you've taken steps to avoid such dependencies, and maybe we'll be
> lucky and those will work. Also, while I am suspicious something's
> going to break, I don't know what it's going to be, so I can't suggest
> any method to avoid it. I think we'll just have to keep an eye on the
> buildfarm post-commit and see what crops up.

As I mentioned above, I've changed some failures to 'plan skip_all => reason', so that the buildfarm won't break if the tests aren't portable in ways I'm already anticipating. We'll just have to see whether it breaks in ways I'm not.

> Turning to the documentation, I see that it is documented that a bare
> command-line argument can be a connection string rather than a
> database name. That sounds like a good plan, but when I try
> 'pg_amcheck sslmode=require' it does not work: FATAL: database
> "sslmode=require" does not exist. The argument to -e is also
> documented to be a connection string, but that also seems not to work.
> Some thought might need to be given to what exactly these connection
> options are supposed to mean. Like, do the connection options I set via
> -e apply to all the connections I make, or just the one to the
> maintenance database? How do I set connection options for connections
> to databases whose names aren't specified explicitly but are
> discovered by querying pg_database? Maybe instead of allowing these to
> be a connection string, we should have a separate option that can be
> used just for the purpose of setting connection options that then
> apply to all connections. That seems a little bit oddly unlike other
> tools, but if I want sslmode=verify-ca or something on all my
> connections, there should be an easy way to get it.

I'm not sure where you are getting the '-e' from. That is the short form of --echo, and not what you are likely to want. However, your larger point is valid.

I don't like the idea that pg_amcheck would handle these options in a way that is incompatible with reindexdb or vacuumdb. I think pg_amcheck can have a superset of those tools' options, but it should not have options that are incompatible with those tools' options. That way, if the extra options that pg_amcheck offers become popular, we can add support for them in those other tools. But if the options are incompatible, we'd not be able to do that without breaking backward compatibility of those tools' interfaces, which we wouldn't want to do.

As such, I have solved the problem by reducing the number of dbname arguments you can provide on the command-line to just one. (This does not limit the number of database *patterns* that you can supply.) Those tools only allow one dbname on the command line, so this is not a regression of functionality from what those tools offer. Only the single dbname argument, or single maintenance-db argument, can be a connection string. The database patterns do not support that, nor would it make sense for them to do so.

All of the following should now work:

pg_amcheck --all "port=5555 sslmode=require"

pg_amcheck --maintenance-db="host=myhost port=5555 dbname=mydb sslmode=require" --all

pg_amcheck -d foo -d bar -d baz mydb

pg_amcheck -d foo -d bar -d baz "host=myhost dbname=mydb"

Note that using --all with a connection string is a pg_amcheck extension. It doesn't currently work in reindexdb, which complains.

There is a strange case, `pg_amcheck --maintenance-db="port=5555 dbname=postgres" "port=5432 dbname=regression"`, which doesn't complain, despite there being nothing listening on port 5555. This is because pg_amcheck completely ignores the maintenance-db argument in this instance, but I have not changed this behavior, because reindexdb does the same thing.
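
For the record, one plausible way to decide whether a bare command-line argument should be treated as a connection string rather than a database name is to let libpq try to parse it. This is only a sketch of the idea, not necessarily what the patch does:

#include <stdbool.h>
#include <string.h>
#include <libpq-fe.h>

/* Return true if "arg" looks like a connection string (a URI or a
 * keyword=value list) rather than a plain database name. */
static bool
looks_like_connstring(const char *arg)
{
    PQconninfoOption *opts;
    char       *errmsg = NULL;

    if (strncmp(arg, "postgresql://", 13) == 0 ||
        strncmp(arg, "postgres://", 11) == 0)
        return true;            /* URI prefixes are always connstrings */

    if (strchr(arg, '=') == NULL)
        return false;           /* a bare name has no keyword=value */

    opts = PQconninfoParse(arg, &errmsg);
    if (opts == NULL)
    {
        PQfreemem(errmsg);      /* not parseable as a connection string */
        return false;
    }
    PQconninfoFree(opts);
    return true;
}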

> The documentation makes many references to patterns, but does not
> explain what a pattern is. I see that psql's documentation contains an
> explanation, and pg_dump's documentation links to psql's
> documentation. pg_amcheck should probably link to psql's
> documentation, too.

A prior version of this patch had a reference to that, but no more. Thanks for noticing. I've put it back in. There is some tension here between the desire to keep the docs concise and the desire to explain things better with examples, etc. I'm not sure I've got that balance right, but I'm too close to the project to be the right person to make that call. Does it seem ok?

> In the documentation for -d, you say that "If -a --all is also
> specified, -d --database does not additionally affect which databases
> are checked." I suggest replacing "does not additionally affect which
> databases are checked" with "has no effect."

Changed.

> In two places you say "without regard for" but I think it should be
> "without regard to".

Changed.

> In the documentation for --no-strict-names you use "nor" where I think
> it should say "or".

Changed.

> I kind of wonder whether we need --quiet. It seems like right now it
> only does two things. One is to control complaints about ignoring the
> startblock and endblock options, but I don't agree with that behavior
> anyway. The other is to control whether we complain about unmatched
> patterns, but I think that could just be controlled by --no-strict-names,
> i.e. normally an unmatched pattern results in a complaint and a
> failure, but with --no-strict-names there is neither a complaint nor a
> failure. Having a flag to control whether we get the message
> separately from whether we get the failure doesn't seem helpful.

Hmm. I think that having --quiet plus --no-strict-names suppress the warnings about unmatched patterns has some value.

Also, as discussed above, I now decrease the PGVerbosity to PQERRORS_TERSE, which has additional value, I think.

I don't feel strongly about this, though; if you'd rather --quiet be removed, that's fine too. I'll wait to hear back before changing anything.

> I don't think it's good to say "This is an alias for" in the
> documentation of -i -I -t -T. I suggest instead saying "This is
> similar to".

Changed.

> Instead of "Option BLAH takes precedence over..." I suggest "The BLAH
> option takes precedence over..."

Changed.

Attachment Content-Type Size
v44-0001-Reworking-ParallelSlots-for-mutliple-DB-use.patch application/octet-stream 24.0 KB
v44-0002-Adding-contrib-module-pg_amcheck.patch application/octet-stream 143.9 KB
v44-0003-Extending-PostgresNode-to-test-corruption.patch application/octet-stream 17.0 KB
