From: Fabien COELHO <fabien(dot)coelho(at)mines-paristech(dot)fr>
To: Michael Banck <michael(dot)banck(at)credativ(dot)de>
Cc: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Bernd Helmle <mailings(at)oopsware(dot)de>, Michael Paquier <michael(at)paquier(dot)xyz>, Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Progress reporting for pg_verify_checksums
> New patch attached.
Patch applies cleanly. Compiles, "make check" ok. doc build is also ok.
There are no tests, which is probably fine for such an interactive feature.

Docs look okay to me: clear and to the point.
total_percent = total_size ? (int64) ((current_size / MEGABYTES) * 100 / (total_size / MEGABYTES)) : 0;
The MEGABYTES divisions can be dropped, which both simplifies the
expression and improves precision. ISTM that the percentage could be a
double:

total_percent = total_size ? 100.0 * current_size / total_size : 0.0;
and then just let fprintf do the rounding.
I would not bother rounding percentages above 100% down to 100, because
then you would get

1560/1492 MB (100%, X MB/s)

which is kind of silly. If it goes over, it just means that the initial
estimate was wrong, which might happen if the db is running (other patch
in the queue). I'm fine with over 100% if it is consistent with the MB
ratio next to it.
fprintf could be called once instead of twice by merging the eol into the
format arguments.
Maybe I'd consider inverting the sizeonly boolean so that more is done
when it is true.
I'd still use a more precise than one-second time resolution so that the
speed displayed at the beginning is more realistic. E.g. I got on a
short test, with files probably in the system cache:

188/188 MB (100%, 188 MB/s)

but the reality is that it took 0.126 seconds, and the speed was really
about 1500 MB/s (188 MB / 0.126 s).