"Andy Marden" <amarden(at)usa(dot)net> writes:
> We have a database batch update process running. It normally runs in
> around 6 hours, but it's dealing with a much larger data set after an error
> correction. It's been running for 6 days now and people are getting twitchy
> that it might not finish. Is there any way (accepting that more preparation
> would, in retrospect, have been better) to tell how far we've got? This
> iterates round a cursor and updates individual rows. The trouble is it
> commits once at the end.
> The ideal would be to find a way of doing a dirty read against the table
> that is being updated. Then we'd know how many rows had been processed.
A quick and dirty answer is just to watch the physical file for the
table being updated, and see how fast it's growing.
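For instance (a sketch; 'mytable' is a placeholder, and the paths assume a
stock installation), you can locate the table's data file from pg_class and
then watch its size from the shell:

```sql
-- The on-disk file under $PGDATA/base/<database-oid>/ is named
-- after the table's relfilenode:
SELECT relfilenode, relpages FROM pg_class WHERE relname = 'mytable';
```

```shell
# Each UPDATE writes a new tuple version, so the file grows as rows
# are processed; watch it to estimate progress:
ls -l $PGDATA/base/<database-oid>/<relfilenode>
```

Note that relpages is only updated by VACUUM/ANALYZE, so it shows the
pre-update size to compare against.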
If you're using 7.2 then the contrib/pgstattuple function would let you
get more accurate info (note it will count not-yet-committed tuples as
"dead", which is a tad misleading, but at least it counts 'em).
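Roughly like this (after building and installing contrib/pgstattuple from
the source tree; 'mytable' is again a placeholder, and the exact output
format may vary by version):

```sql
-- Reports tuple counts and space usage for the table; uncommitted
-- tuples from the in-flight transaction show up in the "dead" figures.
SELECT pgstattuple('mytable');
```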
regards, tom lane