
Re: Long update progress

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Andy Marden" <amarden(at)usa(dot)net>
Cc: pgsql-admin(at)postgresql(dot)org, pgsql-general(at)postgresql(dot)org
Subject: Re: Long update progress
Date: 2002-07-19 14:04:36
Lists: pgsql-admin, pgsql-general
"Andy Marden" <amarden(at)usa(dot)net> writes:
> We have a database batch update process running. It runs normally and takes
> around 6 hours. This is dealing with a much larger data set after an error
> correction. It's been running for 6 days now and people are getting twitchy
> that it might not finish. Is there any way (accepting that more preparation
> would, in retrospect, have been better) to tell how far we've got? This
> iterates round a cursor and updates individual rows. The trouble is it
> commits once at the end.

> The ideal would be to find a way of doing a dirty read against the table
> that is being updated. Then we'd know how many rows had been processed.

A quick and dirty answer is just to watch the physical file for the
table being updated, and see how fast it's growing.
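For instance, a sketch of such a watch loop (the paths and OIDs below are
placeholders -- in 7.2 the table's heap file lives at
$PGDATA/base/<database OID>/<relfilenode>, and the relfilenode can be looked
up in pg_class):

```shell
# Print a file's byte size every INTERVAL seconds, COUNT times.
# Point it at the table's heap file, e.g. $PGDATA/base/16384/16385
# (placeholder OIDs -- find the real relfilenode via
#   SELECT relfilenode FROM pg_class WHERE relname = 'mytable';)
watch_growth() {
  file=$1; interval=$2; count=$3
  i=0
  while [ "$i" -lt "$count" ]; do
    printf '%s  %s bytes\n' "$(date +%T)" "$(wc -c < "$file")"
    sleep "$interval"
    i=$((i + 1))
  done
}
# e.g.: watch_growth "$PGDATA/base/16384/16385" 60 10
```

Dividing the growth rate into the expected final size gives a rough ETA,
assuming rows are updated at a steady pace.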

If you're using 7.2 then the contrib/pgstattuple function would let you
get more accurate info (note it will count not-yet-committed tuples as
"dead", which is a tad misleading, but at least it counts 'em).
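A hedged sketch of that check ('mydb' and 'mytable' are placeholder names,
and pgstattuple must first be built from contrib/ and installed into the
database):

```shell
# pgstattuple reports live and dead tuple counts for a relation; during a
# single long uncommitted transaction, the "dead" count tracks how many
# rows have been updated so far.
QUERY="SELECT * FROM pgstattuple('mytable');"
echo "run: psql -d mydb -c \"$QUERY\""
# psql -d mydb -c "$QUERY"    # uncomment to run against a live server
```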

			regards, tom lane
