Greg Stark <gsstark(at)mit(dot)edu> writes:
> It's orthogonal. My point was that I have a bigger problem, but even if I
> address it by switching away from plpgsql, or I guess by using EXECUTE, I
> would still have a problem. I didn't realize you could run analyze in a
> transaction, but even being able to I wouldn't really want to have to do that
> repeatedly during the job.
Why not? Given the sampling behavior that's been in there for a release
or two, ANALYZE is pretty cheap on large tables; certainly much cheaper
than any processing you might be doing that's going to grovel over the
whole table.
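For illustration, a batch job along these lines could re-ANALYZE mid-transaction so that later statements see sane row estimates (the table and column names here are hypothetical, not from the discussion above):

```sql
-- Hypothetical batch job: rebuild a work table, then ANALYZE inside the
-- same transaction.  ANALYZE (without VACUUM) is legal in a transaction
-- block, so later statements get planned against fresh statistics.
BEGIN;
DELETE FROM work_items;                       -- clear out old records
INSERT INTO work_items (id, payload)
    SELECT g, 'item ' || g
    FROM generate_series(1, 100000) AS g;     -- reload
ANALYZE work_items;                           -- refresh stats mid-transaction
-- queries issued from here on are planned with the new estimates
COMMIT;
```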
> Except that the first thing the job does is delete all the old records. This
> is inside a transaction. So an estimate based on the heap size would be off by
> a factor of two by the time the job is done.
Could you use TRUNCATE? I dunno if locking the table is okay for you.
It is transaction safe though.
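A quick sketch of that transaction safety (again with a hypothetical table name): TRUNCATE takes an ACCESS EXCLUSIVE lock, but if the transaction aborts, the old contents reappear.

```sql
BEGIN;
TRUNCATE work_items;    -- fast, but locks out all other access to the table
-- ... reload the table here ...
ROLLBACK;               -- the pre-TRUNCATE contents are restored intact
```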
> With analyze in a transaction I'm not clear what the semantics should be
> though. I suppose it should only count tuples visible to the transaction.
It currently uses SnapshotNow, so would see committed tuples of other
transactions plus uncommitted ones of the present transaction. This is
not exactly the same thing as the transaction's snapshot, but close.
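So in a session like the following hypothetical one, the stats recorded by ANALYZE include the transaction's own pending rows:

```sql
BEGIN;
INSERT INTO work_items (id, payload)
    SELECT g, 'pending ' || g
    FROM generate_series(1, 50000) AS g;      -- not yet committed
ANALYZE work_items;                           -- counts these rows anyway
-- our own transaction sees the updated catalog entry:
SELECT reltuples FROM pg_class WHERE relname = 'work_items';
COMMIT;
```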
> A sudden degradation is much more dangerous. Even if it's rare, a sudden
> degradation means an outage in prime time.
[ shrug ] You can get a sudden degradation with fixed plans, too. All
it takes is an addition of a lot of rows in some table that had been
small when the plan was chosen.
regards, tom lane