On one fine day (Sunday, 23 January 2005, 15:40 -0500), Tom wrote:
> Simon Riggs <simon(at)2ndquadrant(dot)com> writes:
> > Changing the idea slightly might be better: if a row update would cause
> > a block split, then if there is more than one row version then we vacuum
> > the whole block first, then re-attempt the update.
> "Block split"? I think you are confusing tables with indexes.
> Chasing down prior versions of the same row is not very practical
> anyway, since there is no direct way to find them.
> One possibility is, if you tried to insert a row on a given page but
> there's not room, to look through the other rows on the same page to see
> if any are deletable (xmax below the GlobalXmin event horizon). This
> strikes me as a fairly expensive operation though, especially when you
> take into account the need to get rid of their index entries first.
Why is removing index entries essential?
In pg you always have to visit the data page, so finding the wrong tuple
there could just produce the same result as a deleted tuple (which in this
case it actually is). The cleaning of index entries could be left to the
regular vacuum.
Hannu Krosing <hannu(at)tm(dot)ee>
Thread (pgsql-performance): Re: PostgreSQL clustering VS MySQL clustering