From: Zeugswetter Andreas SB <ZeugswetterA(at)wien(dot)spardat(dot)at>
To: "'Tom Lane'" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Hiroshi Inoue <Inoue(at)tpf(dot)co(dot)jp>, pgsql-hackers(at)postgresql(dot)org
Subject: AW: AW: AW: Plans for solving the VACUUM problem
Date: 2001-05-18 15:28:28
Message-ID: 11C1E6749A55D411A9670001FA6879633682D6@sdexcsrv1.f000.d0188.sd.spardat.at
Lists: pgsql-hackers
> There was some discussion of doing that, but it fell down on the little
> problem that in normal index-search cases you *don't* know the heap tid
> you are looking for.
I can't follow you here. It does not matter if you don't know a trailing
part of the key when doing a btree search; the known leading part still lets
you descend directly to the right leaf page.
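To illustrate the point about partial-key searches, here is a toy sketch (plain Python, not PostgreSQL internals): if index entries sort on (key, tid), a probe that knows only the key still lands on the first matching entry, because a shorter tuple with an equal prefix sorts before any full entry.

```python
import bisect

# Toy "index": entries sorted on the full (key, tid) pair.
# The data is illustrative only.
index = [("apple", 17), ("apple", 42), ("apple", 99), ("pear", 3)]

# Probe with only the key part. ("apple",) < ("apple", tid) for any tid,
# so bisect_left finds the first "apple" entry without knowing a tid.
pos = bisect.bisect_left(index, ("apple",))
print(pos)         # 0
print(index[pos])  # ('apple', 17)
```

From that position a scan can walk forward over the duplicates, which is the "keys stick close together" case mentioned below.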
>
> > And in above case, the keys (since identical except xtid) will stick close
> > together, thus caching will be good.
>
> Even without key-collision problems, deleting N tuples out of a total of
> M index entries will require search costs like this:
>
> bulk delete in linear scan way:
>
> O(M) I/O costs (read all the pages)
> O(M log N) CPU costs (lookup each TID in sorted list)
>
> successive index probe way:
>
> O(N log M) I/O costs for probing index
> O(N log M) CPU costs for probing index (key comparisons)
Both of these are really O(N * h), where h is the number of levels in the
index (roughly log M).
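Plugging illustrative numbers into the two cost formulas above shows the trade-off Tom describes: probing wins when N is small relative to M, and the linear scan wins for large N. This sketch assumes unit cost per page read and per key comparison, which is a simplification.

```python
import math

M = 1_000_000  # total index entries (illustrative)

def linear_scan_cost(N):
    # O(M) I/O to read all pages + O(M log N) CPU for sorted-list lookups
    return M + M * math.log2(N)

def index_probe_cost(N):
    # O(N log M) I/O + O(N log M) CPU for N separate index descents
    return 2 * N * math.log2(M)

for N in (100, 10_000, 1_000_000):
    print(f"N={N:>9}: linear={linear_scan_cost(N):>12.0f}  "
          f"probe={index_probe_cost(N):>12.0f}")
```

With these assumptions, probing is far cheaper for N = 100, but the linear scan is cheaper once N approaches M.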
> So, as I said to Hiroshi, this alternative looks to me like a possible
> future refinement, not something we need to do in the first version.
Yes, I thought it might be easier to implement than the index scan :-)
Andreas