Re: Merge algorithms for large numbers of "tapes"

From: Greg Stark <gsstark(at)mit(dot)edu>
To: "Jonah H(dot) Harris" <jonah(dot)harris(at)gmail(dot)com>
Cc: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "Simon Riggs" <simon(at)2ndquadrant(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Merge algorithms for large numbers of "tapes"
Date: 2006-03-08 04:48:55
Message-ID: 8764mpo820.fsf@stark.xeocode.com
Lists: pgsql-hackers

"Jonah H. Harris" <jonah(dot)harris(at)gmail(dot)com> writes:

> On 3/7/06, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> >
> > However, now that we've changed the code to prefer large numbers of tapes,
> > it's not at all clear that Algorithm D is still the right one to use. In
> > particular I'm looking at cascade merge, Algorithm 5.4.3C, which appears
> > to use significantly fewer passes when T is large. Do you want to try
> > that?
>
> Guess we won't really know 'til it can be tested :)
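
For a rough feel for why cascade pulls ahead as the number of tapes grows,
here's a throwaway sketch (standalone, nothing to do with the actual
tuplesort.c code) that just iterates the perfect-distribution recurrences
as I recall them from Knuth 5.4.2 and 5.4.3 and counts how many merge
levels each scheme needs before it can cover a given number of initial
runs. Note it counts levels, not data read and written -- a polyphase
level only touches part of the data, while a cascade level touches
essentially all of it -- so take it as intuition, not as Knuth's full
comparison:

    #include <stdio.h>
    #include <string.h>

    #define NTAPES   20              /* total tapes, T */
    #define NINPUT   (NTAPES - 1)    /* input tapes per merge level */
    #define TARGET   1000000L        /* initial runs we want to cover */

    /* one polyphase level: (a1..ak) -> (a1+a2, a1+a3, ..., a1+ak, a1) */
    static void
    polyphase_level(long a[])
    {
        long    b[NINPUT];
        int     k;

        for (k = 0; k < NINPUT - 1; k++)
            b[k] = a[0] + a[k + 1];
        b[NINPUT - 1] = a[0];
        memcpy(a, b, sizeof(b));
    }

    /* one cascade level: b_k = a1 + a2 + ... + a_(NINPUT-k) */
    static void
    cascade_level(long a[])
    {
        long    b[NINPUT];
        int     k, i;

        for (k = 0; k < NINPUT; k++)
        {
            b[k] = 0;
            for (i = 0; i < NINPUT - k; i++)
                b[k] += a[i];
        }
        memcpy(a, b, sizeof(b));
    }

    /* count levels until the perfect distribution covers TARGET runs */
    static int
    levels_needed(void (*level) (long *))
    {
        long    a[NINPUT];
        long    total = NINPUT;
        int     n = 1;
        int     k;

        for (k = 0; k < NINPUT; k++)
            a[k] = 1;           /* level 1: one run on each input tape */

        while (total < TARGET)
        {
            level(a);
            total = 0;
            for (k = 0; k < NINPUT; k++)
                total += a[k];
            n++;
        }
        return n;
    }

    int
    main(void)
    {
        printf("T = %d tapes, target %ld initial runs\n", NTAPES, TARGET);
        printf("polyphase (5.4.2D) merge levels: %d\n",
               levels_needed(polyphase_level));
        printf("cascade   (5.4.3C) merge levels: %d\n",
               levels_needed(cascade_level));
        return 0;
    }

The interesting knob is NTAPES: the gap in level counts widens as it
grows, which is at least consistent with the "fewer passes when T is
large" observation above.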

It would also be interesting to allow multiple temporary areas to be declared
and to spread tape files across them, ideally keeping input and output tapes
on separate drives.
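
Something like the following, purely as a sketch -- the directory list,
the idea of a declared list of temp areas, and the function here are all
made up for illustration, not the existing fd.c interface -- round-robin
placement of tape files over the declared areas:

    #include <stdio.h>

    /* hypothetical list of declared temp areas, ideally one per drive */
    static const char *temp_areas[] = {
        "/mnt/disk0/pgsql_tmp",
        "/mnt/disk1/pgsql_tmp",
        "/mnt/disk2/pgsql_tmp"
    };
    #define N_TEMP_AREAS (sizeof(temp_areas) / sizeof(temp_areas[0]))

    /*
     * Pick a directory for a given tape number, round robin.  A smarter
     * policy would track which tapes the current merge level reads and
     * which it writes -- polyphase/cascade rotate the tape roles -- and
     * keep the readers and the writer on different areas.
     */
    static void
    tape_file_path(int tapenum, char *buf, size_t buflen)
    {
        snprintf(buf, buflen, "%s/sort_tape.%d",
                 temp_areas[tapenum % N_TEMP_AREAS], tapenum);
    }

    int
    main(void)
    {
        char    path[256];
        int     tapenum;

        for (tapenum = 0; tapenum < 7; tapenum++)
        {
            tape_file_path(tapenum, path, sizeof(path));
            printf("tape %d -> %s\n", tapenum, path);
        }
        return 0;
    }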

--
greg
