Merge algorithms for large numbers of "tapes"

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Simon Riggs <simon(at)2ndquadrant(dot)com>
Cc: pgsql-hackers(at)postgreSQL(dot)org
Subject: Merge algorithms for large numbers of "tapes"
Date: 2006-03-07 23:14:54
Message-ID: 6293.1141773294@sss.pgh.pa.us
Lists: pgsql-hackers

BTW, I was just looking over Knuth's discussion of sorting again, and
realized that there is still something more that could be done within
the existing sort framework. We currently use standard polyphase merge
(his Algorithm 5.4.2D), which IIRC I chose because it was simple and
for relatively small numbers of tapes T it was about as good as anything
else. Knuth spends a great deal of energy on minimizing tape rewind
time, which of course is of no interest to us, and I had supposed that
all of his more-complex algorithms were really only of interest if you
needed to consider rewind time. However, now that we've changed the
code to prefer large numbers of tapes, it's not at all clear that
Algorithm D is still the right one to use. In particular I'm looking at
cascade merge, Algorithm 5.4.3C, which appears to use significantly
fewer passes when T is large. Do you want to try that?
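[For a back-of-envelope feel for the difference, here is a rough sketch
of the "perfect distribution" recurrences from Knuth 5.4.2 (polyphase)
and 5.4.3 (cascade). The function names and the level-counting loop are
illustrative only, not the PostgreSQL implementation; it just counts how
many merge levels each scheme needs before its perfect distribution can
absorb a given number of initial runs on T tapes.]

```python
# Sketch: compare perfect-distribution growth for polyphase (Knuth
# 5.4.2) vs cascade (Knuth 5.4.3) merging with T tapes.  "dist" holds
# the run counts on the T-1 input tapes at a perfect level, kept in
# descending order; each loop iteration is one merge level.

def polyphase_levels(n_runs, T):
    """Levels until the perfect polyphase total reaches n_runs."""
    dist = [1] + [0] * (T - 2)          # level 0: one initial run
    levels = 0
    while sum(dist) < n_runs:
        a1 = dist[0]
        # polyphase recurrence: a_k' = a1 + a_{k+1}, with a_T taken as 0
        dist = [a1 + a for a in dist[1:]] + [a1]
        levels += 1
    return levels

def cascade_levels(n_runs, T):
    """Levels until the perfect cascade total reaches n_runs."""
    dist = [1] + [0] * (T - 2)          # level 0: one initial run
    levels = 0
    while sum(dist) < n_runs:
        # cascade recurrence: a_k' = a_1 + a_2 + ... + a_{T-k}
        dist = [sum(dist[:T - 1 - k]) for k in range(T - 1)]
        levels += 1
    return levels

if __name__ == "__main__":
    # For T = 3 the two schemes coincide (both reduce to Fibonacci
    # distributions); the gap opens up as T grows.
    for T in (3, 6, 30):
        print(T, polyphase_levels(10_000, T), cascade_levels(10_000, T))
```

[For T = 3 both recurrences generate the Fibonacci totals 1, 2, 3, 5,
8, ..., so the level counts agree; for large T the cascade totals grow
much faster per level, which is the "significantly fewer passes"
effect above.]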

regards, tom lane

