Re: Merge algorithms for large numbers of "tapes"

From: Florian Weimer <fw(at)deneb(dot)enyo(dot)de>
To: Greg Stark <gsstark(at)mit(dot)edu>
Cc: "Luke Lonergan" <llonergan(at)greenplum(dot)com>, "Dann Corbit" <DCorbit(at)connx(dot)com>, "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "Jim C(dot) Nasby" <jnasby(at)pervasive(dot)com>, "Simon Riggs" <simon(at)2ndquadrant(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Merge algorithms for large numbers of "tapes"
Date: 2006-03-09 07:20:48
Message-ID: 87bqwgf5in.fsf@mid.deneb.enyo.de
Lists: pgsql-hackers

* Greg Stark:

> That's one thing that gives me pause about the current approach of
> using more tapes. It seems like ideally the user would create a
> temporary work space on each spindle and the database would arrange
> to use no more than that number of tapes. Then each merge operation
> would involve only sequential access for both reads and writes.

And you'd also need to preallocate the files in some way, to avoid
file system fragmentation.
