Re: Parallel tuplesort, partitioning, merging, and the future

From: Claudio Freire <klaussfreire(at)gmail(dot)com>
To: Peter Geoghegan <pg(at)heroku(dot)com>
Cc: Pg Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Subject: Re: Parallel tuplesort, partitioning, merging, and the future
Date: 2016-08-10 19:08:08
Message-ID: CAGTBQpZbv_3mNgxsKtrRk_xCUc5Yj-b=S0XvRVX9pxeCANB_kg@mail.gmail.com
Lists: pgsql-hackers

On Mon, Aug 8, 2016 at 4:44 PM, Peter Geoghegan <pg(at)heroku(dot)com> wrote:
> The basic idea I have in mind is that we create runs in workers in the
> same way that the parallel CREATE INDEX patch does (one output run per
> worker). However, rather than merging in the leader, we use a
> splitting algorithm to determine partition boundaries on-the-fly. The
> logical tape stuff then does a series of binary searches to find those
> exact split points within each worker's "final" tape. Each worker
> reports the boundary points of its original materialized output run in
> shared memory. Then, the leader instructs workers to "redistribute"
> slices of their final runs among each other, by changing the tapeset
> metadata to reflect that each worker has nworker input tapes with
> redrawn offsets into a unified BufFile. Workers immediately begin
> their own private on-the-fly merges.
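
If I'm following that correctly, the flow is roughly the sketch below. To
be clear, this is just my reading of it in C with made-up names
(WorkerRunInfo, read_nth_key, compare_keys, and so on); none of it is the
patch's code or the actual tuplesort.c/logtape.c API. Workers publish the
bounds of their sorted runs in shared memory, the leader binary-searches
each frozen run for the chosen partition boundary keys, and each worker
then merges its slice of every run on the fly:

#include <stddef.h>

#define MAX_WORKERS 64

/* Hypothetical per-worker state published in shared memory. */
typedef struct WorkerRunInfo
{
    size_t  ntuples;                   /* tuples in this worker's sorted run */
    size_t  split_index[MAX_WORKERS];  /* first tuple of each partition */
} WorkerRunInfo;

/*
 * Stand-ins for reading the nth tuple's key from a frozen run and for
 * comparing two keys; both are assumptions for this sketch.
 */
extern const void *read_nth_key(const WorkerRunInfo *run, size_t n);
extern int compare_keys(const void *a, const void *b);

/*
 * Lower-bound binary search within one worker's sorted run: returns the
 * index of the first tuple whose key is >= boundary_key.
 */
static size_t
find_split_index(const WorkerRunInfo *run, const void *boundary_key)
{
    size_t  lo = 0;
    size_t  hi = run->ntuples;

    while (lo < hi)
    {
        size_t  mid = lo + (hi - lo) / 2;

        if (compare_keys(read_nth_key(run, mid), boundary_key) < 0)
            lo = mid + 1;
        else
            hi = mid;
    }
    return lo;
}

/*
 * Leader: given nworkers - 1 boundary keys (the splitting algorithm that
 * picks them is not shown), record where each partition starts in every
 * worker's run.  Worker p would then merge, on the fly, the slice
 * [split_index[p], next split) taken from all nworkers runs.
 */
static void
leader_compute_splits(WorkerRunInfo *runs, int nworkers,
                      const void **boundary_keys)
{
    int     w, p;

    for (w = 0; w < nworkers; w++)
    {
        runs[w].split_index[0] = 0;
        for (p = 1; p < nworkers; p++)
            runs[w].split_index[p] =
                find_split_index(&runs[w], boundary_keys[p - 1]);
    }
}

In the real thing the searches would presumably be over offsets within
each worker's final tape in the unified BufFile rather than over tuple
indexes, but the shape would be the same.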

I think it's a great design, but for that to work, per-worker final tapes
always have to be random-access.

I'm not hugely familiar with the code, but IIUC there's some penalty to
making them random-access, right?
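
For context on what I mean by random-access: the knob I have in mind is
the existing randomAccess argument of the tuplesort_begin_*() functions.
As I understand it, asking for random access means the final output gets
materialized onto a single tape (so it can be re-read or read backwards)
instead of coming out of an on-the-fly final merge, and that extra
write/read pass is the penalty I'm wondering about. Just as an
illustration of the flag (hypothetical wrapper, not patch code):

#include "postgres.h"
#include "miscadmin.h"          /* work_mem */
#include "utils/tuplesort.h"

/*
 * Illustrative wrapper only, not from the patch.  Passing
 * randomAccess = true allows tuplesort_markpos/restorepos, rescans and
 * backward scans of the result, at the cost of materializing the final
 * merge output on a single tape rather than merging on the fly.
 */
static Tuplesortstate *
begin_random_access_sort(TupleDesc tupDesc, int nkeys,
                         AttrNumber *attNums, Oid *sortOperators,
                         Oid *sortCollations, bool *nullsFirstFlags)
{
    return tuplesort_begin_heap(tupDesc, nkeys, attNums,
                                sortOperators, sortCollations,
                                nullsFirstFlags,
                                work_mem,
                                true);  /* randomAccess */
}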
