Re: Streaming I/O, vectored I/O (WIP)

From: Nazir Bilal Yavuz <byavuz81(at)gmail(dot)com>
To: Thomas Munro <thomas(dot)munro(at)gmail(dot)com>
Cc: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>, Melanie Plageman <melanieplageman(at)gmail(dot)com>, Heikki Linnakangas <hlinnaka(at)iki(dot)fi>, Andres Freund <andres(at)anarazel(dot)de>
Subject: Re: Streaming I/O, vectored I/O (WIP)
Date: 2024-02-07 10:54:26
Message-ID: CAN55FZ1eqgHKsL29qkhe7mau4nTfzjxVEbET32tTJL0zGtyEVw@mail.gmail.com
Lists: pgsql-hackers

Hi,

Thanks for working on this!

On Wed, 10 Jan 2024 at 07:14, Thomas Munro <thomas(dot)munro(at)gmail(dot)com> wrote:

> Thanks! I committed the patches up as far as smgr.c before the
> holidays. The next thing to commit would be the bufmgr.c one for
> vectored ReadBuffer(), v5-0001. Here's my response to your review of
> that, which triggered quite a few changes.
>
> See also new version of the streaming_read.c patch, with change list
> at end. (I'll talk about v5-0002, the SMgrRelation lifetime one, over
> on the separate thread about that where Heikki posted a better
> version.)

I have a few comments and questions.

0001-Provide-vectored-variant-of-ReadBuffer:

- Do we still need to pass the hit variable to ReadBuffer_common()? I
think it can now just be declared locally inside ReadBuffer_common(),
for example:
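
A minimal sketch of what I mean, assuming the rest of the signature
stays as it is today and no remaining caller inspects the hit flag
(not tested against v5):

    static Buffer
    ReadBuffer_common(SMgrRelation smgr, char relpersistence,
                      ForkNumber forkNum, BlockNumber blockNum,
                      ReadBufferMode mode, BufferAccessStrategy strategy)
    {
        bool        hit = false;    /* local now, no out parameter */

        /* ... rest unchanged ... */
    }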

0003-Provide-API-for-streaming-reads-of-relations:

- Do we need to rethink the logic for getting a victim buffer?
StrategyGetBuffer() raises an error if it cannot find any unpinned
buffers, and that could become more common in the async world, since
we pin buffers before completing the read (while looking ahead).
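
For reference, the clock sweep in freelist.c gives up roughly like
this (paraphrased from memory, not copied from the tree):

    else if (--trycounter == 0)
    {
        /*
         * Scanned all buffers without finding an unpinned one; with
         * many look-ahead pins held this becomes easier to hit.
         */
        UnlockBufHdr(buf, local_buf_state);
        elog(ERROR, "no unpinned buffers available");
    }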

- If the block returned by the callback is an invalid block,
pg_streaming_read_look_ahead() sets pgsr->finished = true. Could there
be cases where the callback returns an invalid block but we should
continue reading after it?
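
To illustrate the kind of case I mean, a hypothetical callback (names,
types and signature assumed, not taken from the patch) that filters
blocks has to keep looping instead of returning InvalidBlockNumber for
a "gap", because that would end the stream for good:

    static BlockNumber
    filtering_read_cb(PgStreamingRead *pgsr, uintptr_t pgsr_private,
                      void *per_buffer_data)
    {
        MyScanState *scan = (MyScanState *) pgsr_private;

        while (scan->next < scan->nblocks)
        {
            BlockNumber blkno = scan->blocks[scan->next++];

            if (blkno != InvalidBlockNumber)
                return blkno;       /* next block we actually want */
        }

        return InvalidBlockNumber;  /* real end of stream */
    }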

- max_pinned_buffers and pinned_buffers_trigger are set during
initialization (in pg_streaming_read_buffer_alloc_internal()) and do
not change afterwards. In some cases there may be no acquirable
buffers to pin while initializing the pgsr (LimitAdditionalPins() sets
max_pinned_buffers to 1), but while the read continues there could be
a chance to build larger reads (other concurrent reads finish while
this one is still running). Do you think it makes sense to reset
max_pinned_buffers and pinned_buffers_trigger to higher values after
initialization, so that larger reads can be built?
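
Something along these lines is what I am imagining, purely as a
hypothetical sketch (the helper name is made up, and it would also
have to account for the pins the stream already holds):

    static void
    pg_streaming_read_refresh_pin_limit(PgStreamingRead *pgsr,
                                        uint32 wanted)
    {
        uint32      limit = wanted;

        /* re-check how many additional pins this backend may take */
        LimitAdditionalPins(&limit);

        if (limit > pgsr->max_pinned_buffers)
        {
            pgsr->max_pinned_buffers = limit;
            /* and re-derive pinned_buffers_trigger as init does */
        }
    }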

+ /* Is there a head range that we can't extend? */
+ head_range = &pgsr->ranges[pgsr->head];
+ if (head_range->nblocks > 0 &&
+ (!need_complete ||
+ !head_range->need_complete ||
+ head_range->blocknum + head_range->nblocks != blocknum))
+ {
+ /* Yes, time to start building a new one. */
+ head_range = pg_streaming_read_new_range(pgsr);

- I think if both need_complete and head_range->need_complete are
false, we can extend the head range regardless of whether the blocks
are consecutive.
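
Roughly, the check could become something like this sketch (keeping
the names from the quoted hunk):

    /* Is there a head range that we can't extend? */
    head_range = &pgsr->ranges[pgsr->head];
    if (head_range->nblocks > 0 &&
        (need_complete || head_range->need_complete) &&
        (!need_complete ||
         !head_range->need_complete ||
         head_range->blocknum + head_range->nblocks != blocknum))
    {
        /* Yes, time to start building a new one. */
        head_range = pg_streaming_read_new_range(pgsr);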

0006-Allow-streaming-reads-to-ramp-up-in-size:

- The ramp_up_pin_limit variable is declared as an int, but we do not
check for overflow while doubling it. This could be a problem for
longer reads.
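
Since the limit never needs to grow past max_pinned_buffers anyway,
clamping would avoid the overflow; a minimal sketch (member names
taken from the patch, the clamp itself is just a suggestion):

    if (pgsr->ramp_up_pin_limit <= pgsr->max_pinned_buffers / 2)
        pgsr->ramp_up_pin_limit *= 2;
    else
        pgsr->ramp_up_pin_limit = pgsr->max_pinned_buffers;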

--
Regards,
Nazir Bilal Yavuz
Microsoft
