Re: Perform streaming logical transactions by background workers and parallel apply

From: Peter Smith <smithpb2250(at)gmail(dot)com>
To: "houzj(dot)fnst(at)fujitsu(dot)com" <houzj(dot)fnst(at)fujitsu(dot)com>
Cc: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, "wangw(dot)fnst(at)fujitsu(dot)com" <wangw(dot)fnst(at)fujitsu(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, "shiy(dot)fnst(at)fujitsu(dot)com" <shiy(dot)fnst(at)fujitsu(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Perform streaming logical transactions by background workers and parallel apply
Date: 2022-09-09 07:02:16
Message-ID: CAHut+Ps2+Ga3uExjm3jNcvSiUBS_FR05h8k31JdtDZXv7fGPPw@mail.gmail.com
Lists: pgsql-hackers

Here are my review comments for the v28-0001 patch:

(There may be some overlap with other people's review comments and/or
some fixes already made).

======

1. Commit Message

In addition, the patch extends the logical replication STREAM_ABORT message so
that abort_time and abort_lsn can also be sent which can be used to update the
replication origin in parallel apply worker when the streaming transaction is
aborted.

~

Should this also mention that, because this message extension is needed
to support parallel streaming, parallel streaming is not supported for
publications on servers < PG16?

======

2. doc/src/sgml/config.sgml

<para>
Specifies maximum number of logical replication workers. This includes
- both apply workers and table synchronization workers.
+ apply leader workers, parallel apply workers, and table synchronization
+ workers.
</para>
"apply leader workers" -> "leader apply workers"

~~~

3.

max_logical_replication_workers (integer)
Specifies maximum number of logical replication workers. This
includes apply leader workers, parallel apply workers, and table
synchronization workers.
Logical replication workers are taken from the pool defined by
max_worker_processes.
The default value is 4. This parameter can only be set at server start.

~

I did not really understand why the default is 4. The default number of
tablesync workers is 2 and the default number of parallel apply workers
is 2, but what about accounting for the apply worker itself? Therefore,
shouldn't the max_logical_replication_workers default be 5 instead of 4?

======

4. src/backend/commands/subscriptioncmds.c - defGetStreamingMode

+ }
+ ereport(ERROR,
+ (errcode(ERRCODE_SYNTAX_ERROR),
+ errmsg("%s requires a Boolean value or \"parallel\"",
+ def->defname)));
+ return SUBSTREAM_OFF; /* keep compiler quiet */
+}

Some whitespace before the ereport and the return might be tidier.

======

5. src/backend/libpq/pqmq.c

+ {
+ if (IsParallelWorker())
+ SendProcSignal(pq_mq_parallel_leader_pid,
+ PROCSIG_PARALLEL_MESSAGE,
+ pq_mq_parallel_leader_backend_id);
+ else
+ {
+ Assert(IsLogicalParallelApplyWorker());
+ SendProcSignal(pq_mq_parallel_leader_pid,
+ PROCSIG_PARALLEL_APPLY_MESSAGE,
+ pq_mq_parallel_leader_backend_id);
+ }
+ }

This code can be simplified if you want to. For example,

{
    ProcSignalReason reason;

    Assert(IsParallelWorker() || IsLogicalParallelApplyWorker());

    reason = IsParallelWorker() ? PROCSIG_PARALLEL_MESSAGE :
        PROCSIG_PARALLEL_APPLY_MESSAGE;

    SendProcSignal(pq_mq_parallel_leader_pid, reason,
                   pq_mq_parallel_leader_backend_id);
}

======

6. src/backend/replication/logical/applyparallelworker.c

Is there a reason why this file is called applyparallelworker.c
instead of parallelapplyworker.c? Now this name is out of step with
names of all the new typedefs etc.

~~~

7.

+/*
+ * There are three fields in each message received by parallel apply worker:
+ * start_lsn, end_lsn and send_time. Because we have updated these statistics
+ * in leader apply worker, we could ignore these fields in parallel apply
+ * worker (see function LogicalRepApplyLoop).
+ */
+#define SIZE_STATS_MESSAGE (2 * sizeof(XLogRecPtr) + sizeof(TimestampTz))

SUGGESTION (Just added the word "the" and changed "could" -> "can")
There are three fields in each message received by the parallel apply
worker: start_lsn, end_lsn and send_time. Because we have updated
these statistics in the leader apply worker, we can ignore these
fields in the parallel apply worker (see function
LogicalRepApplyLoop).

~~~

8.

+/*
+ * List that stores the information of parallel apply workers that were
+ * started. Newly added worker information will be removed from the list at the
+ * end of the transaction when there are enough workers in the pool. Besides,
+ * exited workers will be removed from the list after being detected.
+ */
+static List *ParallelApplyWorkersList = NIL;

Perhaps this comment can give more explanation of what is meant by the
part that says "when there are enough workers in the pool".

~~~

9. src/backend/replication/logical/applyparallelworker.c -
parallel_apply_can_start

+ /*
+ * Don't start a new parallel worker if not in streaming parallel mode.
+ */
+ if (MySubscription->stream != SUBSTREAM_PARALLEL)
+ return false;

"streaming parallel mode." -> "parallel streaming mode."

~~~

10.

+ /*
+ * For streaming transactions that are being applied using parallel apply
+ * worker, we cannot decide whether to apply the change for a relation that
+ * is not in the READY state (see should_apply_changes_for_rel) as we won't
+ * know remote_final_lsn by that time. So, we don't start the new parallel
+ * apply worker in this case.
+ */
+ if (!AllTablesyncsReady())
+ return false;

"using parallel apply worker" -> "using a parallel apply worker"

~~~

11.

+ /*
+ * Do not allow parallel apply worker to be started in the parallel apply
+ * worker.
+ */
+ if (am_parallel_apply_worker())
+ return false;

I guess the comment is valid but it sounds strange.

SUGGESTION
Only leader apply workers can start parallel apply workers.

~~~

12.

+ if (am_parallel_apply_worker())
+ return false;

Maybe this code should be earlier in this function, because surely
this is a less costly test than the test for !AllTablesyncsReady()?
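
For example (a minimal sketch of the suggested reordering, using only the checks quoted above):

    /* Cheap check first: only leader apply workers start parallel apply workers. */
    if (am_parallel_apply_worker())
        return false;

    /* ... then the (presumably) more expensive check. */
    if (!AllTablesyncsReady())
        return false;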

~~~

13. src/backend/replication/logical/applyparallelworker.c -
parallel_apply_start_worker

+/*
+ * Start a parallel apply worker that will be used for the specified xid.
+ *
+ * If a parallel apply worker is not in use then re-use it, otherwise start a
+ * fresh one. Cache the worker information in ParallelApplyWorkersHash keyed by
+ * the specified xid.
+ */

"is not in use" -> "is found but not in use" ?

~~~

14.

+ /* Failed to start a new parallel apply worker. */
+ if (winfo == NULL)
+ return;

There seem to be quite a lot of places (like this example) where
something may go wrong and the behaviour apparently just falls back
silently to the non-parallel streaming. Maybe that is OK, but I am just
wondering how the user can ever know this has happened. Maybe the docs
can mention that this could happen and give some description of what
processes users can look for (or some other strategy) so they can
confirm that the parallel streaming is really working like they assume
it to be?

~~~

15.

+ * Set this flag in the leader instead of the parallel apply worker to
+ * avoid the race condition where the leader has already started waiting
+ * for the parallel apply worker to finish processing the transaction(set
+ * the in_parallel_apply_xact to false) while the child process has not yet
+ * processed the first STREAM_START and has not set the
+ * in_parallel_apply_xact to true.

Missing whitespace before "("

~~~

16. src/backend/replication/logical/applyparallelworker.c -
parallel_apply_find_worker

+ /* Return the cached parallel apply worker if valid. */
+ if (stream_apply_worker != NULL)
+ return stream_apply_worker;

Perhaps 'cur_stream_parallel_apply_winfo' is a better name for this var?

~~~

17. src/backend/replication/logical/applyparallelworker.c -
parallel_apply_free_worker

+/*
+ * Remove the parallel apply worker entry from the hash table. And stop the
+ * worker if there are enough workers in the pool.
+ */
+void
+parallel_apply_free_worker(ParallelApplyWorkerInfo *winfo, TransactionId xid)

I think the reason for doing the "enough workers in the pool" logic
needs some more explanation.

~~~

18.

+ if (napplyworkers > (max_parallel_apply_workers_per_subscription / 2))
+ {
+ logicalrep_worker_stop_by_slot(winfo->shared->logicalrep_worker_slot_no,
+ winfo->shared->logicalrep_worker_generation);
+
+ ParallelApplyWorkersList = list_delete_ptr(ParallelApplyWorkersList, winfo);
+
+ shm_mq_detach(winfo->mq_handle);
+ shm_mq_detach(winfo->error_mq_handle);
+ dsm_detach(winfo->dsm_seg);
+ pfree(winfo);
+ }
+ else
+ winfo->in_use = false;

Maybe it is easier to remove this "else" and just unconditionally set
winfo->in_use = false BEFORE the check to free the entire winfo.
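
For example (a sketch built only from the quoted hunk):

    winfo->in_use = false;

    if (napplyworkers > (max_parallel_apply_workers_per_subscription / 2))
    {
        logicalrep_worker_stop_by_slot(winfo->shared->logicalrep_worker_slot_no,
                                       winfo->shared->logicalrep_worker_generation);

        ParallelApplyWorkersList = list_delete_ptr(ParallelApplyWorkersList, winfo);

        shm_mq_detach(winfo->mq_handle);
        shm_mq_detach(winfo->error_mq_handle);
        dsm_detach(winfo->dsm_seg);
        pfree(winfo);
    }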

~~~

19. src/backend/replication/logical/applyparallelworker.c -
LogicalParallelApplyLoop

+ ApplyMessageContext = AllocSetContextCreate(ApplyContext,
+ "ApplyMessageContext",
+ ALLOCSET_DEFAULT_SIZES);

Should the name of this context be "ParallelApplyMessageContext"?

~~~

20. src/backend/replication/logical/applyparallelworker.c -
HandleParallelApplyMessage

+ default:
+ {
+ elog(ERROR, "unrecognized message type received from parallel apply worker: %c (message length %d bytes)",
+ msgtype, msg->len);
+ }

"received from" -> "received by"

~~~

21. src/backend/replication/logical/applyparallelworker.c -
HandleParallelApplyMessages

+/*
+ * Handle any queued protocol messages received from parallel apply workers.
+ */
+void
+HandleParallelApplyMessages(void)

21a.
"received from" -> "received by"

~

21b.
I wonder if this comment should give some credit to the function in
parallel.c - because this seems almost a copy of all that code.

~~~

22. src/backend/replication/logical/applyparallelworker.c -
parallel_apply_set_xact_finish

+/*
+ * Set the in_parallel_apply_xact flag for the current parallel apply worker.
+ */
+void
+parallel_apply_set_xact_finish(void)

Should that "Set" really be saying "Reset" or "Clear"?

======

23. src/backend/replication/logical/launcher.c - logicalrep_worker_launch

+ nparallelapplyworkers = logicalrep_parallel_apply_worker_count(subid);
+
+ /*
+ * Return silently if the number of parallel apply workers reached the
+ * limit per subscription.
+ */
+ if (is_subworker && nparallelapplyworkers >= max_parallel_apply_workers_per_subscription)
+ {
+ LWLockRelease(LogicalRepWorkerLock);
+ return false;
+ }

I'm not sure it is a good idea to be so silent. How will the user know
whether they should increase the GUC parameter if it never tells them
that the value is too low?
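
For example, even a low-level message before the early return would give the user something to look for (only a sketch; the message text and log level are illustrative, not a concrete proposal):

    if (is_subworker &&
        nparallelapplyworkers >= max_parallel_apply_workers_per_subscription)
    {
        LWLockRelease(LogicalRepWorkerLock);

        /* Hypothetical: tell the user instead of returning silently. */
        ereport(DEBUG1,
                (errmsg("out of parallel apply workers"),
                 errhint("You might need to increase max_parallel_apply_workers_per_subscription.")));

        return false;
    }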

~~~

24.

/* Now wait until it attaches. */
- WaitForReplicationWorkerAttach(worker, generation, bgw_handle);
+ return WaitForReplicationWorkerAttach(worker, generation, bgw_handle);

The comment feels a tiny bit misleading, because there is a chance
that this might not attach at all and return false if something goes
wrong.

~~~

25. src/backend/replication/logical/launcher.c - logicalrep_worker_stop

+void
+logicalrep_worker_stop_by_slot(int slot_no, uint16 generation)
+{
+ LogicalRepWorker *worker = &LogicalRepCtx->workers[slot_no];
+
+ LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+ /* Return if the generation doesn't match or the worker is not alive. */
+ if (worker->generation != generation ||
+ worker->proc == NULL)
+ return;
+
+ logicalrep_worker_stop_internal(worker);
+
+ LWLockRelease(LogicalRepWorkerLock);
+}

I think this condition should be changed and reversed, otherwise you
might return before releasing the lock (??)

SUGGESTION

{
    LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);

    /* Stop only if the worker is alive and the generation matches. */
    if (worker && worker->proc && worker->generation == generation)
        logicalrep_worker_stop_internal(worker);

    LWLockRelease(LogicalRepWorkerLock);
}

~~~

26 src/backend/replication/logical/launcher.c - logicalrep_worker_stop_internal

+/*
+ * Workhorse for logicalrep_worker_stop() and logicalrep_worker_detach(). Stop
+ * the worker and wait for it to die.
+ */

... and logicalrep_worker_stop_by_slot()

~~~

27. src/backend/replication/logical/launcher.c - logicalrep_worker_detach

+ /*
+ * This is the leader apply worker; stop all the parallel apply workers
+ * previously started from here.
+ */
+ if (!isParallelApplyWorker(MyLogicalRepWorker))

27a.
The comment does not match the code. If this *is* the leader apply
worker then why do we have the condition to check that?

Maybe only needs a comment update like

SUGGESTION
If this is the leader apply worker then stop all the parallel...

~

27b.
The code also seems to assume it cannot be a tablesync worker, but it is
not checking that. I wonder if it would be better to have yet another
macro/inline, e.g. isLeaderApplyWorker(), that makes sure this really is
the leader apply worker. (This review comment suggestion is repeated
later below.)

======

28. src/backend/replication/logical/worker.c - STREAMED TRANSACTIONS comment

+ * If no worker is available to handle the streamed transaction, the data is
+ * written to temporary files and then applied at once when the final commit
+ * arrives.

SUGGESTION
If streaming = true, or if streaming = parallel but there are no
parallel apply workers available to handle the streamed transaction,
the data is written to...

~~~

29. src/backend/replication/logical/worker.c - TransactionApplyAction

/*
* What action to take for the transaction.
*
* TA_APPLY_IN_LEADER_WORKER means that we are in the leader apply worker and
* changes of the transaction are applied directly in the worker.
*
* TA_SERIALIZE_TO_FILE means that we are in leader apply worker and changes
* are written to temporary files and then applied when the final commit
* arrives.
*
* TA_APPLY_IN_PARALLEL_WORKER means that we are in the parallel apply worker
* and changes of the transaction are applied directly in the worker.
*
* TA_SEND_TO_PARALLEL_WORKER means that we are in the leader apply worker and
* need to send the changes to the parallel apply worker.
*/
typedef enum
{
/* The action for non-streaming transactions. */
TA_APPLY_IN_LEADER_WORKER,

/* Actions for streaming transactions. */
TA_SERIALIZE_TO_FILE,
TA_APPLY_IN_PARALLEL_WORKER,
TA_SEND_TO_PARALLEL_WORKER
} TransactionApplyAction;

~

29a.
I think if you change all those enum names slightly (e.g. like below)
then they can be more self-explanatory:

TA_NOT_STREAMING_LEADER_APPLY
TA_STREAMING_LEADER_SERIALIZE
TA_STREAMING_LEADER_SEND_TO_PARALLEL
TA_STREAMING_PARALLEL_APPLY
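
For example, the typedef quoted above would then read (a sketch with only the names changed):

    typedef enum
    {
        /* The action for non-streaming transactions. */
        TA_NOT_STREAMING_LEADER_APPLY,

        /* Actions for streaming transactions. */
        TA_STREAMING_LEADER_SERIALIZE,
        TA_STREAMING_LEADER_SEND_TO_PARALLEL,
        TA_STREAMING_PARALLEL_APPLY
    } TransactionApplyAction;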

~

29b.
* TA_APPLY_IN_LEADER_WORKER means that we are in the leader apply worker and
* changes of the transaction are applied directly in the worker.

Maybe that should mention this is for the non-streaming case, or if
you change all the enums names like in 29a. then there is no need
because it is more self-explanatory.

~~~

30. src/backend/replication/logical/worker.c - should_apply_changes_for_rel

* Note that for streaming transactions that are being applied in parallel
+ * apply worker, we disallow applying changes on a table that is not in
+ * the READY state, because we cannot decide whether to apply the change as we
+ * won't know remote_final_lsn by that time.

"applied in parallel apply worker" -> "applied in the parallel apply worker"

~~~

31.

+ errdetail("Cannot handle streamed replication transaction by parallel "
+ "apply workers until all tables are synchronized.")));

"by parallel apply workers" -> "using parallel apply workers" (?)

~~~

32. src/backend/replication/logical/worker.c - handle_streamed_transaction

Now that there is an apply_action enum, I feel it would be better for
this code to use a switch instead of all the if/else. Furthermore, it
might be better to put the switch cases in a logical order (e.g. the
same as the suggested enum value order in #29a).
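
For example, handle_streamed_transaction could take roughly this shape (only a sketch; the helper calls stream_write_change() and parallel_apply_send_data() are placeholders for whatever the patch actually uses, and the return values assume "false" means "apply the change in this worker"):

    switch (apply_action)
    {
        case TA_APPLY_IN_LEADER_WORKER:
        case TA_APPLY_IN_PARALLEL_WORKER:
            /* Apply the change directly in this worker. */
            return false;

        case TA_SERIALIZE_TO_FILE:
            /* Write the change to the per-transaction temporary file. */
            stream_write_change(action, s);
            return true;

        case TA_SEND_TO_PARALLEL_WORKER:
            /* Forward the raw message to the parallel apply worker. */
            parallel_apply_send_data(winfo, s->len, s->data);
            return true;
    }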

~~~

33. src/backend/replication/logical/worker.c - apply_handle_stream_prepare

(same as comment #32)

Now that there is an apply_action enum, I feel it would be better for
this code to use a switch instead of all the if/else. Furthermore, it
might be better to put the switch cases in a logical order (e.g. the
same as the suggested enum value order in #29a).

~~~

34. src/backend/replication/logical/worker.c - apply_handle_stream_start

(same as comment #32)

Now that there is an apply_action enum, I feel it would be better for
this code to use a switch instead of all the if/else. Furthermore, it
might be better to put the switch cases in a logical order (e.g. the
same as the suggested enum value order in #29a).

~~~

35.

+ else if (apply_action == TA_SERIALIZE_TO_FILE)
+ {
+ /*
+ * Notify handle methods we're processing a remote in-progress
+ * transaction.
+ */
+ in_streamed_transaction = true;
+
+ /*
+ * Since no parallel apply worker is available for the first
+ * stream start, serialize all the changes of the transaction.
+ *

"Since no parallel apply worker is available".

I don't think the comment is quite correct. Maybe it is doing the
serialization because the user simply did not request to use the
parallel mode at all?

~~~

36. src/backend/replication/logical/worker.c - apply_handle_stream_stop

(same as comment #32)

Now that there is an apply_action enum, I feel it would be better for
this code to use a switch instead of all the if/else. Furthermore, it
might be better to put the switch cases in a logical order (e.g. the
same as the suggested enum value order in #29a).

~~~

37. src/backend/replication/logical/worker.c - apply_handle_stream_abort

+ /*
+ * Check whether the publisher sends abort_lsn and abort_time.
+ *
+ * Note that the paralle apply worker is only started when the publisher
+ * sends abort_lsn and abort_time.
+ */

typo "paralle"

~~~

38.

(same as comment #32)

Now that there is an apply_action enum, I feel it would be better for
this code to use a switch instead of all the if/else. Furthermore, it
might be better to put the switch cases in a logical order (e.g. the
same as the suggested enum value order in #29a).

~~~

39.

+ /*
+ * Set in_parallel_apply_xact to true again as we only aborted the
+ * subtransaction and the top transaction is still in progress. No
+ * need to lock here because currently only the apply leader are
+ * accessing this flag.
+ */

"are accessing" -> "is accessing"

~~~

40. src/backend/replication/logical/worker.c - apply_handle_stream_commit

(same as comment #32)

Now that there is an apply_action enum, I feel it would be better for
this code to use a switch instead of all the if/else. Furthermore, it
might be better to put the switch cases in a logical order (e.g. the
same as the suggested enum value order in #29a).

~~~

41. src/backend/replication/logical/worker.c - store_flush_position

+ /* Skip if not the leader apply worker */
+ if (am_parallel_apply_worker())
+ return;
+

It might be better for this code to implement/use a new function so it
can check something like !am_leader_apply_worker().
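
For example (a sketch, assuming the am_leader_apply_worker() helper suggested in #52 below):

    /* Skip if not the leader apply worker. */
    if (!am_leader_apply_worker())
        return;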

~~~

42. src/backend/replication/logical/worker.c - InitializeApplyWorker

+/*
+ * Initialize the database connection, in-memory subscription and necessary
+ * config options.
+ */

I still think this should mention that this is common initialization
code for "both leader apply workers, and parallel apply workers"

~~~

43. src/backend/replication/logical/worker.c - ApplyWorkerMain

- /* This is main apply worker */
+ /* This is leader apply worker */

"is leader" -> "is the leader"

~~~

44. src/backend/replication/logical/worker.c - IsLogicalParallelApplyWorker

+/*
+ * Is current process a logical replication parallel apply worker?
+ */
+bool
+IsLogicalParallelApplyWorker(void)
+{
+ return am_parallel_apply_worker();
+}
+

It seems a bit strange to have this function
IsLogicalParallelApplyWorker, and also am_parallel_apply_worker()
which are basically identical except one of them is static and one is
not.

I wonder if there should be just one function. And if you really do
need 2 names for consistency then you can just define a synonym like

#define am_parallel_apply_worker IsLogicalParallelApplyWorker

~~~

45. src/backend/replication/logical/worker.c - get_transaction_apply_action

+/*
+ * Return the action to take for the given transaction. Also return the
+ * parallel apply worker information if the action is
+ * TA_SEND_TO_PARALLEL_WORKER.
+ */
+static TransactionApplyAction
+get_transaction_apply_action(TransactionId xid,
ParallelApplyWorkerInfo **winfo)

I think this should be slightly clearer, saying that *winfo is assigned
the destination parallel worker info (if the action is
TA_SEND_TO_PARALLEL_WORKER), otherwise *winfo is assigned NULL (see
also #46 below).

~~~

46.

+static TransactionApplyAction
+get_transaction_apply_action(TransactionId xid,
ParallelApplyWorkerInfo **winfo)
+{
+ if (am_parallel_apply_worker())
+ return TA_APPLY_IN_PARALLEL_WORKER;
+ else if (in_remote_transaction)
+ return TA_APPLY_IN_LEADER_WORKER;
+
+ /*
+ * Check if we are processing this transaction using a parallel apply
+ * worker and if so, send the changes to that worker.
+ */
+ else if ((*winfo = parallel_apply_find_worker(xid)))
+ return TA_SEND_TO_PARALLEL_WORKER;
+ else
+ return TA_SERIALIZE_TO_FILE;
+}

The code is a bit quirky at the moment because sometimes *winfo will be
assigned NULL, sometimes it will be assigned a valid value, and
sometimes it will be left unassigned.

I suggest always assigning it either NULL or a valid value.

SUGGESTION

static TransactionApplyAction
get_transaction_apply_action(TransactionId xid, ParallelApplyWorkerInfo **winfo)
{
    *winfo = NULL;    <== add this default assignment
    ...

======

47. src/backend/storage/ipc/procsignal.c - procsignal_sigusr1_handler

@@ -657,6 +658,9 @@ procsignal_sigusr1_handler(SIGNAL_ARGS)
if (CheckProcSignal(PROCSIG_LOG_MEMORY_CONTEXT))
HandleLogMemoryContextInterrupt();

+ if (CheckProcSignal(PROCSIG_PARALLEL_APPLY_MESSAGE))
+ HandleParallelApplyMessageInterrupt();
+

I wasn't sure about the placement of this new code because those
CheckProcSignal calls don't seem to be in any particular order. I think
this belongs adjacent to the PROCSIG_PARALLEL_MESSAGE check since it has
the most in common with that one.
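
For example (a sketch of the suggested adjacency; the existing PROCSIG_PARALLEL_MESSAGE handling is shown only for context):

    if (CheckProcSignal(PROCSIG_PARALLEL_MESSAGE))
        HandleParallelMessageInterrupt();

    if (CheckProcSignal(PROCSIG_PARALLEL_APPLY_MESSAGE))
        HandleParallelApplyMessageInterrupt();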

======

48. src/backend/tcop/postgres.c

@@ -3377,6 +3377,9 @@ ProcessInterrupts(void)

if (LogMemoryContextPending)
ProcessLogMemoryContextInterrupt();
+
+ if (ParallelApplyMessagePending)
+ HandleParallelApplyMessages();

(like #47)

I think this belongs adjacent to the ParallelMessagePending check since
it has the most in common with that one.

======

49. src/include/replication/worker_internal.h

@@ -60,6 +64,12 @@ typedef struct LogicalRepWorker
*/
FileSet *stream_fileset;

+ /*
+ * PID of leader apply worker if this slot is used for a parallel apply
+ * worker, InvalidPid otherwise.
+ */
+ pid_t apply_leader_pid;
+
/* Stats. */
XLogRecPtr last_lsn;
TimestampTz last_send_time;

Whitespace indent of the new member ok?

~~~

50.

+typedef struct ParallelApplyWorkerShared
+{
+ slock_t mutex;
+
+ /*
+ * Flag used to ensure commit ordering.
+ *
+ * The parallel apply worker will set it to false after handling the
+ * transaction finish commands while the apply leader will wait for it to
+ * become false before proceeding in transaction finish commands (e.g.
+ * STREAM_COMMIT/STREAM_ABORT/STREAM_PREPARE).
+ */
+ bool in_parallel_apply_xact;
+
+ /* Information from the corresponding LogicalRepWorker slot. */
+ uint16 logicalrep_worker_generation;
+
+ int logicalrep_worker_slot_no;
+} ParallelApplyWorkerShared;

Whitespace indents of the new members ok?

~~~

51.

/* Main memory context for apply worker. Permanent during worker lifetime. */
extern PGDLLIMPORT MemoryContext ApplyContext;
+extern PGDLLIMPORT MemoryContext ApplyMessageContext;

Maybe there should be a blank line between those externs, because the
comment applies only to the first one, right? Alternatively modify the
comment.

~~~

52. src/include/replication/worker_internal.h - am_parallel_apply_worker

I thought it might be worthwhile to also add another function like
am_leader_apply_worker(). I noticed at least one place in this patch
where it could have been called.

SUGGESTION

static inline bool
am_leader_apply_worker(void)
{
    return !isParallelApplyWorker(MyLogicalRepWorker) && !am_tablesync_worker();
}

======

53. src/include/storage/procsignal.h

@@ -35,6 +35,7 @@ typedef enum
PROCSIG_WALSND_INIT_STOPPING, /* ask walsenders to prepare for shutdown */
PROCSIG_BARRIER, /* global barrier interrupt */
PROCSIG_LOG_MEMORY_CONTEXT, /* ask backend to log the memory contexts */
+ PROCSIG_PARALLEL_APPLY_MESSAGE, /* Message from parallel apply workers */

(like #47)

I think this new enum value belongs adjacent to PROCSIG_PARALLEL_MESSAGE
since it has the most in common with that one.

======

54. src/tools/pgindent/typedefs.list

Missing TransactionApplyAction?

------
Kind Regards,
Peter Smith.
Fujitsu Australia
