| From: | Bharath Rupireddy <bharath(dot)rupireddyforpostgres(at)gmail(dot)com> |
|---|---|
| To: | vignesh C <vignesh21(at)gmail(dot)com> |
| Cc: | PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com> |
| Subject: | Re: Parallel worker hangs while handling errors. |
| Date: | 2020-07-09 09:42:57 |
| Message-ID: | CALj2ACVe+rPYUhxO45LfgYQYzjqERmPyh_uDvHvP4wje-dKWgQ@mail.gmail.com |
| Lists: | pgsql-hackers |
>
> Parallel worker hangs while handling errors.
>
> When there is an error in the parallel worker process, we will call
> ereport/elog with the error message. The worker will then jump from
> errfinish to the sigsetjmp that was set up earlier in the
> StartBackgroundWorker function. The worker process will then send the
> error message through the shared memory queue to the leader process.
> The shared memory error queue size is 16KB; if the error message is
> less than 16KB, it works fine.
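For reference, here is a minimal sketch of the sigsetjmp recovery path described above. It paraphrases, rather than copies, what bgworker.c's StartBackgroundWorker() does, and the function name sketch_worker_main is made up for illustration:

```c
#include "postgres.h"

#include "miscadmin.h"		/* HOLD_INTERRUPTS() */
#include "storage/ipc.h"	/* proc_exit() */

/*
 * Hypothetical worker entry point, shown only to illustrate where the
 * error-recovery sigsetjmp lives; the real logic is in bgworker.c's
 * StartBackgroundWorker().
 */
void
sketch_worker_main(void)
{
	sigjmp_buf	local_sigjmp_buf;

	if (sigsetjmp(local_sigjmp_buf, 1) != 0)
	{
		/* We land here after ereport(ERROR) -> errfinish() longjmps back. */
		error_context_stack = NULL;	/* drop any error context callbacks */
		HOLD_INTERRUPTS();			/* keep interrupts off while cleaning up */

		/*
		 * Send the pending error to all configured destinations.  For a
		 * parallel worker this includes the shared-memory error queue back
		 * to the leader, which is where the hang appears once the message
		 * no longer fits in the queue.
		 */
		EmitErrorReport();

		proc_exit(1);
	}

	/* Arm elog.c so the next ereport(ERROR) jumps to the block above. */
	PG_exception_stack = &local_sigjmp_buf;

	/* ... normal worker work runs here ... */
}
```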
I reproduced the hang issue with the parallel copy patches[1]. The use
case is as follows: one of the parallel workers tries to report an error
to the leader process, and as part of the error context it also tries to
send the entire row/tuple data (which is a lot more than 16KB); a rough
sketch of such an error context callback follows below. The fix provided
here solves the above problem, i.e. no hang occurs, and the entire
tuple/row data in the error gets transferred from worker to leader; see
the attachment "testcase.txt" for the output.
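To make the 16KB overflow concrete, here is the promised rough sketch of an error context callback that attaches the whole input row to the report. The names RowErrCtx, row_error_callback and process_row are hypothetical; COPY's real callback is CopyFromErrorCallback() in copy.c:

```c
#include "postgres.h"

/* Hypothetical per-row state carried into the error context callback. */
typedef struct RowErrCtx
{
	const char *line_buf;		/* the full input line being processed */
} RowErrCtx;

static void
row_error_callback(void *arg)
{
	RowErrCtx  *ctx = (RowErrCtx *) arg;

	/*
	 * errcontext() appends to the CONTEXT field of the error currently
	 * being reported; if line_buf holds a very wide row, the resulting
	 * message can easily exceed the worker's 16KB error queue.
	 */
	errcontext("processing line: \"%s\"", ctx->line_buf);
}

/* Install the callback around the per-row work, as COPY does. */
static void
process_row(RowErrCtx *ctx)
{
	ErrorContextCallback errcallback;

	errcallback.callback = row_error_callback;
	errcallback.arg = (void *) ctx;
	errcallback.previous = error_context_stack;
	error_context_stack = &errcallback;

	/* ... parse/insert the row; any ereport(ERROR) here picks up the context ... */

	error_context_stack = errcallback.previous;
}
```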
Apart from that, I also ran the regression tests (make check and
make check-world) with the patch applied; no issues were observed.
With Regards,
Bharath Rupireddy.
EnterpriseDB: http://www.enterprisedb.com
| Attachment | Content-Type | Size |
|---|---|---|
| testcase.txt | text/plain | 224.8 KB |