From: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
To: Remi Colinet <remi(dot)colinet(at)gmail(dot)com>
Cc: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [PATCH v2] Progress command to monitor progression of long running SQL queries
Date: 2017-05-13 12:38:30
Message-ID: CAA4eK1LoRV4mnqX9GKBhi3gO_1AHGs5JAyK-gwpsKC2FR=z7SA@mail.gmail.com
Lists: pgsql-hackers
On Wed, May 10, 2017 at 10:10 PM, Remi Colinet <remi(dot)colinet(at)gmail(dot)com> wrote:
>
> Parallel queries can also be monitored. The same mechanism is used to monitor
> child workers with a slight difference: the main worker requests the child
> progression directly in order to dump the whole progress tree in shared
> memory.
>
>
What if there is an error in the worker (like "out of memory") while
gathering the statistics? It seems that both the workers and the main
backend will simply error out. I am not sure it is a good idea for the
backend or a parallel worker to error out, as that just ends the query
execution. Also, even if that were acceptable, there doesn't seem to
be a way for a parallel worker to communicate the error back to the
master backend; instead it will just exit silently, which is not
right.
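One way to avoid aborting the query would be to trap the error during
statistics collection and record the failure in the worker's shared-memory
progress slot, letting the query itself proceed. Below is a minimal,
self-contained sketch of that pattern using plain setjmp/longjmp (roughly
what PostgreSQL's PG_TRY()/PG_CATCH() do via sigsetjmp under the hood);
the ProgressSlot struct and all function names here are hypothetical
illustrations, not part of the patch:

```c
#include <setjmp.h>
#include <string.h>

/* Hypothetical per-worker progress slot in shared memory. */
typedef struct
{
    int  rows_done;
    int  error_flag;            /* set when stats gathering failed */
    char error_msg[64];
} ProgressSlot;

static jmp_buf gather_env;

/* Stand-in for elog(ERROR, ...): jump back to the recovery point. */
static void
report_error(void)
{
    longjmp(gather_env, 1);
}

/* Simulated statistics gathering that may fail (e.g. out of memory). */
static void
gather_stats(ProgressSlot *slot, int fail)
{
    if (fail)
        report_error();
    slot->rows_done = 42;
}

/*
 * Gather progress, trapping any error so the worker records the failure
 * in its shared-memory slot instead of aborting query execution.
 */
static void
gather_progress_safely(ProgressSlot *slot, int fail)
{
    if (setjmp(gather_env) == 0)
        gather_stats(slot, fail);
    else
    {
        slot->error_flag = 1;
        strcpy(slot->error_msg, "progress collection failed");
    }
}
```

The main backend could then notice error_flag when it reads the progress
tree and report "statistics unavailable" for that worker, rather than the
worker dying silently.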
--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com