You could set up a queuing table to hold the product IDs that need processing, removing that parameter from the proc (or all of the parameters, if all of the varying columns live in the queuing table).  The proc then picks one product_id from the queue (using DELETE and capturing the deleted row with RETURNING), processes it, and loops for the next product, terminating when none are left.  A separate job adds new products needing processing to the queue table.  This technique lets you run as many simultaneous jobs as you need to get through all of the products in a timely manner, without code changes.  Also, if one job fails, the others pick up the slack, since they all run until the queue is empty.  I've used this technique before (although not in Postgres) and it works well.
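A minimal sketch of that worker loop in PL/pgSQL, assuming a one-column queue table and a hypothetical process_one_product() function standing in for the real work.  I've added FOR UPDATE SKIP LOCKED to the claim step, which is what lets several concurrent workers grab different rows instead of blocking on each other's locks:

    CREATE TABLE product_queue (
        product_id bigint PRIMARY KEY
    );

    CREATE OR REPLACE PROCEDURE process_queue()
    LANGUAGE plpgsql
    AS $$
    DECLARE
        v_product_id bigint;
    BEGIN
        LOOP
            -- Claim one row by deleting it; SKIP LOCKED makes
            -- concurrent workers pick different rows rather than
            -- waiting on each other.
            DELETE FROM product_queue
            WHERE product_id = (
                SELECT product_id
                FROM product_queue
                FOR UPDATE SKIP LOCKED
                LIMIT 1
            )
            RETURNING product_id INTO v_product_id;

            -- Queue is empty; we're done.
            EXIT WHEN NOT FOUND;

            -- Hypothetical function doing the actual per-product work.
            PERFORM process_one_product(v_product_id);

            -- Commit so this product's delete sticks even if a later
            -- iteration fails; an uncommitted failure rolls the row
            -- back into the queue for another worker to pick up.
            COMMIT;
        END LOOP;
    END;
    $$;

The enqueue job is then just a plain INSERT into product_queue (ON CONFLICT DO NOTHING if a product may be queued twice), and you scale out by issuing CALL process_queue(); from as many separate sessions as you want.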


On 12/8/24 10:26 AM, David G. Johnston wrote:
On Sunday, December 8, 2024, kunwar singh <krishsingh.111@gmail.com> wrote:

I know I can create a bash script or Python script, but I am wondering if there is a smarter way to do it in Postgres?


Your concurrency requirement makes doing it in the server quite difficult.  Using anything that can launch multiple processes/threads and initiate one connection each is your best option.  Many things can, so pick one you are familiar with.  There is too little complexity here for specialized tooling to be needed.

David J.