Thanks Tom, clock_timestamp() worked. Appreciate it! And sorry, I was hurrying to get this done at work and hence did not read through the documentation carefully.
Can you comment on how you would solve the original problem? Even if I can get the 11 seconds down to 500 ms for one pair, running it for 300k pairs will still take multiple hours. How can one write a combination of a bash script and pl/pgsql code so as to use all 8 cores of the server? I am seeing that everything currently executes in a single session/process.
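One common workaround (PostgreSQL of that era had no built-in parallel query) is to have a shell script launch several psql backends, each working on a disjoint slice of the pair IDs. Below is a minimal sketch of that idea; the table `pair_queue(pair_id)`, the function `calc_sal(pair_id)`, and the database name `mydb` are all hypothetical placeholders, and the script defaults to a dry run (echoing the commands) so you can inspect the slices before pointing it at a real database.

```shell
#!/bin/bash
# Sketch: fan 300k pairs out over 8 concurrent psql sessions.
# pair_queue, calc_sal, and mydb are assumed/hypothetical names.

NCORES=8
TOTAL=300000
CHUNK=$(( (TOTAL + NCORES - 1) / NCORES ))   # ceil(300000 / 8) = 37500 pairs per worker
PSQL="${PSQL:-echo psql}"                    # dry run by default; run with PSQL=psql to execute

for (( i=0; i<NCORES; i++ )); do
  lo=$(( i * CHUNK ))
  hi=$(( lo + CHUNK ))
  # Each backend gets a disjoint [lo, hi) slice; '&' runs them concurrently,
  # so PostgreSQL schedules 8 independent server processes across the cores.
  $PSQL -d mydb -c "SELECT calc_sal(pair_id) FROM pair_queue WHERE pair_id >= $lo AND pair_id < $hi" &
done
wait   # block until all 8 sessions have finished
```

The key design point is that the slices must be disjoint so no pair is computed twice; if the IDs are sparse or the per-pair cost is uneven, a work-queue table that workers claim rows from (e.g. with `FOR UPDATE`) balances load better than fixed ranges.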
thanks and regards, Venki
From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Venki Ramachandran <venki_ramachandran(at)yahoo(dot)com>
Cc: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>; Samuel Gendler <sgendler(at)ideasculptor(dot)com>; "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Sent: Wednesday, April 25, 2012 2:52 PM
Subject: Re: [PERFORM] Parallel Scaling of a pgplsql problem
Venki Ramachandran <venki_ramachandran(at)yahoo(dot)com> writes:
> Replacing current_timestamp() with transaction_timestamp() and statement_timestamp() did not help!!!.
You did not read the documentation you were pointed to. Use clock_timestamp().
regards, tom lane