Re: Help..Help...

From: "Shridhar Daithankar" <shridhar_daithankar(at)persistent(dot)co(dot)in>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Help..Help...
Date: 2002-11-13 14:02:13
Message-ID: 3DD2A8BD.6022.56D203@localhost
Lists: pgsql-general

On 13 Nov 2002 at 19:14, Murali Mohan Kasetty wrote:

> We are running two processes accessing the same table using JDBC. Both
> processes update records in the same table. The same rows will not be
> updated by both processes at the same time.
>
> When the processes are run separately, the time taken is X seconds
> each. But when we run the same processes together, we are seeing that
> the time taken is worse than 2X.

Update generates dead tuples, which cause a performance slowdown. Run VACUUM
ANALYZE concurrently in the background so that the space held by these dead
tuples becomes available for reuse.
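
For example, here is a minimal sketch of doing that from the client side. The
connection URL, credentials, table name and one-minute interval below are
placeholders for illustration, not anything from the original post:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Sketch: fire VACUUM ANALYZE from a background thread so dead tuples
// get reclaimed while the updating processes keep running.
public class VacuumDaemon implements Runnable {
    public void run() {
        try {
            Class.forName("org.postgresql.Driver");
            Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "password");
            con.setAutoCommit(true);  // VACUUM cannot run inside a transaction
            Statement st = con.createStatement();
            while (true) {
                st.execute("VACUUM ANALYZE mytable");  // placeholder table
                Thread.sleep(60 * 1000);               // tune the interval
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        new Thread(new VacuumDaemon()).start();
    }
}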

>
> Is it possible that there is contention occurring while the records
> are being written? Has anybody experienced a similar problem? What is
> the

I am sure that's not the case. Are you doing rapid updates? Practically, you
should run VACUUM ANALYZE after every 1000 or so updates to keep performance
at its best. Tune this figure to suit your needs.
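
A sketch of that pattern, folded into the update loop itself (the table,
columns and threshold are made up for illustration):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;

public class BatchUpdater {
    static final int VACUUM_EVERY = 1000;  // tune this figure

    // Updates one row per id/value pair, vacuuming every ~1000 updates.
    public static void updateAll(Connection con, int[] ids, int[] values)
            throws SQLException {
        con.setAutoCommit(true);  // VACUUM cannot run inside a transaction
        PreparedStatement upd = con.prepareStatement(
            "UPDATE mytable SET val = ? WHERE id = ?");
        Statement vac = con.createStatement();
        for (int i = 0; i < ids.length; i++) {
            upd.setInt(1, values[i]);
            upd.setInt(2, ids[i]);
            upd.executeUpdate();
            if ((i + 1) % VACUUM_EVERY == 0) {
                vac.execute("VACUUM ANALYZE mytable");
            }
        }
        vac.close();
        upd.close();
    }
}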

> LOCK mechanism that is used by PostgreSQL?

Read up on MVCC (multi-version concurrency control); it's documented in the
PostgreSQL manual.
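
If it helps, here is a little sketch of the practical upshot (connection
details and table are placeholders): under MVCC a reader neither blocks on,
nor sees, another session's uncommitted UPDATE.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class MvccDemo {
    public static void main(String[] args) throws Exception {
        Class.forName("org.postgresql.Driver");
        String url = "jdbc:postgresql://localhost/mydb";  // placeholder
        Connection writer = DriverManager.getConnection(url, "user", "pw");
        Connection reader = DriverManager.getConnection(url, "user", "pw");

        writer.setAutoCommit(false);
        Statement w = writer.createStatement();
        w.executeUpdate("UPDATE mytable SET val = val + 1 WHERE id = 1");
        // The writer's transaction is still open, so its new row version
        // is invisible to other sessions.

        Statement r = reader.createStatement();
        ResultSet rs = r.executeQuery("SELECT val FROM mytable WHERE id = 1");
        rs.next();
        // Returns immediately with the old value; no lock wait.
        System.out.println("reader sees: " + rs.getInt(1));

        writer.commit();  // the new version now becomes visible
        reader.close();
        writer.close();
    }
}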

HTH

Bye
Shridhar

--
mixed emotions: Watching a bus-load of lawyers plunge off a cliff. With five
empty seats.
