Performance of batch COMMIT

From: "Benjamin Arai" <barai(at)cs(dot)ucr(dot)edu>
To: <pgsql-general(at)postgresql(dot)org>
Subject: Performance of batch COMMIT
Date: 2005-12-19 19:44:15
Message-ID: 007801c604d4$9a965880$d7cc178a@uni
Lists: pgsql-general

Each week I have to update a very large database. Currently I run a COMMIT
about every 1,000 queries. This vastly improved performance, but I am
wondering whether it can be increased further. I could write all of the
queries to a file, but COPY doesn't support plain queries such as UPDATE,
so I don't think that would help. The only time I actually have to commit
is when I need to create a new table. The server has 4GB of memory and
fast everything else. The only postgresql.conf variable I have changed is
shared_buffers.
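For reference, the batch-commit pattern described above can be sketched as follows. The chunking logic is generic Python; the commented driver loop assumes a DB-API connection such as psycopg2 (an assumption, not something from the original post):

```python
def batches(statements, batch_size=1000):
    """Group a stream of SQL statements into fixed-size batches,
    so one COMMIT covers each batch instead of each statement."""
    batch = []
    for stmt in statements:
        batch.append(stmt)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # final partial batch
        yield batch

# Hypothetical usage with a DB-API driver (e.g. psycopg2):
#
#   conn = psycopg2.connect("dbname=mydb")
#   cur = conn.cursor()
#   for batch in batches(updates, 1000):
#       for stmt in batch:
#           cur.execute(stmt)
#       conn.commit()   # one commit per 1000 statements
```

The point of the pattern is that each COMMIT forces a WAL flush to disk, so amortizing that flush over many statements is what buys the speedup.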

Would sending all of the queries in a single query string increase
performance?
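One way to experiment with this is to join groups of statements with semicolons and send each group as a single call, which saves client-server round trips (the per-statement parse and execute cost on the server is unchanged). A minimal sketch, assuming the statements contain no literal semicolons:

```python
def join_statements(statements, per_round_trip=100):
    """Join groups of statements with semicolons so each server
    round trip carries many statements instead of one."""
    for i in range(0, len(statements), per_round_trip):
        yield ";\n".join(statements[i:i + per_round_trip])
```

Whether this helps depends on where the time is going: if the bottleneck is network latency it can, but if it is WAL flushing at COMMIT time it will not.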

What is the optimal batch size for commits?

Are there any postgresql.conf variables that should be tweaked?
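On that last question, a few settings commonly tuned for bulk writes on 8.x-era PostgreSQL are sketched below. The values are illustrative assumptions only, not recommendations for this specific server:

```ini
# Illustrative values only; tune to the workload and hardware.
checkpoint_segments = 32        # fewer, larger checkpoints during bulk loads
wal_buffers = 64                # WAL buffered before flush (units of 8kB pages)
maintenance_work_mem = 262144   # in kB; speeds index rebuilds after bulk changes
```

Turning fsync off during the load is sometimes suggested as well, but it risks unrecoverable corruption on a crash, so it is generally only safe when the whole load can be redone from scratch.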

Anybody have any suggestions?
