2010/10/28 Trenta sis <trenta(dot)sis(at)gmail(dot)com>:
> There are about 100,000 inserts and 300,000 updates. Without a transaction it
> seems to work, but with a transaction it does not. With only about 300,000 updates
> it seems it can finish correctly, but the last 20% is slow because it is using swap.
> Is there any tuning to do in this configuration, or is it correct?
You should post your queries, and tables definitions involved.
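To gather the table definitions for posting, one option (a sketch only; "mytable" is a placeholder name, and this needs a live connection to the database) is to query information_schema, which is available in 8.3:

```sql
-- Column definitions for a placeholder table "mytable"
-- (psql's \d mytable gives a richer view, including indexes and triggers):
SELECT column_name, data_type, is_nullable
FROM information_schema.columns
WHERE table_name = 'mytable'
ORDER BY ordinal_position;
```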
> 2010/10/28 Craig Ringer <craig(at)postnewspapers(dot)com(dot)au>
>> On 10/28/2010 02:38 AM, Trenta sis wrote:
>>> I have a Linux server (Debian) with Postgres 8.3 and I have problems
>>> with a massive update, about 400,000 updates/inserts.
>>> If I execute about 100,000 it all seems OK, but when I execute 400,000 I
>>> have the same problem with or without a transaction (I need to do it within a
>>> transaction): memory usage and disk usage increase.
>>> With an execution of 400,000 inserts/updates the server begins working well, but
>>> after 100 seconds of execution RAM usage increases, then swap, and
>>> finally all RAM and swap are used and the execution can't finish.
>> Do you have lots of triggers on the table? Or foreign key relationships
>> that are DEFERRABLE?
>> Craig Ringer
Cédric Villemain 2ndQuadrant
http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support
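One way to answer Craig's question about triggers and deferrable foreign keys (a sketch only; "mytable" is a placeholder name, and both catalogs exist in 8.3) is to query the system catalogs:

```sql
-- Triggers defined on the placeholder table "mytable":
SELECT tgname
FROM pg_trigger
WHERE tgrelid = 'mytable'::regclass;

-- Foreign-key constraints on the table, and whether they are
-- DEFERRABLE / INITIALLY DEFERRED (deferred constraint checks queue
-- per-row events in memory until commit, which can explain growing
-- RAM usage in one large transaction):
SELECT conname, condeferrable, condeferred
FROM pg_constraint
WHERE conrelid = 'mytable'::regclass AND contype = 'f';
```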
pgsql-performance by date