Well, I have solved it by running with more RAM, and now it works correctly.
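(For readers hitting the same wall when adding RAM is not an option: a common workaround, not taken from this thread, is to split the work into smaller batches, each committed separately, so per-transaction state stays bounded. This only applies if the whole job does not have to be a single atomic transaction. A minimal sketch; the psycopg2 usage, table name, and connection string in the comments are hypothetical placeholders.)

```python
from itertools import islice

def chunked(rows, size):
    """Yield successive lists of at most `size` items from `rows`."""
    it = iter(rows)
    while True:
        batch = list(islice(it, size))
        if not batch:
            return
        yield batch

# Hypothetical usage with psycopg2 (names and query are placeholders):
#
# import psycopg2
# conn = psycopg2.connect("dbname=mydb")
# cur = conn.cursor()
# for batch in chunked(pending_updates, 10_000):
#     cur.executemany("UPDATE t SET val = %s WHERE id = %s", batch)
#     conn.commit()  # committing per batch bounds memory held per transaction
```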
2010/10/28 Cédric Villemain <cedric(dot)villemain(dot)debian(at)gmail(dot)com>
> 2010/10/28 Trenta sis <trenta(dot)sis(at)gmail(dot)com>:
> > There are about 100,000 inserts and 300,000 updates. Without a transaction
> > it seems to work, but with a transaction it does not. With only about
> > 300,000 it seems it can finish correctly, but the last 20% is slow because
> > it is using swap...
> > Any tuning to do in this configuration, or is it correct?
> You should post your queries and the table definitions involved.
> > thanks
> > 2010/10/28 Craig Ringer <craig(at)postnewspapers(dot)com(dot)au>
> >> On 10/28/2010 02:38 AM, Trenta sis wrote:
> >>> Hi,
> >>> I have a Linux server (Debian) with Postgres 8.3 and I have problems
> >>> with a massive update, about 400,000 updates/inserts.
> >>> If I execute about 100,000 it all seems OK, but when I execute 400,000 I
> >>> have the same problem with or without a transaction (I need to do it
> >>> within a transaction): memory and disk usage increase.
> >>> With an execution of 400,000 inserts/updates the server begins working
> >>> well; after 100 seconds of execution RAM usage increases, then swap, and
> >>> finally all RAM and swap are used and the execution can't finish.
> >> Do you have lots of triggers on the table? Or foreign key relationships
> >> that are DEFERRABLE?
> >> --
> >> Craig Ringer
> Cédric Villemain 2ndQuadrant
> http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support
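(Following up on Craig's question about triggers and DEFERRABLE foreign keys: one way to check is to query the system catalogs. A generic sketch, not taken from this thread; replace 'mytable' with the real table name. Note that on older versions such as 8.3 the trigger listing also includes the internal triggers that implement foreign-key constraints.)

```sql
-- Triggers defined on the table ('mytable' is a placeholder)
SELECT tgname, tgenabled
FROM pg_trigger
WHERE tgrelid = 'mytable'::regclass;

-- Foreign-key constraints and whether they are DEFERRABLE / INITIALLY DEFERRED
SELECT conname, condeferrable, condeferred
FROM pg_constraint
WHERE conrelid = 'mytable'::regclass
  AND contype = 'f';
```

Deferred constraint and trigger events are queued per transaction, which is one plausible reason a single 400,000-row transaction grows memory while smaller runs do not.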
pgsql-performance mailing list