Trenta sis <trenta(dot)sis(at)gmail(dot)com> wrote:
> I have a Linux Server (Debian) with Postgres 8.3 and I have problems with a
> massive update, about 400000 updates/inserts.
Updates or Inserts?
> If I execute about 100,000 it all seems OK, but when I execute 400,000 I
> have the same problem with or without a transaction (I need to do it in a
> transaction): memory usage and disk usage increase.
> With an execution of 400,000 inserts/updates the server begins working
> well, but after about 100 seconds of execution RAM usage grows, then swap,
> and finally all RAM and swap are used and the execution can't finish.
> I have made some tuning in server, I have modified:
> -shared_buffers 1024 Mb
> -work_mem 512 Mb
Way too high, but that's probably not the problem here (I'm guessing; it
depends on the real query, see below about EXPLAIN ANALYSE). Keep in mind
that work_mem is allocated per sort/hash operation, per backend, so 512 MB
can multiply quickly.
> -effective_cache_size 2048Mb
You have 4 GB of RAM, but you have defined only 1 GB for shared_buffers,
and effective_cache_size (an estimate of shared_buffers plus the OS cache
together) is set to only 2 GB. What about the other 2 GB?
> -random_page_cost 2.0
You have changed the default; why?
> -checkpoint_segments 64
> -wal_buffers 8Mb
> -max_prepared_transactions 100
> -synchronous_commit off
> What is wrong in this configuration that it can't execute these
> inserts/updates?
Hard to guess; can you provide the output generated by
EXPLAIN ANALYSE <your query>?
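For example (the table and column names here are hypothetical placeholders,
since the real query wasn't posted):

```sql
EXPLAIN ANALYSE
UPDATE my_table
   SET some_col = some_col + 1
 WHERE id BETWEEN 1 AND 100000;
```

Running the statement under EXPLAIN ANALYSE executes it and reports the
actual plan, row counts, and timings, which is what's needed to see where
the memory is going.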
> Server has: 4Gb RAM, 3GB Swap and SATA Disk with RAID5
RAID 5 isn't a good choice for a database server; its write performance is
poor, which hurts exactly this kind of bulk update workload.
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect. (Linus Torvalds)
"If I was god, I would recompile penguin with --enable-fly." (unknown)
Kaufbach, Saxony, Germany, Europe. N 51.05082°, E 13.56889°