Sorry for my poor English.
My problem: I am running into performance problems as the load increases.
Every week I do a massive update of 50,000,000 records plus 2,000,000 inserts
into a huge table (over 50,000,000 records, ten fields, 12 GB on disk).
Current throughput: 120 records/s.
At the beginning I was getting much better speed: 1400 records/s.
CPU: dual Xeon 2.40 GHz (512 KB cache)
PostgreSQL version: 8.1.4
OS: Debian Linux, kernel 2.6.17-mm2
Disks: U320 SCSI drives on a U160 SCSI card, in software RAID 1
Memory: only 1 GB at the moment.
My database contains fewer than ten tables, but the main table takes more than
12 GB on disk. This table has ten text fields and two date fields.
I use only a few connections on this database.
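For reference, a rough sketch of the table layout (the table name, column
names, and the key type here are only illustrative, not the real schema):

    CREATE TABLE huge_table (
        id         text PRIMARY KEY,  -- lookup key used by the weekly job
        txt_1      text,
        txt_2      text,              -- ... eight more text columns ...
        dat_insert date,
        dat_update date               -- touched by the weekly UPDATE
    );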
I have tried many ideas:
- putting several thousand operations into one transaction (with BEGIN and
  COMMIT; see the sketch after this list)
- modifying parameters in postgresql.conf, such as:
    shared_buffers (tested with 30000, 50000, and 75000)
    fsync = off
    checkpoint_segments = 10 (also tested 20 and 30)
    checkpoint_timeout = 1000 (tested values from 30 to 1800)
    stats_start_collector = off
Unfortunately, I can't put the pg_xlog files on a separate disk.
But none of this produced a convincing result.
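To be concrete, the batching mentioned above looks roughly like this (the
table name and the id value are illustrative):

    BEGIN;
    UPDATE huge_table SET dat_update = current_date WHERE id = 'XXXX';
    -- ... several thousand more UPDATE/INSERT statements ...
    COMMIT;

The point of the batching is that each row no longer pays the cost of its
own commit.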
My program issues fairly simple queries. It does:

UPDATE table SET dat_update=current_date WHERE id=XXXX;

and, if no matching row is found, it does an INSERT INTO the table instead.
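Put together, one step of the job looks roughly like this (huge_table and its
column list are illustrative; PostgreSQL 8.1 has no built-in upsert, so the
fall-through to INSERT is driven by the row count the client gets back from
the UPDATE):

    UPDATE huge_table SET dat_update = current_date WHERE id = XXXX;

    -- executed only when the UPDATE above reports 0 rows affected:
    INSERT INTO huge_table (id, dat_insert, dat_update)
    VALUES (XXXX, current_date, current_date);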
My sysadmin tells me disk reads/writes are not the problem (checked with
iostat).
Does anyone have an idea to increase performance for my problem?