Re: loading increase into huge table with 50.000.000 records

From: Sven Geisler <sgeisler(at)aeccom(dot)com>
To: nuggets72(at)free(dot)fr
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: loading increase into huge table with 50.000.000 records
Date: 2006-07-26 16:02:27
Message-ID: 44C79213.6050308@aeccom.com
Lists: pgsql-performance

Hi Larry,

Do you run vacuum and analyze frequently?
Did you check PowerPostgresql.com for hints about PostgreSQL tuning?
<http://www.powerpostgresql.com/Docs/>
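
For example, running this after each weekly load should help (a minimal sketch;
"my_big_table" is only a placeholder for your large table):

VACUUM ANALYZE my_big_table;  -- reclaim dead row versions left by the UPDATEs
                              -- and refresh the planner statistics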

You can set wal_buffers, checkpoint_segments and checkpoint_timeout much higher.

Here is a sample that works for me:
wal_buffers = 128
checkpoint_segments = 256
checkpoint_timeout = 3600
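
Note that each WAL segment is 16 MB, so with checkpoint_segments = 256 several
GB of WAL can accumulate in pg_xlog between checkpoints; make sure the disk
holding pg_xlog has room for that.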

Cheers
Sven.

nuggets72(at)free(dot)fr wrote:
> Hello,
> Sorry for my poor English,
>
> My problem :
>
> I am running into performance problems as the load increases.
>
> There is a massive update of 50.000.000 records and 2.000.000 inserts, with a
> weekly frequency, into a huge table (more than 50.000.000 records, ten fields,
> 12 GB on hard disk).
>
> Current performance obtained: 120 records/s.
> At the beginning, I got a better speed: 1400 records/s.
>
>
> CPU: dual Xeon 2.40 GHz (512 KB cache)
> PostgreSQL version: 8.1.4
> OS: Debian Linux 2.6.17-mm2
> Hard disk: SCSI U320 with a U160 SCSI card, on software RAID 1
> Memory: only 1 GB at this time.
>
>
> My database contains fewer than ten tables, but the main table takes more
> than 12 GB on the hard disk. This table has ten text columns and two date columns.
>
> Only a few connections are used on this database.
>
> I have tried many ideas:
> - putting several thousand operations into one transaction (with BEGIN and COMMIT)
> - modifying parameters in postgresql.conf, such as
>   shared_buffers (several tests with 30000, 50000, 75000)
>   fsync = off
>   checkpoint_segments = 10 (several tests with 20 - 30)
>   checkpoint_timeout = 1000 (also tested 30 - 1800)
>   stats_start_collector = off
>
> Unfortunately, I can't use another disk for the pg_xlog files.
>
>
> But I did not obtain convincing results.
>
>
>
> My program issues quite simple requests.
> It does an
> UPDATE table SET dat_update = current_date WHERE id = XXXX;
> and, if no row is found,
> it does an
> INSERT INTO table
>
>
> My sysadmin tells me that disk reads/writes aren't the problem (checked with iostat).
>
>
> Do you have any ideas to improve performance for my problem?
>
> Thanks.
>
> Larry.
>
> ---------------------------(end of broadcast)---------------------------
> TIP 2: Don't 'kill -9' the postmaster
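
Regarding the UPDATE-then-INSERT pattern quoted above, here is a minimal sketch
of one batched transaction (table and column names are only placeholders based
on the quoted statements):

BEGIN;
-- the application sends one UPDATE per record ...
UPDATE big_table SET dat_update = current_date WHERE id = 4711;
-- ... and only when that UPDATE reports 0 affected rows, an INSERT for the same id
INSERT INTO big_table (id, dat_update) VALUES (4711, current_date);
-- repeat for a few thousand records, then
COMMIT;

Keep in mind that every UPDATE leaves a dead row version behind, which is why
frequent VACUUM matters so much for a table of this size.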

--
/This email and any files transmitted with it are confidential and
intended solely for the use of the individual or entity to whom they
are addressed. If you are not the intended recipient, you should not
copy it, re-transmit it, use it or disclose its contents, but should
return it to the sender immediately and delete your copy from your
system. Thank you for your cooperation./

Sven Geisler <sgeisler(at)aeccom(dot)com> Tel +49.30.5362.1627 Fax .1638
Senior Developer, AEC/communications GmbH Berlin, Germany
