Re: slow speeds after 2 million rows inserted

From: Nikola Milutinovic <alokin1(at)yahoo(dot)com>
To: Chad Wagner <chad(dot)wagner(at)gmail(dot)com>
Cc: PostgreSQL general <pgsql-general(at)postgresql(dot)org>
Subject: Re: slow speeds after 2 million rows inserted
Date: 2006-12-31 16:18:17
Message-ID: 20061231161817.91257.qmail@web58710.mail.re1.yahoo.com
Lists: pgsql-general

> 1. There is no difference (speed-wise) between committing every 1K or every 250K rows.

It has been quite some time since I experimented with this. My last experiment was on PG 7.2 or 7.3, inserting circa 800,000 rows. Inserting without explicit transactions took 25 hrs; inserting with 10,000 rows per transaction took about 2.5 hrs, so the speedup was roughly 10x. I have not experimented with the transaction batch size, but I suspect that 1,000 rows per transaction would not show much additional speedup.
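For anyone wanting to try this, here is a minimal sketch of the batching approach in Python with psycopg2. The connection string, table name (my_table), columns, and the generate_rows() data source are placeholders, not from the original experiment; adjust them to your schema.

    import psycopg2

    def generate_rows(n=800_000):
        """Stand-in data source; replace with the real rows to load."""
        for i in range(n):
            yield (i, "value-%d" % i)

    # Placeholder connection string and table for illustration only.
    conn = psycopg2.connect("dbname=test user=postgres")
    cur = conn.cursor()

    BATCH_SIZE = 10_000  # rows per transaction, as in the experiment above

    for i, row in enumerate(generate_rows(), start=1):
        cur.execute("INSERT INTO my_table (id, val) VALUES (%s, %s)", row)
        if i % BATCH_SIZE == 0:
            conn.commit()  # one commit per 10,000 rows instead of one per row

    conn.commit()  # commit the final partial batch
    cur.close()
    conn.close()

Since psycopg2 leaves autocommit off by default, each batch of inserts accumulates in one transaction until the explicit commit, which is what gives the speedup described above.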

> 2. Vacuuming also makes no difference for a heavy insert-only table, only slows it down.

Makes sense. Since my application was dumping all records each month and inserting new ones, vacuuming was really needed, but it gave no speedup.

> 3. Table size plays no real factor.

The reason I saw a speedup must be that, without explicit transactions, each insert was its own transaction, and that was eating resources.
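For contrast, a hedged sketch of the slow path: with autocommit enabled, every INSERT runs (and commits) as its own transaction, so each row pays the full commit overhead on its own. The table and connection string are the same placeholders as above.

    import psycopg2

    conn = psycopg2.connect("dbname=test user=postgres")  # placeholder DSN
    conn.autocommit = True  # every statement now runs in its own transaction

    cur = conn.cursor()
    # Each of these INSERTs commits individually, which is the
    # resource-hungry per-row behavior described above.
    cur.execute("INSERT INTO my_table (id, val) VALUES (%s, %s)", (1, "a"))
    cur.execute("INSERT INTO my_table (id, val) VALUES (%s, %s)", (2, "b"))

    cur.close()
    conn.close()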

Nix.

