Re: Slowdown problem when writing 1.7million records

From: Emmanuel Charpentier <charpent(at)bacbuc(dot)dyndns(dot)org>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Slowdown problem when writing 1.7million records
Date: 2001-02-27 12:42:57
Message-ID: 3A9BA0D1.827848FA@bacbuc.dyndns.org
Lists: pgsql-general

Stephen Livesey wrote:
>
> I am very new to PostgreSQL and have installed v7.0.3 on a Red Hat Linux
> server (v6.2). I am accessing the files using JDBC from a Windows 2000 PC.
>
> I have created a small file as follows:
> CREATE TABLE expafh (
> postcode CHAR(8) NOT NULL,
> postcode_record_no INT,
> street_name CHAR(30),
> town CHAR(31),
> PRIMARY KEY(postcode) )
>
> I am now writing 1.7 million records to this file.
>
> The first 100,000 records took 15 mins.
> The next 100,000 records took 30 mins.
> The last 100,000 records took 4 hours.
>
> In total, it took 43 hours to write 1.7 million records.
>
> Is this sort of degradation normal using a PostgreSQL database?

AFAICT, no.

> I have never experienced this sort of degradation with any other database
> and I have done exactly the same test (using the same hardware) on the
> following databases:
> DB2 v7 in total took 10 hours 6 mins
> Oracle 8i in total took 3 hours 20 mins
> Interbase v6 in total took 1 hr 41 min
> MySQL v3.23 in total took 54 mins
>
> Any help or advice would be appreciated.

Did you "vacuum analyse" your DB? This seems to be essential to PG
performance, for various reasons.
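
For instance (using the table name from your CREATE TABLE above), something
like this from psql after the bulk load, or at intervals during it, should
do it:

	VACUUM ANALYZE expafh;

That updates the planner's statistics and reclaims dead tuples, which can
make a noticeable difference on a table this size.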

Do you have a unique index on your primary key?
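
(The PRIMARY KEY clause should already have created a unique index
implicitly; you can verify from psql with something like:

	\d expafh

which should list a unique index, typically named expafh_pkey, on
postcode.)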

HTH,

Emmanuel Charpentier
