Re: Slowdown problem when writing 1.7million records

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Stephen Livesey" <ste(at)exact3ex(dot)co(dot)uk>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Slowdown problem when writing 1.7million records
Date: 2001-02-27 19:25:41
Message-ID: 7749.983301941@sss.pgh.pa.us
Lists: pgsql-general

"Stephen Livesey" <ste(at)exact3ex(dot)co(dot)uk> writes:
> I have created a small file as follows:
> CREATE TABLE expafh (
> postcode CHAR(8) NOT NULL,
> postcode_record_no INT,
> street_name CHAR(30),
> town CHAR(31),
> PRIMARY KEY(postcode) )

> I am now writing 1.7million records to this file.

> The first 100,000 records took 15 mins.
> The next 100,000 records took 30 mins.
> The last 100,000 records took 4 hours.

> In total, it took 43 hours to write 1.7million records.

> Is this sort of degradation normal using a PostgreSQL database?

No, it's not. Do you have any triggers or rules on this table that
you haven't shown us? How about other tables referencing this one
as foreign keys? (Probably not, if you're running an identical test
on MySQL, but I just want to be sure that I'm not missing something.)

How exactly are you writing the records?

I have a suspicion that the slowdown must be on the client side (perhaps
some inefficiency in the JDBC code?) but that's only a guess at this
point.
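
[Editor's note: one common client-side cause of this pattern, assuming the rows are inserted one statement at a time with autocommit on (the JDBC default), is that every INSERT pays the cost of its own transaction commit. A minimal sketch of the usual workaround, batching many rows per transaction against the quoted expafh table; the literal values here are invented for illustration:]

```sql
-- Hypothetical sketch: commit in batches rather than per row.
BEGIN;
INSERT INTO expafh (postcode, postcode_record_no, street_name, town)
    VALUES ('AB1 2CD', 1, 'High Street', 'Sometown');
INSERT INTO expafh (postcode, postcode_record_no, street_name, town)
    VALUES ('AB3 4EF', 2, 'Station Road', 'Othertown');
-- ... repeat for a few thousand rows per batch ...
COMMIT;
```

[For data volumes like 1.7 million rows, PostgreSQL's COPY command is the usual bulk-loading path and is typically much faster than individual INSERTs.]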

regards, tom lane
