I'm not sure if this is typical or not but:
I am running a command:
COPY email (address,domain) FROM '/tmp/temp.dat';
That is reading in some 1.7 million rows to a table which is defined as:
create table email (
address varchar(100) primary key,
status smallint default 0,
last_timestamp timestamp default now(),
created_timestamp timestamp default now(),
retry_count smallint default 0
);
I am running it inside a 'BEGIN' block, so in theory autocommit is off, and
that does seem to be the case, since no rows show up when I query from
another session while the load is running.
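For reference, the whole load as described above amounts to something like
this (a sketch; the file path and column list are the ones from the command
above, and nothing is visible to other sessions until the COMMIT):

-- load all 1.7 million rows in a single transaction
BEGIN;
COPY email (address, domain) FROM '/tmp/temp.dat';
COMMIT;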
I've increased sort_mem to 65535 or 65536.
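(Assuming that's the sort_mem setting — renamed work_mem in PostgreSQL 8.0,
with index builds governed separately by maintenance_work_mem — it can be
raised for just the loading session instead of server-wide, e.g.:

-- session-local; sort_mem is measured in kilobytes
SET sort_mem = 65536;
-- on 8.0+ the equivalent knobs would be work_mem and, for
-- (re)building indexes, maintenance_work_mem

The exact parameter name depends on the server version in use here.)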
The whole thing is writing to a partition on a relatively simple EIDE disk.
My iowait is around 95% according to iostat, and it's all pointed at this
one disk partition. And it's taking a REALLY long time.
Is this normal for a large data load like this?
Is there something "obvious" I could do in the future to better the situation?
Or am I simply bound by the type of hardware I'm running it on?
Are all those default values, plus the primary key index, going to be the death of me?
- Re: iowait at 2006-06-10 00:02:38 from Alan Hodgson