From: Tom Allison <tallison(at)tacocat(dot)net>
To: pgsql-novice(at)postgresql(dot)org
Subject: iowait
Date: 2006-06-09 23:44:33
Message-ID: 448A07E1.8090207@tacocat.net
Lists: pgsql-novice
I'm not sure if this is typical or not but:
I am running a command:
COPY email (address,domain) FROM '/tmp/temp.dat';
That is reading some 1.7 million rows into a table which is defined as:
create table email (
    address            varchar(100) primary key,
    domain             varchar(100),
    status             smallint  default 0,
    reason             varchar(64),
    last_timestamp     timestamp default now(),
    created_timestamp  timestamp default now(),
    retry_count        smallint  default 0
);
I am running it inside a 'BEGIN' block, so in effect autocommit is off; that seems
to be the case, since no rows are visible to queries yet.
I've increased sort_mem to 65536.
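For reference, the transaction-wrapped load described above amounts to roughly this (a sketch, assuming it is run as a superuser in psql with the data file on the server):

```sql
BEGIN;
-- bulk-load address/domain pairs; the remaining columns take their defaults
COPY email (address, domain) FROM '/tmp/temp.dat';
COMMIT;
```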
The whole thing is writing to a partition on a relatively simple EIDE disk.
My iowait is around 95% according to iostat, and it's all pointed at this one disk
partition. And it's taking a REALLY long time.
Questions:
Is this normal for a large data load like this?
Is there something "obvious" I could do in the future to better the situation?
Or am I simply bound by the type of hardware I'm running it on?
Are the default values and the primary key index going to be the death of me?