From: "Pierre C" <lists(at)peufeu(dot)com>
To: "Mario Splivalo" <mario(dot)splivalo(at)megafon(dot)hr>
Cc: "Mladen Gogala" <mladen(dot)gogala(at)vmsinfo(dot)com>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: SELECT INTO large FKyed table is slow
Date: 2010-12-01 08:23:23
Message-ID: op.vm0z89fgeorkce@apollo13
Lists: pgsql-performance
> Just once.
OK, another potential problem eliminated; this is getting strange...
> If I have 5000 lines in the CSV file (that I load into a 'temporary' table
> using COPY) I can be sure that drone_id there is a PK. That is because the
> CSV file contains measurements from all the drones, one measurement per
> drone. I usually have around 100 new drones, so I insert those into drones
> and into drones_history. I first insert into drones_history and then
> update those rows in drones. Should I try doing it the other way around?
No, it doesn't really matter.
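For reference, the load pattern described above could be sketched roughly as follows. Table and column names here (measurements_in, last_measurement, the CSV path) are illustrative assumptions, not taken from the actual schema:

```sql
-- Bulk-load the day's measurements into a temp table; COPY is the fast path.
CREATE TEMP TABLE measurements_in (LIKE drones_history INCLUDING DEFAULTS);
COPY measurements_in FROM '/path/to/measurements.csv' WITH (FORMAT csv);

-- Append the history rows...
INSERT INTO drones_history SELECT * FROM measurements_in;

-- ...then refresh the current state; the order of these two steps
-- doesn't matter for correctness, as noted above.
UPDATE drones d
   SET last_measurement = m.measurement
  FROM measurements_in m
 WHERE d.drone_id = m.drone_id;
```

The temp table is dropped automatically at the end of the session, so no cleanup is needed.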
> Although, I think I'm having some disk-related problems, because when
> inserting into the tables my I/O throughput is pretty low. For instance,
> when I drop constraints and then recreate them, that takes around 15-30
> seconds (on a 25M-row table) - disk I/O is steady, around 60 MB/s in
> read and write.
>
> It just could be that the ext3 partition is so fragmented. I'll try
> later this week on a new set of disks and ext4 filesystem to see how it
> goes.
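The drop-and-recreate step mentioned above would look something like this; the constraint name is an assumption (PostgreSQL's default naming convention), not taken from the actual schema:

```sql
BEGIN;
-- Drop the FK so the bulk insert skips per-row referential checks.
ALTER TABLE drones_history
  DROP CONSTRAINT drones_history_drone_id_fkey;

-- ... bulk insert here ...

-- Re-adding the FK validates all existing rows in one pass,
-- which is where the 15-30 s of steady sequential I/O goes.
ALTER TABLE drones_history
  ADD CONSTRAINT drones_history_drone_id_fkey
  FOREIGN KEY (drone_id) REFERENCES drones (drone_id);
COMMIT;
```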
If you CLUSTER a table, it is entirely rewritten, so if your disk free space
isn't heavily fragmented, you can hope the table and its indexes will be
allocated in a nice contiguous segment.
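As a minimal sketch (the index name is assumed, following PostgreSQL's default primary-key naming):

```sql
-- Rewrites the whole table in index order and rebuilds its indexes.
-- Note: CLUSTER takes an ACCESS EXCLUSIVE lock for the duration.
CLUSTER drones_history USING drones_history_pkey;

-- Planner statistics are not refreshed by CLUSTER, so re-analyze.
ANALYZE drones_history;
```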