On 11/29/2010 08:11 AM, Mark Kirkwood wrote:
> On 29/11/10 00:46, Mario Splivalo wrote:
>> This is the slow part:
>> INSERT INTO drones_history (sample_id, drone_id, drone_log_notice,
>> drone_temperature, drone_pressure)
>> SELECT * FROM tmpUpdate;
>> For 100 rows this takes around 2 seconds. For 1000 rows this takes
>> around 40 seconds. For 5000 rows this takes around 5 minutes.
>> For 50k rows this takes around 30 minutes! Now this is where I start
>> to lag behind, because I get a new CSV every 10 minutes or so.
> Have you created indexes on drones_history(sample_id) and
> drones_history(drone_id)? That would probably help speed up your INSERT
> quite a bit if you have not done so.
Yes. Since (sample_id, drone_id) is the primary key, Postgres created a
composite index on those columns. Are you suggesting I add two more
indexes, one on drone_id and one on sample_id?
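For context, a sketch of the situation being discussed (table and column names are taken from the thread; the exact DDL of drones_history is an assumption, since it is not shown here). Note that a separate index on sample_id alone would be largely redundant, because sample_id is the leading column of the primary-key index; and in any case extra indexes make INSERTs slower, not faster, since each one must be maintained per row:

```sql
-- Assumed shape of the table from the thread (hypothetical column types):
-- CREATE TABLE drones_history (
--     sample_id         integer NOT NULL,
--     drone_id          integer NOT NULL,
--     drone_log_notice  text,
--     drone_temperature numeric,
--     drone_pressure    numeric,
--     PRIMARY KEY (sample_id, drone_id)   -- implicit composite index
-- );

-- The PK index already serves lookups on (sample_id) or (sample_id, drone_id).
-- An additional single-column index would only help queries that filter by
-- drone_id without sample_id:
CREATE INDEX drones_history_drone_id_idx ON drones_history (drone_id);
```

Such an index helps foreign-key checks or queries that probe by drone_id alone; it does nothing to speed up the INSERT itself.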
> Also it would be worthwhile for you to post the output of:
> EXPLAIN ANALYZE INSERT INTO drones_history (sample_id, drone_id,
> drone_log_notice, drone_temperature, drone_pressure)
> SELECT * FROM tmpUpdate;
> to the list, so we can see what is taking the time.
Is there a way to do so inside a plpgsql function?
I can recreate the whole process within psql and then post the EXPLAIN
ANALYZE output; it would just take me some time to do so. I'll post it as
soon as I'm done.
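One way to get plans for statements that run inside a plpgsql function, without recreating the process in psql, is the auto_explain contrib module (available since PostgreSQL 8.4). A sketch, assuming the contrib module is installed and you have sufficient privileges to LOAD it:

```sql
-- Load auto_explain for the current session only:
LOAD 'auto_explain';

-- Log the plan of every statement, however fast (threshold in ms):
SET auto_explain.log_min_duration = 0;

-- Include actual row counts and timings, like EXPLAIN ANALYZE:
SET auto_explain.log_analyze = true;

-- Crucial for this case: also log statements nested inside functions,
-- i.e. the INSERT ... SELECT executed by the plpgsql function:
SET auto_explain.log_nested_statements = true;

-- Now call the function; the plans appear in the server log.
SELECT my_update_function();  -- hypothetical function name
```

The plans end up in the PostgreSQL server log rather than in the psql session, so you need read access to the log files to retrieve them.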