From: Pavel Velikhov <pvelikhov(at)yahoo(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: need help with a query
Date: 2007-10-19 15:05:15
Message-ID: 878599.37701.qm@web56406.mail.re3.yahoo.com
Lists: pgsql-performance
Hi,
I am updating a big table (90M records) with data from another rather large table (4M records). Here is my update query:
update links set target_size =
( select size from articles where articles.article_id = links.article_to)
I have built all the indexes one might need and increased shared_buffers to 400MB, and the query plan looks reasonable.
But it's taking an eternity to run: the query has been running for 3 hours now on my new Mac laptop, and looking at the activity monitor I see that postgres is spending all of this time in disk I/O (the average CPU load of the postgres process is about 4-5%).
However, just looking at the query, postgres could cache the articles table and do a single pass over the links table...
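One common rewrite for this pattern (a sketch, not tested against this schema) is PostgreSQL's UPDATE ... FROM join form, which lets the planner hash- or merge-join the two tables in a single pass instead of running one correlated subquery per row of links:

```sql
-- Join form: articles is scanned once and joined to links,
-- rather than probed once per updated row.
-- Note: unlike the subquery version, this leaves target_size
-- unchanged (rather than NULL) for links with no matching article.
UPDATE links
SET    target_size = articles.size
FROM   articles
WHERE  articles.article_id = links.article_to;
```

Table and column names are taken from the query above; whether this actually helps depends on the plan chosen, and a 90M-row update will still generate heavy write I/O regardless.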
Please let me know if there is a good solution to this.
Thanks!
Pavel Velikhov
Institute of Systems Programming
Russian Academy of Sciences