From: Patrick Hatcher <pathat(at)comcast(dot)net>
To: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Slow update statement
Date: 2005-08-06 13:16:02
Message-ID: 42F4B812.7070501@comcast.net
Lists: pgsql-performance
[Reposted from General section with updated information]
Pg 7.4.5
I'm running an update statement on about 12 million records using the
following query:
Update table_A
set F1 = b.new_data
from table_B b
where b.keyfield = table_A.keyfield
Both keyfields are indexed, and all other keys on table_A were dropped, yet this job has been running for over 15 hours. Is this normal?
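For reference, a plan check along these lines (same table and column names as the query above) should show whether the indexes are actually being used; plain EXPLAIN does not execute the update, and EXPLAIN ANALYZE on the bare join times just the read side:

EXPLAIN
UPDATE table_A
SET    F1 = b.new_data
FROM   table_B b
WHERE  b.keyfield = table_A.keyfield;

-- Time only the join between the two tables, without writing anything
EXPLAIN ANALYZE
SELECT count(*)
FROM   table_A a
JOIN   table_B b ON b.keyfield = a.keyfield;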
I stopped the process the first time after 3 hours of running because of excessive log rotation, and reset the conf file to these settings:
wal_buffers = 64 # min 4, 8KB each
# - Checkpoints -
checkpoint_segments = 128 # in logfile segments, min 1, 16MB each
checkpoint_timeout = 1800 # range 30-3600, in seconds
#checkpoint_warning = 30 # 0 is off, in seconds
#commit_delay = 0 # range 0-100000, in microseconds
#commit_siblings = 5 # range 1-1000
Would it just be quicker to run a JOIN statement to a temp file and then reinsert?
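Roughly what I have in mind, as a sketch (this assumes table_A only has keyfield and F1; the new table and index names are just placeholders):

-- Build a fresh copy of table_A with the new values in one pass,
-- then swap it in, instead of updating 12 million rows in place.
BEGIN;

CREATE TABLE table_A_new AS
SELECT a.keyfield,
       COALESCE(b.new_data, a.F1) AS F1   -- keep rows with no match in table_B
FROM   table_A a
LEFT JOIN table_B b ON b.keyfield = a.keyfield;

-- Recreate the index on the new table before swapping it in
CREATE INDEX table_A_new_keyfield_idx ON table_A_new (keyfield);

DROP TABLE table_A;
ALTER TABLE table_A_new RENAME TO table_A;

COMMIT;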
TIA
Patrick