From: Robert Klemme <shortcutter(at)googlemail(dot)com>
To: tv(at)fuzzy(dot)cz
Cc: Harry Mantheakis <harry(dot)mantheakis(at)riskcontrollimited(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Long Running Update - My Solution
Date: 2011-06-27 19:29:21
Message-ID: BANLkTim++zcMMhQwQ77U4c3_QgXJL4Sapw@mail.gmail.com
Lists: pgsql-performance
On Mon, Jun 27, 2011 at 5:37 PM, <tv(at)fuzzy(dot)cz> wrote:
>> The mystery remains, for me: why updating 100,000 records could complete
>> in as quickly as 5 seconds, whereas an attempt to update a million
>> records was still running after 25 minutes before we killed it?
>
> Hi, there are many possible causes. Usually this is caused by a plan
> change - imagine, for example, that you need to sort a table and the
> data just fits into work_mem, so it can be sorted in memory. If you
> run the same query with 10x the data, the sort has to spill to disk,
> which is far slower, of course.
>
> And there are other such problems ...
I would rather assume it is one of the "other problems", typically
related to transaction handling (e.g. checkpoints, WAL volume, creating
new versions of every modified row, and updating all affected
indexes...).
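That per-row overhead is one reason splitting a huge UPDATE into smaller
batches often helps. A minimal sketch (the table and column names here
are purely illustrative, not from this thread):

```sql
-- Hypothetical example: update a large table in batches of 100,000
-- rows so each transaction stays small and dead row versions can be
-- reclaimed between batches.
UPDATE big_table
SET    status = 'done'
WHERE  id IN (
    SELECT id
    FROM   big_table
    WHERE  status <> 'done'
    LIMIT  100000
);
-- Repeat (e.g. from a driver script) until the UPDATE reports 0 rows;
-- running VACUUM between batches lets autovacuum-style cleanup keep
-- table bloat in check.
```

Whether this beats a single big UPDATE depends on indexes, checkpoint
settings, and how much of the table is touched, so it is worth testing
with EXPLAIN on a representative subset first.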
Kind regards
robert
--
remember.guy do |as, often| as.you_can - without end
http://blog.rubybestpractices.com/