Re: Long Running Update

From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Harry Mantheakis" <harry(dot)mantheakis(at)riskcontrollimited(dot)com>
Cc: <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Long Running Update
Date: 2011-06-24 14:00:23
Message-ID: 4E045227020000250003EBA2@gw.wicourts.gov
Lists: pgsql-performance

Harry Mantheakis <harry(dot)mantheakis(at)riskcontrollimited(dot)com> wrote:

> IOW how do I convert - guesstimate! - these numbers into
> (plausible) time values?

They are abstract and it only matters that they are in the right
ratios to one another so that the planner can accurately pick the
cheapest plan. With the default settings, seq_page_cost is 1, so if
everything is tuned perfectly, the run time should match the time it
takes to sequentially read a number of pages (normally 8KB) which
matches the estimated cost. So with 8KB pages and seq_page_cost =
1, the cost number says it should take the same amount of time as a
sequential read of 130 GB.
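The conversion Kevin describes can be sketched in a few lines. (The 17-million cost figure below is an assumed value, chosen only because it works out to the 130 GB mentioned above; the constants are PostgreSQL's defaults.)

```python
# Back-of-envelope: convert a planner cost estimate into an equivalent
# volume of sequential I/O, assuming seq_page_cost = 1.0 (the default)
# and the standard 8 KB PostgreSQL block size.

PAGE_SIZE_BYTES = 8 * 1024   # default PostgreSQL page size
SEQ_PAGE_COST = 1.0          # default planner cost for one sequential page read

def cost_to_gigabytes(total_cost):
    """Treat the cost as 'sequential pages read' and convert to GB."""
    pages = total_cost / SEQ_PAGE_COST
    return pages * PAGE_SIZE_BYTES / (1024 ** 3)

# A hypothetical plan cost of ~17 million corresponds to roughly 130 GB:
print(cost_to_gigabytes(17_039_360))  # 130.0
```

This is only a rough yardstick, for the reasons given below: it models the time to *find* the rows, not the time to change them.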

The biggest reason this won't be close to actual run time is that
the planner just estimates the cost of *getting to* the correct
tuples for update, implicitly assuming that the actual cost of the
updates will be the same regardless of how you find the tuples to be
updated. So if your costs were set in perfect proportion to
reality, with seq_page_cost = 1, the above would tell you how fast a
SELECT of the data to be updated should be. The cost numbers don't
really give a clue about the time to actually UPDATE the rows.

-Kevin
