
Re: a heavy duty operation on an "unused" table kills my server

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Greg Smith <greg(at)2ndquadrant(dot)com>
Cc: Craig Ringer <craig(at)postnewspapers(dot)com(dot)au>, Eduardo Piombino <drakorg(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: a heavy duty operation on an "unused" table kills my server
Date: 2010-01-16 05:18:06
Message-ID: 29052.1263619086@sss.pgh.pa.us
Lists: pgsql-performance
Greg Smith <greg(at)2ndquadrant(dot)com> writes:
> You might note that only one of these sources--a backend allocating a 
> buffer--is connected to the process you want to limit.  If you think of 
> the problem from that side, it actually becomes possible to do something 
> useful here.  The most practical way to throttle something down without 
> a complete database redesign is to attack the problem via allocation.  
> If you limited the rate of how many buffers a backend was allowed to 
> allocate and dirty in the first place, that would be extremely effective 
> in limiting its potential damage to I/O too, albeit indirectly.

This is in fact exactly what the vacuum_cost_delay logic does.
It might be interesting to investigate generalizing that logic
so that it could throttle all of a backend's I/O, not just vacuum.
In principle I think it ought to work all right for any I/O-bound
query.
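
[The cost-delay scheme mentioned above can be sketched as follows. This is an illustrative model, not PostgreSQL source: the class and method names are hypothetical, while the cost weights and limit mirror the documented defaults of vacuum_cost_page_hit (1), vacuum_cost_page_miss (10), vacuum_cost_page_dirty (20), and vacuum_cost_limit (200). Each buffer access accrues cost; once the accumulated balance reaches the limit, the backend naps, indirectly capping its I/O rate.]

```python
import time

class CostBasedThrottle:
    """Sketch of vacuum_cost_delay-style accounting, generalized to any backend I/O."""
    PAGE_HIT = 1      # buffer found in shared buffers
    PAGE_MISS = 10    # buffer had to be read from disk
    PAGE_DIRTY = 20   # buffer was dirtied and must be written back

    def __init__(self, cost_limit=200, delay_s=0.002, sleep=time.sleep):
        self.cost_limit = cost_limit  # mirrors vacuum_cost_limit default
        self.delay_s = delay_s        # nap length, comparable to vacuum_cost_delay
        self.sleep = sleep            # injectable so the demo need not really sleep
        self.balance = 0
        self.naps = 0

    def charge(self, cost):
        """Accrue cost for one page access; nap once the limit is reached."""
        self.balance += cost
        if self.balance >= self.cost_limit:
            self.sleep(self.delay_s)  # yield I/O bandwidth to other backends
            self.naps += 1
            self.balance = 0

# Simulate an I/O-bound query doing 30 cache misses (300 cost points):
# the 200-point limit is crossed once, forcing one nap.
throttle = CostBasedThrottle(sleep=lambda s: None)
for _ in range(30):
    throttle.charge(CostBasedThrottle.PAGE_MISS)
print(throttle.naps)  # → 1
```

[The same accounting applied at every buffer allocation, rather than only inside vacuum, is what would throttle an arbitrary I/O-bound query.]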

But, as noted upthread, this is not high on the priority list
of any of the major developers.

			regards, tom lane
