Re: a heavy duty operation on an "unused" table kills my server
| From | Tom Lane |
|---|---|
| Subject | Re: a heavy duty operation on an "unused" table kills my server |
| Date | |
| Msg-id | 29052.1263619086@sss.pgh.pa.us |
| In response to | Re: a heavy duty operation on an "unused" table kills my server (Greg Smith <greg@2ndquadrant.com>) |
| Responses | Re: a heavy duty operation on an "unused" table kills my server |
| List | pgsql-performance |
Greg Smith <greg@2ndquadrant.com> writes:
> You might note that only one of these sources--a backend allocating a
> buffer--is connected to the process you want to limit. If you think of
> the problem from that side, it actually becomes possible to do something
> useful here. The most practical way to throttle something down without
> a complete database redesign is to attack the problem via allocation.
> If you limited the rate of how many buffers a backend was allowed to
> allocate and dirty in the first place, that would be extremely effective
> in limiting its potential damage to I/O too, albeit indirectly.
This is in fact exactly what the vacuum_cost_delay logic does.
It might be interesting to investigate generalizing that logic
so that it could throttle all of a backend's I/O, not just vacuum.
In principle I think it ought to work all right for any I/O-bound
query.
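
[Editor's note: a minimal sketch of the cost-based throttling idea being discussed, assuming illustrative parameter values modeled loosely on the vacuum_cost_* settings; the function name and numbers are hypothetical, not PostgreSQL's actual implementation.]

```c
/*
 * Sketch of cost-based I/O throttling in the style of vacuum_cost_delay:
 * each buffer access charges a cost, and once the accumulated balance
 * exceeds a limit the backend sleeps, bounding its sustained I/O rate.
 */
#include <unistd.h>

/* Illustrative cost parameters, patterned after the vacuum_cost_* GUCs. */
static const int cost_page_hit   = 1;   /* buffer found in shared buffers */
static const int cost_page_miss  = 10;  /* buffer had to be read from disk */
static const int cost_page_dirty = 20;  /* access dirtied a clean buffer   */
static const int cost_limit      = 200; /* budget before sleeping          */
static const int cost_delay_ms   = 20;  /* nap length once budget is spent */

static int cost_balance = 0;

/* Hypothetical hook: called once per buffer the backend touches. */
static void
charge_buffer_access(int hit, int dirtied)
{
    cost_balance += dirtied ? cost_page_dirty
                  : hit     ? cost_page_hit
                            : cost_page_miss;

    if (cost_balance >= cost_limit)
    {
        /* Sleep and reset the budget; this caps the sustained I/O rate. */
        usleep(cost_delay_ms * 1000L);
        cost_balance = 0;
    }
}
```

Generalizing to all backend I/O would amount to invoking such a charge at every buffer allocation or dirtying, not only inside vacuum.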
But, as noted upthread, this is not high on the priority list
of any of the major developers.
regards, tom lane