Re: Concurrent CREATE INDEX, try 2 (was Re: Reducing

From: Hannu Krosing
Subject: Re: Concurrent CREATE INDEX, try 2 (was Re: Reducing
Date:
Msg-id: 1133987593.3641.18.camel@localhost.localdomain
In response to: Re: Concurrent CREATE INDEX, try 2 (was Re: Reducing  (Greg Stark <gsstark@mit.edu>)
List: pgsql-hackers
On one fine day, Wed, 2005-12-07 at 13:36, Greg Stark wrote:
> Hannu Krosing <hannu@skype.net> writes:
> 
> > > But that said, realistically *any* solution has to obtain a lock at some time
> > > to make the schema change. I would say pretty much any O(1) (constant time)
> > > outage is at least somewhat acceptable as contrasted with the normal index
> > > build which locks out other writers for at least O(n lg n) time. Anything on
> > > the order of 100ms is probably as good as it gets here.
> > 
> > For me any delay less than the client timeout is acceptable and anything
> > more than that is not. N sec is ok, N+1 is not. It's as simple as that.
> 
> I don't think the client timeout is directly relevant here. 

It is relevant. It is the ultimate check of success or failure :)

> If your client
> timeout is 20s and you take 19s, how many requests have queued up behind you?
> If you normally process requests in under 200ms and receive 10 requests per
> second (handling at least 2 simultaneously) then you now have 190 requests
> queued up.

Again, I'm handling 20 to 200 simultaneously quite nicely.
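
For concreteness, here is the arithmetic behind that 190 figure as a
quick Python sketch (the numbers are Greg's; the assumption that clients
keep sending at the normal rate for the whole outage, with nothing
dropped or timing out in the meantime, is mine):

    # Backlog built up while writers are blocked, per Greg's example.
    arrival_rate = 10.0   # requests per second (Greg's figure)
    outage_s = 19.0       # seconds the index build holds the lock

    backlog = arrival_rate * outage_s
    print(f"{backlog:.0f} requests queued")   # -> 190 requests queued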

> Those requests take resources and will slow down your server. If
> they slow things down too much then you will start failing to meet your 200ms
> deadline.

If I can't meet the deadline, I've got a problem. The rest is
implementation detail.

> It's more likely that your system is engineered to use queueing and
> simultaneous dispatch to deal with spikes in load up to a certain margin. Say
> you know it can deal with spikes in load of up to 2x the regular rate.

I know it can; it's just that the 3x spike lasts for 6 hours :P

> Then
> you can deal with service outage of up to the 200ms deadline. If you can deal
> with spikes of up to 4x the regular rate then you can deal with an outage of
> up to 600ms. 

Small local fluctuations happen all the time. As a rule of thumb, I
want to stay below 50% resource usage on average for any noticeable
period, and will start looking for code optimisations or additional
hardware if that line is crossed.

> Moreover even if you had the extra resources available to handle a 19s backlog
> of requests, how long would it take you to clear that backlog? If you have a
> narrow headroom on meeting the deadline in the first place, and now you have
> even less headroom because of the resources dedicated to the queue, it'll take
> you a long time to clear the backlog.

While it feels heroic to run at 90% capacity, it is not usually a good
policy. All kinds of unforeseen stuff happens all the time -
checkpoints, backups, vacuums, unexpected growth, system cronjobs, ...
With too little headroom you are screwed anyway.
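
To put numbers on the headroom point, a sketch of how long the same
outage takes to recover from at different average loads (the
utilisation figures are mine for illustration; the 19s outage is from
Greg's example):

    def drain_seconds(outage_s, utilization):
        """Time for spare capacity to absorb the backlog from an outage.

        The backlog grows at the arrival rate (utilization * capacity)
        and drains at the spare capacity ((1 - utilization) * capacity),
        so the absolute capacity cancels out.
        """
        return outage_s * utilization / (1.0 - utilization)

    for u in (0.5, 0.75, 0.9):
        print(f"at {u:.0%} average load: "
              f"{drain_seconds(19.0, u):.0f} s to catch up")
    # at 50% average load: 19 s to catch up
    # at 75% average load: 57 s to catch up
    # at 90% average load: 171 s to catch up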

What I am aiming at with this CONCURRENT CREATE INDEX proposal is being
no more disruptive than the other stuff that keeps happening anyway.
That would be the baseline. Anything better is definitely desirable,
but should not be a stopper for implementing the baseline functionality.

-----------------
Hannu
