Re: Performance

From: Claudio Freire
Subject: Re: Performance
Date:
Msg-id: BANLkTimkNdOzx19uCkqrQaUQ4wEz+vSqnA@mail.gmail.com
In response to: Re: Performance  (Robert Haas <robertmhaas@gmail.com>)
Responses: Re: Performance
List: pgsql-performance
On Wed, Apr 27, 2011 at 10:27 PM, Robert Haas <robertmhaas@gmail.com> wrote:
>
> What if the user is using an SSD or ramdisk?
>
> Admittedly, in many cases, we could probably get somewhat useful
> numbers this way.  But I think it would be pretty expensive.
> gettimeofday() is one of the reasons why running EXPLAIN ANALYZE on a
> query is significantly slower than just running it normally.  I bet if
> we put such calls around every read() and write(), it would cause a
> BIG slowdown for workloads that don't fit in shared_buffers.

I've just been reading an ACM article about something closely related
to that.

The article was about cache-conscious scheduling. Mostly memory cache,
but disk cache isn't that different. There is a lot of real, serious
work on characterizing cache contention, and the article showed how a
simplified version of the cache reuse profile model behaves under
various workloads.

The simplified model used only cache miss rates, and it performed
even better than the more complex model - the authors went on to
analyze why.

Long story short, there is indeed a lot of literature on the subject,
with plenty of formal and experimental results. One of those models
has to be embodied in a patch and tested - that's about it.

The patch may be simple, the testing not so much. I know that.

What tools do we have to do that testing? There are lots, and all of
them imply a lot of work. Is that work worth the trouble? Because if
it is... why not do it?

I would propose a step in the right direction: a patch to compute and
log periodic estimates of the main I/O tunables: random_page_cost,
seq_page_cost and effective_cache_size. Maybe per-tablespace.
Evaluate the performance impact, and work from there.
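As a very rough sketch of the idea (not a patch, and deliberately
naive): time N random page-sized reads against N sequential ones on
the same file and take the ratio. The file name and sizes here are
invented for the demo, and on a file already in the OS cache the ratio
will be close to 1, which is itself part of what a real estimator
would have to account for.

```python
import os
import random
import time

BLOCK = 8192  # PostgreSQL's default page size


def make_test_file(path, blocks=2048):
    """Write a file of `blocks` pseudo-random 8 kB pages."""
    with open(path, "wb") as f:
        f.write(os.urandom(BLOCK * blocks))
    return blocks


def timed_reads(path, offsets):
    """Read one page at each offset; return elapsed wall-clock time.

    Uses os.pread (POSIX), so each read is positioned independently.
    """
    fd = os.open(path, os.O_RDONLY)
    try:
        start = time.perf_counter()
        for off in offsets:
            os.pread(fd, BLOCK, off)
        return time.perf_counter() - start
    finally:
        os.close(fd)


def estimate_cost_ratio(path, blocks):
    """Ratio of random to sequential read time, the analogue of
    random_page_cost / seq_page_cost."""
    seq = [i * BLOCK for i in range(blocks)]
    rnd = seq[:]
    random.shuffle(rnd)
    t_seq = timed_reads(path, seq)
    t_rnd = timed_reads(path, rnd)
    return t_rnd / t_seq


blocks = make_test_file("probe.dat", 2048)
print(estimate_cost_ratio("probe.dat", blocks))
os.remove("probe.dat")
```

A real implementation would of course sample the backend's own read()
calls rather than issue synthetic I/O, which is where the overhead
question above comes in.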

Probably just feeding those values straight into the optimizer won't
work: DBAs will want a way to tune the optimizer, the system may not
be stable enough, and even with accurate estimates for those values
the optimizer may not perform as expected. I mean, right now those
values are tunables, not real metrics, so perhaps the optimizer won't
respond well to real values.

But having the ability to measure them without a serious performance
impact is a step in the right direction, right?
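The overhead concern is easy to demonstrate in any language. A toy
measurement (Python's perf_counter standing in for gettimeofday, and a
trivial operation standing in for a buffered read) shows the pattern of
wrapping every call in two clock reads, which is what instrumenting
read()/write() would amount to:

```python
import time

N = 100_000  # number of simulated "reads"


def plain():
    """Run N cheap operations with no per-call timing."""
    total = 0
    start = time.perf_counter()
    for _ in range(N):
        total += 1
    return time.perf_counter() - start


def instrumented():
    """Same work, but with two clock reads wrapped around each operation."""
    total = 0
    spent = 0.0
    start = time.perf_counter()
    for _ in range(N):
        t0 = time.perf_counter()
        total += 1
        spent += time.perf_counter() - t0
    return time.perf_counter() - start


print("plain:", plain())
print("instrumented:", instrumented())
```

The absolute numbers depend on the clock source; the point is that the
per-call timing cost is fixed, so it matters most exactly where calls
are cheapest - buffered reads that never touch the disk.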
