Re: New GUC to sample log queries

From: David Rowley
Subject: Re: New GUC to sample log queries
Date:
Msg-id: CAKJS1f9T6QfJSWvVzYJOBV+AozTh3Kgtk2-YmAga6qLNcGfz2g@mail.gmail.com
In reply to: New GUC to sample log queries  (Adrien Nayrat <adrien.nayrat@anayrat.info>)
Responses: Re: New GUC to sample log queries
List: pgsql-hackers
On 31 May 2018 at 06:44, Adrien Nayrat <adrien.nayrat@anayrat.info> wrote:
> Here is a naive SELECT only bench with a dataset which fit in ram (scale factor
> = 100) and PGDATA and log on a ramdisk:
> shared_buffers = 4GB
> seq_page_cost = random_page_cost = 1.0
> logging_collector = on (no rotation)

It would be better to just run: SELECT 1; to measure the true
overhead of the additional logging code.

> I don't know the cost of random() call?

It's probably best to test in Postgres to see if there's an overhead
to the new code.  It may be worth special-casing the 0 and 1 cases so
random() is not called at all.

+    (random() < log_sample_rate * MAX_RANDOM_VALUE);

This should be <=, otherwise when log_sample_rate is 1.0 you'll
randomly miss logging a query every 4 billion or so queries.

Of course, it would be better if we had a proper profiler, but I can
see your need for this. Enabling logging of all queries in production
is currently reserved for people with low traffic servers and the
insane.

-- 
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

