Re: PoC: history of recent vacuum/checkpoint runs (using new hooks)
From: Tomas Vondra
Subject: Re: PoC: history of recent vacuum/checkpoint runs (using new hooks)
Msg-id: 25dce92f-43f4-42b3-8370-313e4be7796b@vondra.me
In reply to: Re: PoC: history of recent vacuum/checkpoint runs (using new hooks) (Robert Treat <rob@xzilla.net>)
Responses: Re: PoC: history of recent vacuum/checkpoint runs (using new hooks)
List: pgsql-hackers
On 1/7/25 21:42, Robert Treat wrote:
> On Tue, Jan 7, 2025 at 10:44 AM Bertrand Drouvot
> <bertranddrouvot.pg@gmail.com> wrote:
>>
>> ...
>>
>> Another idea regarding the storage of those metrics: I think that one would
>> want to see "precise" data for recent metrics but would probably be fine with
>> some level of aggregation for historical ones. Something like being able to
>> retrieve "1 day of raw data" and say one year of data aggregated by day
>> (average, maximum, minimum, standard deviation and maybe some percentiles)
>> could be fine too.
>>
>
> While I'm sure some people are ok with it, I would say that most of
> the observability/metrics community has moved away from aggregated
> data storage towards raw time series data in tools like prometheus,
> tsdb, and timescale in order to avoid the problems that misleading /
> lossy / low-resolution data can create.
>

That's how I see it too. My primary goal is to provide the raw data, even
if it covers only a limited amount of time, so that it can be either
queried directly, or ingested regularly into something like prometheus.

I can imagine a more complicated system, aggregating the data into a lower
resolution (e.g. per day) after some time. But that's not a complete
solution, because e.g. what if there are many relations that happen to be
vacuumed only once per day?

regards

--
Tomas Vondra
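For illustration only, the kind of per-day roll-up Bertrand describes (average, minimum, maximum, standard deviation, plus a percentile) could be sketched roughly as below. This is a hypothetical standalone sketch, not part of the proposed patch; the function names, the `(timestamp, duration_ms)` record shape, and the nearest-rank percentile choice are all assumptions for the example.

```python
# Hypothetical sketch of aggregating raw per-run vacuum metrics into
# daily summaries. Not PostgreSQL code; names and fields are illustrative.
from collections import defaultdict
from datetime import datetime
from math import ceil
from statistics import mean, pstdev

def percentile(values, p):
    """Nearest-rank percentile of values (p in 0..100)."""
    s = sorted(values)
    k = max(ceil(p / 100 * len(s)) - 1, 0)
    return s[k]

def aggregate_daily(runs):
    """runs: iterable of (timestamp, duration_ms) for individual runs.

    Returns {date: {"avg", "min", "max", "stddev", "p95"}} per day.
    """
    by_day = defaultdict(list)
    for ts, duration in runs:
        by_day[ts.date()].append(duration)
    return {
        day: {
            "avg": mean(v),
            "min": min(v),
            "max": max(v),
            "stddev": pstdev(v),
            "p95": percentile(v, 95),
        }
        for day, v in by_day.items()
    }
```

The roll-up loses exactly the information the thread worries about: a relation vacuumed once per day contributes a single sample, so its daily "distribution" is degenerate, which is one reason raw data remains the primary goal.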