Re: Proposal: Adding json logging

From: David Arnold
Subject: Re: Proposal: Adding json logging
Date:
Msg-id: CAH6vsW+9844bPO5+QU1fTjXaMQYujGOAv=CNU6gmAuvhTHdi8g@mail.gmail.com
In reply to: Re: Proposal: Adding json logging (John W Higgins <wishdev@gmail.com>)
Responses: Re: Proposal: Adding json logging (Christophe Pettus <xof@thebuild.com>)
List: pgsql-hackers
>Have you asked that question? You seem to at least have opened the source code - did you try to figure out what the logging format is?
1. -> No. 2. -> Yes.

I might be wrong, but something in my head tells me the order could just as well be reversed. Unfortunately, I'm not experienced enough to tell from the code and execution context whether that guarantee exists, and the docs say nothing about such guarantees either [1]. I let myself be guided by serverfault questions like this one [2]. The existence of a log id in the standard format (if I got that one right) led me to think such guarantees do NOT exist.

Other than that, this is still not "compliant" with one-event-one-line semantics and forces me to tail logs [3] from the file system, so no docker logging driver at all for that purpose, with all its implications for provisioning log aggregation in my clusters. Just to make it clear: that's primarily my problem and not a problem of postgres in general. But it doesn't help either.
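To make the one-event-one-line point concrete, here is a minimal sketch of what a line-oriented shipper sees (the log lines are an illustrative example in the style of PostgreSQL's stderr output, not verbatim server output):

```python
# A single logical PostgreSQL event often spans several lines on stderr
# (the lines below are illustrative, not verbatim server output).
raw = (
    "2018-04-15 13:00:00 UTC ERROR:  division by zero\n"
    "2018-04-15 13:00:00 UTC STATEMENT:  SELECT 1/0;\n"
)

# A line-oriented driver (e.g. docker's json-file driver) ships one
# record per line, so one logical event arrives as two records that
# the aggregator must stitch back together.
records = raw.splitlines()
print(len(records))  # 2 records for 1 logical event
```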

>What does JSON logging have to do with "event by event" streaming?
It's a commonly chosen option in that problem space, as is its alternative logfmt, but there is no direct causal chain; it's more a case of "wisdom of the multitude" (which can be terribly wrong at times). As said before, CSV without embedded newlines would equally be a "fix" for my problem.
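For comparison, a structured one-event-one-line encoding could look like this (a sketch only; the field names are hypothetical, not an existing PostgreSQL output format):

```python
import json

# One logical event serialized as a single JSON line. Field names are
# hypothetical; PostgreSQL does not emit this format today.
event = {
    "timestamp": "2018-04-15T13:00:00Z",
    "severity": "ERROR",
    "message": "division by zero",
    "statement": "SELECT 1/0;",
}
line = json.dumps(event)

# The newline is the record separator, so any line-oriented driver
# ships exactly one record per event, and parsing is a single call.
assert "\n" not in line
print(json.loads(line)["severity"])  # ERROR
```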

>Docker also lists Fluent as a standard driver for logging [1] - along with syslog and a variety of others. It's outstanding that JSON is their default - but they seem perfectly happy to accommodate plenty of other options. I don't see how there is a conflict here.
>I was also under the impression things like Fluent existed for the sole purpose of taking disparate logging solutions and bringing them under one roof - it would seem like it wants nothing more than for PostgreSQL to do as it pleases with the logs and they will pick it up and run with it.
>If there is an actual ambiguity with how logs are produced - I'm certain plenty of folks on here would like to solve that issue immediately. But I don't see anything stopping Docker/Fluent from using what is currently on the table.

While this describes my intended take on log aggregation well, multi-line logging breaks things (or at least makes them a lot more complicated than they need to be) at the parsing stage, as in [4].
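For reference, the fluentd multiline workaround from [3] looks roughly like this (a sketch based on the v0.12 in_tail/multiline parser docs; the path and regexes are illustrative and depend on log_line_prefix, and it only works when tailing a file):

```
<source>
  @type tail
  path /var/log/postgresql/postgresql.log
  tag postgres
  format multiline
  # First line of an event: anything starting with a timestamp.
  format_firstline /^\d{4}-\d{2}-\d{2}/
  # The whole (possibly multi-line) event is then matched as one record;
  # STATEMENT/DETAIL continuation lines get folded into "message".
  format1 /^(?<time>\d{4}-\d{2}-\d{2} \S+) (?<level>\w+):  (?<message>.*)/
</source>
```

Note that this ties the pipeline to file tailing and to fragile regexes, which is exactly the complication a one-event-one-line format would remove.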

Do you think there is a chance of an alternative solution to the problem exposed here? I'm happy to dig further.
json/logfmt still seems a promising option when thinking about the problem from its end. For now I would define the problem like so:

Core problem: "Multi-line logs are unnecessarily inconvenient to parse and are not compatible with the design of some (commonly used) log aggregation flows."
2nd-order problem: "The logging space is increasingly moving towards the adoption of structured logging formats around json/logfmt. Compatibility options (plural!) with mainstream (not necessarily standard) tooling are a value proposition of their own kind. They help increase the odds of responsible deployments and improve the overall experience of adopting PostgreSQL."

Best, David


On Sun., Apr 15, 2018 at 13:29, John W Higgins (<wishdev@gmail.com>) wrote:
On Sun, Apr 15, 2018 at 11:08 AM, David Arnold <dar@xoe.solutions> wrote:
>This would appear to solve multiline issues within Fluent.....
>https://docs.fluentd.org/v0.12/articles/parser_multiline

I definitely looked at that, but what guarantees do I have that the sequence is always ERROR/STATEMENT/DETAIL, and not the other way round?

Have you asked that question? You seem to at least have opened the source code - did you try to figure out what the logging format is?
 
And it only works when tailing a log file, so I cannot use a native docker logging driver, which streams event by event.
This again prohibits me from using a host-global docker logging driver configuration as my standard option for host provisioning.

What does JSON logging have to do with "event by event" streaming?

Docker also lists Fluent as a standard driver for logging [1] - along with syslog and a variety of others. It's outstanding that JSON is their default - but they seem perfectly happy to accommodate plenty of other options. I don't see how there is a conflict here.

I was also under the impression things like Fluent existed for the sole purpose of taking disparate logging solutions and bringing them under one roof - it would seem like it wants nothing more than for PostgreSQL to do as it pleases with the logs and they will pick it up and run with it.

If there is an actual ambiguity with how logs are produced - I'm certain plenty of folks on here would like to solve that issue immediately. But I don't see anything stopping Docker/Fluent from using what is currently on the table.

--
XOE Solutions
DAVID ARNOLD
General Manager
xoe.solutions
dar@xoe.solutions
+57 (315) 304 13 68
Confidentiality Note: This email may contain confidential and/or private information. If you received this email in error please delete and notify sender.
Environmental Consideration: Please avoid printing this email on paper, unless really necessary.
