Thread: csvlog gets crazy when csv file is not writable

csvlog gets crazy when csv file is not writable

From:
Alexander Kukushkin
Date:
Hi,

I am using the following config parameters for logging:
log_destination = 'csvlog'
logging_collector = on
log_filename = 'postgresql-%Y-%m-%d.log'

If for some reason postgres can't open the 'postgresql-%Y-%m-%d.csv' file
for writing, it gets mad and outputs a few thousand lines to
stderr:

2018-08-20 15:40:46.919 CEST [22067] HINT:  Future log output will
appear in directory "log".
2018-08-20 15:40:46.920 CEST [22069] FATAL:  could not open log file
"log/postgresql-2018-08-20.csv": Permission denied
2018-08-20 15:40:46.920 CEST [22069] FATAL:  could not open log file
"could not open log file "%s": Permission denied": Permission denied
2018-08-20 15:40:46.920 CEST [22069] FATAL:  could not open log file
"could not open log file "%s": Permission denied": Permission denied
2018-08-20 15:40:46.920 CEST [22069] FATAL:  could not open log file
"could not open log file "%s": Permission denied": Permission denied
2018-08-20 15:40:46.920 CEST [22069] FATAL:  could not open log file
"could not open log file "%s": Permission denied": Permission denied
2018-08-20 15:40:46.920 CEST [22069] PANIC:  ERRORDATA_STACK_SIZE exceeded
2018-08-20 15:40:46.920 CEST [22069] PANIC:  could not open log file
"could not open log file "%s": Permission denied": Permission denied
2018-08-20 15:40:46.920 CEST [22069] PANIC:  could not open log file
"could not open log file "%s": Permission denied": Permission denied
2018-08-20 15:40:46.920 CEST [22069] PANIC:  could not open log file
"could not open log file "%s": Permission denied": Permission denied
2018-08-20 15:40:46.920 CEST [22069] PANIC:  could not open log file
"could not open log file "%s": Permission denied": Permission denied
2018-08-20 15:40:46.920 CEST [22069] PANIC:  ERRORDATA_STACK_SIZE exceeded

And so on. ERRORDATA_STACK_SIZE appears in the output 3,963 times.

Sure, it is entirely my fault that the csv file is not writable, but such
an avalanche of PANIC lines is really scary.

We tried to debug it a little bit and here is the backtrace:
#0 logfile_open (filename=0x555556011ec8
"log/postgresql-2018-08-20.csv", mode=<optimized out>,
allow_errors=<optimized out>) at syslogger.c:1149
#1 0x00005555558de0fe in open_csvlogfile () at syslogger.c:1105
#2 0x00005555558de19e in write_syslogger_file (buffer=0x5555560122e0
"2018-08-20 15:51:41.292 CEST,,,20696,,5b7ac76b.50d8,3,,2018-08-20
15:51:39 CEST,,0,FATAL,42501,\"could not open log file \"\"could not
open log file \"\"%s\"\": Permission denied\"\": Permission
denied\",,,,,,,"..., count=205, destination=destination@entry=8) at
syslogger.c:1002
#3 0x0000555555a8f461 in write_csvlog
(edata=edata@entry=0x555555fdad30 <errordata+368>) at elog.c:2837
#4 0x0000555555a8f8c7 in send_message_to_server_log
(edata=edata@entry=0x555555fdad30 <errordata+368>) at elog.c:3055
#5 0x0000555555a8fec2 in EmitErrorReport () at elog.c:1479
#6 0x0000555555a9000a in errfinish (dummy=<optimized out>) at elog.c:483
#7 0x00005555558dda8d in logfile_open (filename=<optimized out>,
mode=<optimized out>, allow_errors=<optimized out>) at
syslogger.c:1149
#8 0x00005555558de0fe in open_csvlogfile () at syslogger.c:1105
#9 0x00005555558de19e in write_syslogger_file (buffer=0x55555603b740
'\177' <repeats 200 times>..., count=2052,
destination=destination@entry=8) at syslogger.c:1002
#10 0x0000555555a8f461 in write_csvlog
(edata=edata@entry=0x555555fdac78 <errordata+184>) at elog.c:2837
#11 0x0000555555a8f8c7 in send_message_to_server_log
(edata=edata@entry=0x555555fdac78 <errordata+184>) at elog.c:3055
#12 0x0000555555a8fec2 in EmitErrorReport () at elog.c:1479
#13 0x0000555555a9000a in errfinish (dummy=<optimized out>) at elog.c:483
#14 0x00005555558dda8d in logfile_open (filename=<optimized out>,
mode=<optimized out>, allow_errors=<optimized out>) at
syslogger.c:1149
#15 0x00005555558de0fe in open_csvlogfile () at syslogger.c:1105
#16 0x00005555558de19e in write_syslogger_file (buffer=0x555556011ec8
"log/postgresql-2018-08-20.csv", count=185,
destination=destination@entry=8) at syslogger.c:1002
#17 0x0000555555a8f461 in write_csvlog
(edata=edata@entry=0x555555fdabc0 <errordata>) at elog.c:2837
#18 0x0000555555a8f8c7 in send_message_to_server_log
(edata=edata@entry=0x555555fdabc0 <errordata>) at elog.c:3055
#19 0x0000555555a8fec2 in EmitErrorReport () at elog.c:1479
#20 0x0000555555a9000a in errfinish (dummy=<optimized out>) at elog.c:483
#21 0x00005555558dda8d in logfile_open (filename=<optimized out>,
mode=<optimized out>, allow_errors=<optimized out>) at
syslogger.c:1149
#22 0x00005555558de0fe in open_csvlogfile () at syslogger.c:1105
#23 0x00005555558de19e in write_syslogger_file (buffer=0x7fffffffc48c
"2018-08-20 15:51:39.187 CEST,,,20669,,5b7ac76b.50bd,1,,2018-08-20
15:51:39 CEST,,0,LOG,00000,\"ending log output to stderr\",,\"Future
log output will go to log destination \"\"csvlog\"\".\",,,,,,,\"\"\n",
count=192, destination=8) at syslogger.c:1002
#24 0x00005555558de3d0 in process_pipe_input (logbuffer=0x7fffffffc3d0
"", bytes_in_logbuffer=0x7fffffffc3cc) at syslogger.c:898
#25 0x00005555558dea7c in SysLoggerMain (argc=<optimized out>,
argv=<optimized out>) at syslogger.c:458
#26 0x00005555558decdb in SysLogger_Start () at syslogger.c:613
#27 0x00005555558dd114 in PostmasterMain (argc=3, argv=<optimized
out>) at postmaster.c:1281
#28 0x000055555583b3da in main (argc=3, argv=0x55555600fcf0) at main.c:228

Regards,
--
Alexander Kukushkin


Re: csvlog gets crazy when csv file is not writable

From:
Michael Paquier
Date:
On Mon, Aug 20, 2018 at 03:55:01PM +0200, Alexander Kukushkin wrote:
> If for some reason postgres can't open the 'postgresql-%Y-%m-%d.csv' file
> for writing, it gets mad and outputs a few thousand lines to
> stderr:
>
> 2018-08-20 15:40:46.920 CEST [22069] PANIC:  could not open log file

Ah, this log message could be changed to be simply "could not open
file", the file name offers enough context...

> And so on. ERRORDATA_STACK_SIZE appears in the output 3,963 times.
>
> Sure, it is entirely my fault that the csv file is not writable, but such
> an avalanche of PANIC lines is really scary.

Yeah, this is a recursion in logfile_open -> open_csvlogfile.  With
stderr there is a much better effort, where the server just quits with a
FATAL if the log file cannot be opened in SysLogger_Start.  Could this
be an argument for allowing logfile_open() to use write_stderr?  I am
not sure whether that would fall under the don't-do-that rule.  And we
would then make sure at an early stage that the log destination is
writable, which would cover scenarios like the kernel switching the log
partition to read-only.
--
Michael

Attachments

Re: csvlog gets crazy when csv file is not writable

From:
Dmitry Dolgov
Date:
> On Tue, 21 Aug 2018 at 04:23, Michael Paquier <michael@paquier.xyz> wrote:
> > On Mon, Aug 20, 2018 at 03:55:01PM +0200, Alexander Kukushkin wrote:
> > If for some reason postgres can't open the 'postgresql-%Y-%m-%d.csv' file
> > for writing, it gets mad and outputs a few thousand lines to
> > stderr:
> >
> > 2018-08-20 15:40:46.920 CEST [22069] PANIC:  could not open log file
>
> Ah, this log message could be changed to be simply "could not open
> file", the file name offers enough context...

Yeah, I had the same idea while investigating this: it would be easier
just to change this log message and remove the filename completely, or
format the message somehow without using va_list.

> > And so on. ERRORDATA_STACK_SIZE appears in the output 3,963 times.
> >
> > Sure, it is entirely my fault that the csv file is not writable, but such
> > an avalanche of PANIC lines is really scary.
>
> Yeah, this is a recursion in logfile_open -> open_csvlogfile.  With
> stderr there is a much better effort, where the server just quits with a
> FATAL if the log file cannot be opened in SysLogger_Start.

I also wonder whether there are any strong reasons why the behaviour is
different for errors with the csv log file; maybe it's possible to unify them?

Re: csvlog gets crazy when csv file is not writable

From:
Tom Lane
Date:
Dmitry Dolgov <9erthalion6@gmail.com> writes:
>> On Tue, 21 Aug 2018 at 04:23, Michael Paquier <michael@paquier.xyz> wrote:
> On Mon, Aug 20, 2018 at 03:55:01PM +0200, Alexander Kukushkin wrote:
>>> If for some reason postgres can't open the 'postgresql-%Y-%m-%d.csv' file
>>> for writing, it gets mad and outputs a few thousand lines to
>>> stderr:

>> Yeah, this is a recursion in logfile_open -> open_csvlogfile.  With
>> stderr there is a much better effort, where the server just quits with a
>> FATAL if the log file cannot be opened in SysLogger_Start.

> I also wonder if there are any strong reasons why in the case of errors with
> csv log file behaviour is different, maybe it's possible to unify them?

Yeah.  This seems to boil down to somebody deciding that it'd be okay
to delay opening the csvlogFile until we actually need to write to it.
As this example shows, that's just crazy, because it neglects to account
for the possibility of failure.  What we need to do is, first, make the
csvlogFile handling look as much as possible like the well-tested
syslogFile handling, and second, decide what we'll do about open failures
so that they *don't* lead to recursion.

At postmaster startup, we can open csvlogFile if requested and fail if
we can't, just as for syslogFile.  And if we fail to make a rotation,
it's okay to keep writing to the old file, just as for syslogFile.
But we have to allow for Log_destination to change after startup,
and that means we might need to open csvlogFile when it wasn't open
before, and we can't just quit if we fail.

What the attached patch does about that is to redirect CSV-format
output to syslogFile if we get CSV output when csvlogFile isn't open.
The only other plausible answer I can see is to drop such output on
the floor, and that seems pretty unfriendly.

Also we really ought to close csvlogFile if CSV logging is turned off;
the existing code doesn't, but this adds that.

Also, while looking at this I noticed what seems to be pretty dodgy code
in write_stderr:

void
write_stderr(const char *fmt,...)
{
    va_list        ap;

    fmt = _(fmt);

    va_start(ap, fmt);

I'm not sure how, or if, va_start actually uses its second argument;
it seems likely that that's vestigial in all modern compilers.  But
on a machine where it's *not* vestigial, assigning a new value to fmt
before we give it to va_start doesn't seem like a bright idea.
I think it'd be better to avoid that and just write _(fmt) twice,
as attached.
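A compact illustration of the fixed pattern (with `_()` stubbed out to stand in for gettext so this compiles standalone, and the output captured in a buffer for demonstration rather than written to stderr):

```c
#include <stdarg.h>
#include <stdio.h>

/*
 * Sketch of the write_stderr() fix.  va_start(ap, fmt) must name the
 * function's actual last declared parameter, so instead of reassigning
 * fmt = _(fmt) before va_start, the translation is applied at the point
 * of use: vsnprintf(..., _(fmt), ap).
 */
#define _(x) (x)                /* stand-in for gettext's translation macro */

static char outbuf[256];

static void
format_message(const char *fmt, ...)
{
    va_list     ap;

    va_start(ap, fmt);          /* fmt still holds its original value */
    vsnprintf(outbuf, sizeof(outbuf), _(fmt), ap);  /* translate at use */
    va_end(ap);
}

int
main(void)
{
    format_message("could not write to log file: %s\n", "Permission denied");
    printf("%s", outbuf);
    return 0;
}
```

Whether a given ABI actually consults va_start's second argument or not, calling it with an unmodified parameter is the only portable form, and writing `_(fmt)` twice costs nothing on the error path.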

            regards, tom lane

diff --git a/src/backend/postmaster/syslogger.c b/src/backend/postmaster/syslogger.c
index 58b759f..92cd33b 100644
*** a/src/backend/postmaster/syslogger.c
--- b/src/backend/postmaster/syslogger.c
*************** static void syslogger_parseArgs(int argc
*** 135,141 ****
  NON_EXEC_STATIC void SysLoggerMain(int argc, char *argv[]) pg_attribute_noreturn();
  static void process_pipe_input(char *logbuffer, int *bytes_in_logbuffer);
  static void flush_pipe_input(char *logbuffer, int *bytes_in_logbuffer);
- static void open_csvlogfile(void);
  static FILE *logfile_open(const char *filename, const char *mode,
               bool allow_errors);

--- 135,140 ----
*************** SysLoggerMain(int argc, char *argv[])
*** 272,282 ****
  #endif                            /* WIN32 */

      /*
!      * Remember active logfile's name.  We recompute this from the reference
       * time because passing down just the pg_time_t is a lot cheaper than
       * passing a whole file path in the EXEC_BACKEND case.
       */
      last_file_name = logfile_getname(first_syslogger_file_time, NULL);

      /* remember active logfile parameters */
      currentLogDir = pstrdup(Log_directory);
--- 271,283 ----
  #endif                            /* WIN32 */

      /*
!      * Remember active logfiles' name(s).  We recompute 'em from the reference
       * time because passing down just the pg_time_t is a lot cheaper than
       * passing a whole file path in the EXEC_BACKEND case.
       */
      last_file_name = logfile_getname(first_syslogger_file_time, NULL);
+     if (csvlogFile != NULL)
+         last_csv_file_name = logfile_getname(first_syslogger_file_time, ".csv");

      /* remember active logfile parameters */
      currentLogDir = pstrdup(Log_directory);
*************** SysLoggerMain(int argc, char *argv[])
*** 333,338 ****
--- 334,347 ----
              }

              /*
+              * Force a rotation if CSVLOG output was just turned on or off and
+              * we need to open or close csvlogFile accordingly.
+              */
+             if (((Log_destination & LOG_DESTINATION_CSVLOG) != 0) !=
+                 (csvlogFile != NULL))
+                 rotation_requested = true;
+
+             /*
               * If rotation time parameter changed, reset next rotation time,
               * but don't immediately force a rotation.
               */
*************** SysLogger_Start(void)
*** 580,591 ****
--- 589,615 ----
       * a time-based rotation.
       */
      first_syslogger_file_time = time(NULL);
+
      filename = logfile_getname(first_syslogger_file_time, NULL);

      syslogFile = logfile_open(filename, "a", false);

      pfree(filename);

+     /*
+      * Likewise for the initial CSV log file, if that's enabled.  (Note that
+      * we open syslogFile even when only CSV output is nominally enabled,
+      * since some code paths will write to syslogFile anyway.)
+      */
+     if (Log_destination & LOG_DESTINATION_CSVLOG)
+     {
+         filename = logfile_getname(first_syslogger_file_time, ".csv");
+
+         csvlogFile = logfile_open(filename, "a", false);
+
+         pfree(filename);
+     }
+
  #ifdef EXEC_BACKEND
      switch ((sysloggerPid = syslogger_forkexec()))
  #else
*************** SysLogger_Start(void)
*** 675,683 ****
                  redirection_done = true;
              }

!             /* postmaster will never write the file; close it */
              fclose(syslogFile);
              syslogFile = NULL;
              return (int) sysloggerPid;
      }

--- 699,712 ----
                  redirection_done = true;
              }

!             /* postmaster will never write the file(s); close 'em */
              fclose(syslogFile);
              syslogFile = NULL;
+             if (csvlogFile != NULL)
+             {
+                 fclose(csvlogFile);
+                 csvlogFile = NULL;
+             }
              return (int) sysloggerPid;
      }

*************** syslogger_forkexec(void)
*** 699,704 ****
--- 728,734 ----
      char       *av[10];
      int            ac = 0;
      char        filenobuf[32];
+     char        csvfilenobuf[32];

      av[ac++] = "postgres";
      av[ac++] = "--forklog";
*************** syslogger_forkexec(void)
*** 720,725 ****
--- 750,770 ----
  #endif                            /* WIN32 */
      av[ac++] = filenobuf;

+ #ifndef WIN32
+     if (csvlogFile != NULL)
+         snprintf(csvfilenobuf, sizeof(csvfilenobuf), "%d",
+                  fileno(csvlogFile));
+     else
+         strcpy(csvfilenobuf, "-1");
+ #else                            /* WIN32 */
+     if (csvlogFile != NULL)
+         snprintf(csvfilenobuf, sizeof(csvfilenobuf), "%ld",
+                  (long) _get_osfhandle(_fileno(csvlogFile)));
+     else
+         strcpy(csvfilenobuf, "0");
+ #endif                            /* WIN32 */
+     av[ac++] = csvfilenobuf;
+
      av[ac] = NULL;
      Assert(ac < lengthof(av));

*************** syslogger_parseArgs(int argc, char *argv
*** 736,744 ****
  {
      int            fd;

!     Assert(argc == 4);
      argv += 3;

  #ifndef WIN32
      fd = atoi(*argv++);
      if (fd != -1)
--- 781,796 ----
  {
      int            fd;

!     Assert(argc == 5);
      argv += 3;

+     /*
+      * Re-open the error output files that were opened by SysLogger_Start().
+      *
+      * We expect this will always succeed, which is too optimistic, but if it
+      * fails there's not a lot we can do to report the problem anyway.  As
+      * coded, we'll just crash on a null pointer dereference after failure...
+      */
  #ifndef WIN32
      fd = atoi(*argv++);
      if (fd != -1)
*************** syslogger_parseArgs(int argc, char *argv
*** 746,751 ****
--- 798,809 ----
          syslogFile = fdopen(fd, "a");
          setvbuf(syslogFile, NULL, PG_IOLBF, 0);
      }
+     fd = atoi(*argv++);
+     if (fd != -1)
+     {
+         csvlogFile = fdopen(fd, "a");
+         setvbuf(csvlogFile, NULL, PG_IOLBF, 0);
+     }
  #else                            /* WIN32 */
      fd = atoi(*argv++);
      if (fd != 0)
*************** syslogger_parseArgs(int argc, char *argv
*** 757,762 ****
--- 815,830 ----
              setvbuf(syslogFile, NULL, PG_IOLBF, 0);
          }
      }
+     fd = atoi(*argv++);
+     if (fd != 0)
+     {
+         fd = _open_osfhandle(fd, _O_APPEND | _O_TEXT);
+         if (fd > 0)
+         {
+             csvlogFile = fdopen(fd, "a");
+             setvbuf(csvlogFile, NULL, PG_IOLBF, 0);
+         }
+     }
  #endif                            /* WIN32 */
  }
  #endif                            /* EXEC_BACKEND */
*************** write_syslogger_file(const char *buffer,
*** 998,1010 ****
      int            rc;
      FILE       *logfile;

!     if (destination == LOG_DESTINATION_CSVLOG && csvlogFile == NULL)
!         open_csvlogfile();

-     logfile = destination == LOG_DESTINATION_CSVLOG ? csvlogFile : syslogFile;
      rc = fwrite(buffer, 1, count, logfile);

!     /* can't use ereport here because of possible recursion */
      if (rc != count)
          write_stderr("could not write to log file: %s\n", strerror(errno));
  }
--- 1066,1091 ----
      int            rc;
      FILE       *logfile;

!     /*
!      * If we're told to write to csvlogFile, but it's not open, dump the data
!      * to syslogFile (which is always open) instead.  This can happen if CSV
!      * output is enabled after postmaster start and we've been unable to open
!      * csvlogFile.  There are also race conditions during a parameter change
!      * whereby backends might send us CSV output before we open csvlogFile or
!      * after we close it.  Writing CSV-formatted output to the regular log
!      * file isn't great, but it beats dropping log output on the floor.
!      */
!     logfile = (destination == LOG_DESTINATION_CSVLOG &&
!                csvlogFile != NULL) ? csvlogFile : syslogFile;

      rc = fwrite(buffer, 1, count, logfile);

!     /*
!      * Try to report any failure.  We mustn't use ereport because it would
!      * just recurse right back here, but write_stderr is OK: it will write
!      * either to the postmaster's original stderr, or to /dev/null, but never
!      * to our input pipe which would result in a different sort of looping.
!      */
      if (rc != count)
          write_stderr("could not write to log file: %s\n", strerror(errno));
  }
*************** pipeThread(void *arg)
*** 1088,1118 ****
  #endif                            /* WIN32 */

  /*
-  * Open the csv log file - we do this opportunistically, because
-  * we don't know if CSV logging will be wanted.
-  *
-  * This is only used the first time we open the csv log in a given syslogger
-  * process, not during rotations.  As with opening the main log file, we
-  * always append in this situation.
-  */
- static void
- open_csvlogfile(void)
- {
-     char       *filename;
-
-     filename = logfile_getname(time(NULL), ".csv");
-
-     csvlogFile = logfile_open(filename, "a", false);
-
-     if (last_csv_file_name != NULL) /* probably shouldn't happen */
-         pfree(last_csv_file_name);
-
-     last_csv_file_name = filename;
-
-     update_metainfo_datafile();
- }
-
- /*
   * Open a new logfile with proper permissions and buffering options.
   *
   * If allow_errors is true, we just log any open failure and return NULL
--- 1169,1174 ----
*************** logfile_rotate(bool time_based_rotation,
*** 1179,1185 ****
      else
          fntime = time(NULL);
      filename = logfile_getname(fntime, NULL);
!     if (csvlogFile != NULL)
          csvfilename = logfile_getname(fntime, ".csv");

      /*
--- 1235,1241 ----
      else
          fntime = time(NULL);
      filename = logfile_getname(fntime, NULL);
!     if (Log_destination & LOG_DESTINATION_CSVLOG)
          csvfilename = logfile_getname(fntime, ".csv");

      /*
*************** logfile_rotate(bool time_based_rotation,
*** 1231,1240 ****
          filename = NULL;
      }

!     /* Same as above, but for csv file. */
!
!     if (csvlogFile != NULL &&
!         (time_based_rotation || (size_rotation_for & LOG_DESTINATION_CSVLOG)))
      {
          if (Log_truncate_on_rotation && time_based_rotation &&
              last_csv_file_name != NULL &&
--- 1287,1302 ----
          filename = NULL;
      }

!     /*
!      * Same as above, but for csv file.  Note that if LOG_DESTINATION_CSVLOG
!      * was just turned on, we might have to open csvlogFile here though it was
!      * not open before.  In such a case we'll append not overwrite (since
!      * last_csv_file_name will be NULL); that is consistent with the normal
!      * rules since it's not a time-based rotation.
!      */
!     if ((Log_destination & LOG_DESTINATION_CSVLOG) &&
!         (csvlogFile == NULL ||
!          time_based_rotation || (size_rotation_for & LOG_DESTINATION_CSVLOG)))
      {
          if (Log_truncate_on_rotation && time_based_rotation &&
              last_csv_file_name != NULL &&
*************** logfile_rotate(bool time_based_rotation,
*** 1265,1271 ****
              return;
          }

!         fclose(csvlogFile);
          csvlogFile = fh;

          /* instead of pfree'ing filename, remember it for next time */
--- 1327,1334 ----
              return;
          }

!         if (csvlogFile != NULL)
!             fclose(csvlogFile);
          csvlogFile = fh;

          /* instead of pfree'ing filename, remember it for next time */
*************** logfile_rotate(bool time_based_rotation,
*** 1274,1279 ****
--- 1337,1352 ----
          last_csv_file_name = csvfilename;
          csvfilename = NULL;
      }
+     else if (!(Log_destination & LOG_DESTINATION_CSVLOG) &&
+              csvlogFile != NULL)
+     {
+         /* CSVLOG was just turned off, so close the old file */
+         fclose(csvlogFile);
+         csvlogFile = NULL;
+         if (last_csv_file_name != NULL)
+             pfree(last_csv_file_name);
+         last_csv_file_name = NULL;
+     }

      if (filename)
          pfree(filename);
diff --git a/src/backend/utils/error/elog.c b/src/backend/utils/error/elog.c
index 16531f7..eb09a91 100644
*** a/src/backend/utils/error/elog.c
--- b/src/backend/utils/error/elog.c
*************** void
*** 3676,3695 ****
  write_stderr(const char *fmt,...)
  {
      va_list        ap;
-
  #ifdef WIN32
      char        errbuf[2048];    /* Arbitrary size? */
  #endif

-     fmt = _(fmt);
-
      va_start(ap, fmt);
  #ifndef WIN32
      /* On Unix, we just fprintf to stderr */
!     vfprintf(stderr, fmt, ap);
      fflush(stderr);
  #else
!     vsnprintf(errbuf, sizeof(errbuf), fmt, ap);

      /*
       * On Win32, we print to stderr if running on a console, or write to
--- 3676,3692 ----
  write_stderr(const char *fmt,...)
  {
      va_list        ap;
  #ifdef WIN32
      char        errbuf[2048];    /* Arbitrary size? */
  #endif

      va_start(ap, fmt);
  #ifndef WIN32
      /* On Unix, we just fprintf to stderr */
!     vfprintf(stderr, _(fmt), ap);
      fflush(stderr);
  #else
!     vsnprintf(errbuf, sizeof(errbuf), _(fmt), ap);

      /*
       * On Win32, we print to stderr if running on a console, or write to

Re: csvlog gets crazy when csv file is not writable

From:
Dmitry Dolgov
Date:
> On Sun, 26 Aug 2018 at 00:39, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> At postmaster startup, we can open csvlogFile if requested and fail if
> we can't, just as for syslogFile.  And if we fail to make a rotation,
> it's okay to keep writing to the old file, just as for syslogFile.
> But we have to allow for Log_destination to change after startup,
> and that means we might need to open csvlogFile when it wasn't open
> before, and we can't just quit if we fail.

I agree, sounds great.

> What the attached patch does about that is to redirect CSV-format
> output to syslogFile if we get CSV output when csvlogFile isn't open.
> The only other plausible answer I can see is to drop such output on
> the floor, and that seems pretty unfriendly.

Yes, it's probably the only plausible answer for now. But then I would argue
that in an ideal world we would redirect the output that we can't write to
csvlogFile to syslogFile in non-csv format, which would be less surprising. It
would require moving the actual csv formatting logic into the syslogger, or
letting backends know somehow that the csv log is not available at the moment
(although I assume both options are not particularly realistic).


Re: csvlog gets crazy when csv file is not writable

From:
Tom Lane
Date:
Dmitry Dolgov <9erthalion6@gmail.com> writes:
>> On Sun, 26 Aug 2018 at 00:39, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> What the attached patch does about that is to redirect CSV-format
>> output to syslogFile if we get CSV output when csvlogFile isn't open.
>> The only other plausible answer I can see is to drop such output on
>> the floor, and that seems pretty unfriendly.

> Yes, probably it's the only plausible answer for now. But then I would argue
> that in the ideal world we will redirect output, that we can't write to
> csvlogFile, to syslogFile in non-csv format, which will be less surprising. It
> would require to move the actual csv formatting logic to the syslogger, or to
> let backends to know somehow that csv log is not available at this moment
> (although I assume both options are not particularly realistic).

Yeah, something like that is what would have to happen.  The second option
is a complete nonstarter though: (a) there's no feedback path and (b) even
with feedback, there'd be race conditions.  It's conceivable that we could
push the formatting work down to the syslogger; but I think that would be
a net loss in both reliability and performance --- remember the syslogger
is a bottleneck for logging activities, since it's inherently not
parallel.

Given that this is such a corner-case scenario, and one that generally
implies a DBA screwup that needs to be fixed, I don't want to expend a
lot of work and potentially reduce performance in order to make it
cleaner.

            regards, tom lane