Discussion: Parallel pg_dump's error reporting doesn't work worth squat


Parallel pg_dump's error reporting doesn't work worth squat

From
Tom Lane
Date:
I was in the process of testing the proposed patch for bug #13727,
and I found that, at least on my Linux box, this is the behavior
in the failure case without the patch:

$ pg_dump "postgres://postgres:phonypassword@localhost/regression" --jobs=9 -Fd -f testdump
$ echo $?
141
$ ls testdump
toc.dat

That is, the pg_dump process has crashed with a SIGPIPE without printing
any message whatsoever, and without coming anywhere near finishing the
dump.

A bit of investigation says that this is because somebody had the bright
idea that worker processes could report fatal errors back to the master
process instead of just printing them to stderr.  So when the workers
fail to establish connections (because of the password problem cited in
#13727), they don't tell me about that.  Oh no, they send those errors
back up to the pipe to the parent, and then die silently.  Meanwhile,
the parent is trying to send them commands, and since it doesn't protect
itself against SIGPIPE on the command pipes, it crashes without ever
reporting anything.  If you aren't paying close attention, you wouldn't
even realize you didn't get a completed dump.
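The failure mode is easy to reproduce in isolation. Here is a minimal,
hypothetical C sketch (the function name is made up; this is not pg_dump
code): writing to a pipe whose read end is already closed delivers SIGPIPE,
which with the default disposition kills the process silently, exactly as
described above. Ignoring the signal instead turns the failure into a plain
EPIPE return from write(), which can then be reported.

```c
#include <errno.h>
#include <signal.h>
#include <unistd.h>

/* Try to send a command down a pipe whose read end has already been
 * closed (i.e. the "worker" is dead).  Returns 0 on success, else the
 * errno from write().  Hypothetical demo, not pg_dump code. */
static int
send_to_dead_worker(void)
{
    int     fd[2];
    int     rc = 0;

    if (pipe(fd) != 0)
        return errno;
    close(fd[0]);               /* simulate the worker exiting */

    /* With SIGPIPE at its default, the write below would kill this
     * process before any message could be printed.  Ignoring it makes
     * write() fail with EPIPE instead. */
    signal(SIGPIPE, SIG_IGN);

    if (write(fd[1], "DUMP\n", 5) < 0)
        rc = errno;             /* expect EPIPE */
    close(fd[1]);
    return rc;
}
```

(Incidentally, the exit status 141 above is 128 + 13, i.e. death by
signal 13, SIGPIPE, as reported by the shell.)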

Depending on timing, this scheme might accidentally fail to fail, but it
seems fragile as can be.  I would bet that it's prone to deadlocks, quite
aside from the SIGPIPE problem.  Considering how amazingly ugly the
underlying code is (exit_horribly is in parallel.c now? Really?), I want
to rip it out entirely, not try to band-aid it by suppressing SIGPIPE ---
though likely we need to do that too.

Thoughts?
        regards, tom lane



Re: Parallel pg_dump's error reporting doesn't work worth squat

From
"Joshua D. Drake"
Date:
On 12/23/2015 10:16 AM, Tom Lane wrote:

> Depending on timing, this scheme might accidentally fail to fail, but it
> seems fragile as can be.  I would bet that it's prone to deadlocks, quite
> aside from the SIGPIPE problem.  Considering how amazingly ugly the
> underlying code is (exit_horribly is in parallel.c now? Really?), I want
> to rip it out entirely, not try to band-aid it by suppressing SIGPIPE ---
> though likely we need to do that too.
>
> Thoughts?

This is something that should work, period. It should be a showcase for 
the code we ship, because it shows how seriously we take data integrity 
(backups/restores, etc.).

+1 for turning it into something irrefutably as close to perfect as 
humans can produce.

Sincerely,

JD


-- 
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Announcing "I'm offended" is basically telling the world you can't
control your own emotions, so everyone else should do it for you.



Re: Parallel pg_dump's error reporting doesn't work worth squat

From
Tom Lane
Date:
[ getting back to a complaint I made in December ]

I wrote:
> ... the pg_dump process has crashed with a SIGPIPE without printing
> any message whatsoever, and without coming anywhere near finishing the
> dump.

> A bit of investigation says that this is because somebody had the bright
> idea that worker processes could report fatal errors back to the master
> process instead of just printing them to stderr.  So when the workers
> fail to establish connections (because of the password problem cited in
> #13727), they don't tell me about that.  Oh no, they send those errors
> back up to the pipe to the parent, and then die silently.  Meanwhile,
> the parent is trying to send them commands, and since it doesn't protect
> itself against SIGPIPE on the command pipes, it crashes without ever
> reporting anything.  If you aren't paying close attention, you wouldn't
> even realize you didn't get a completed dump.

Attached is a proposed patch for this.  It reverts exit_horribly() to
what it used to be pre-9.3, ie just print the message on stderr and die.
The master now notices child failure by observing EOF on the status pipe.
While that works automatically on Unix, we have to explicitly close the
child side of the pipe on Windows (could someone check that that works?).
I also fixed the parent side to ignore SIGPIPE earlier, so that it won't
just die if it was in the midst of sending to the child.
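For illustration, the EOF-based detection the patch moves to works roughly
like this (a sketch with made-up names, not the actual parallel.c code):
when a worker process exits for any reason, the kernel closes its end of
the status pipe, and once no write end remains open, the master's read()
returns 0.

```c
#include <unistd.h>

/* Sketch of EOF-based worker-death detection: read() on the status
 * pipe returns 0 exactly when every copy of the worker's write end has
 * been closed -- which process exit guarantees on Unix, and which the
 * patch arranges explicitly on Windows.  Names are illustrative. */
static int
worker_has_exited(int status_fd)
{
    char        buf[1];
    ssize_t     n = read(status_fd, buf, sizeof(buf));

    return n == 0;              /* EOF <=> no live writer remains */
}
```

The point is that the master needs no cooperation from a dying worker:
the EOF arrives whether the worker reported an error first or crashed
outright.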

BTW, after having spent the afternoon staring at it, I will assert with
confidence that pg_dump/parallel.c is an embarrassment to the project.
It is chock-full of misleading, out-of-date, and/or outright wrong
comments, useless or even wrong Asserts, ill-designed APIs, code that
works quite differently in Unix and Windows cases without a shred of
commentary about the fact, etc etc.  I have mostly resisted the temptation
to do cosmetic cleanup in the attached, but this code really needs some
editorial attention.

            regards, tom lane

diff --git a/src/bin/pg_dump/parallel.c b/src/bin/pg_dump/parallel.c
index 9167294..f650d3f 100644
*** a/src/bin/pg_dump/parallel.c
--- b/src/bin/pg_dump/parallel.c
*************** static ShutdownInformation shutdown_info
*** 77,84 ****
  static const char *modulename = gettext_noop("parallel archiver");

  static ParallelSlot *GetMyPSlot(ParallelState *pstate);
- static void parallel_msg_master(ParallelSlot *slot, const char *modulename,
-                     const char *fmt, va_list ap) pg_attribute_printf(3, 0);
  static void archive_close_connection(int code, void *arg);
  static void ShutdownWorkersHard(ParallelState *pstate);
  static void WaitForTerminatingWorkers(ParallelState *pstate);
--- 77,82 ----
*************** GetMyPSlot(ParallelState *pstate)
*** 163,227 ****
  }

  /*
-  * Fail and die, with a message to stderr.  Parameters as for write_msg.
-  *
-  * This is defined in parallel.c, because in parallel mode, things are more
-  * complicated. If the worker process does exit_horribly(), we forward its
-  * last words to the master process. The master process then does
-  * exit_horribly() with this error message itself and prints it normally.
-  * After printing the message, exit_horribly() on the master will shut down
-  * the remaining worker processes.
-  */
- void
- exit_horribly(const char *modulename, const char *fmt,...)
- {
-     va_list        ap;
-     ParallelState *pstate = shutdown_info.pstate;
-     ParallelSlot *slot;
-
-     va_start(ap, fmt);
-
-     if (pstate == NULL)
-     {
-         /* Not in parallel mode, just write to stderr */
-         vwrite_msg(modulename, fmt, ap);
-     }
-     else
-     {
-         slot = GetMyPSlot(pstate);
-
-         if (!slot)
-             /* We're the parent, just write the message out */
-             vwrite_msg(modulename, fmt, ap);
-         else
-             /* If we're a worker process, send the msg to the master process */
-             parallel_msg_master(slot, modulename, fmt, ap);
-     }
-
-     va_end(ap);
-
-     exit_nicely(1);
- }
-
- /* Sends the error message from the worker to the master process */
- static void
- parallel_msg_master(ParallelSlot *slot, const char *modulename,
-                     const char *fmt, va_list ap)
- {
-     char        buf[512];
-     int            pipefd[2];
-
-     pipefd[PIPE_READ] = slot->pipeRevRead;
-     pipefd[PIPE_WRITE] = slot->pipeRevWrite;
-
-     strcpy(buf, "ERROR ");
-     vsnprintf(buf + strlen("ERROR "),
-               sizeof(buf) - strlen("ERROR "), fmt, ap);
-
-     sendMessageToMaster(pipefd, buf);
- }
-
- /*
   * A thread-local version of getLocalPQExpBuffer().
   *
   * Non-reentrant but reduces memory leakage. (On Windows the memory leakage
--- 161,166 ----
*************** getThreadLocalPQExpBuffer(void)
*** 271,277 ****

  /*
   * pg_dump and pg_restore register the Archive pointer for the exit handler
!  * (called from exit_horribly). This function mainly exists so that we can
   * keep shutdown_info in file scope only.
   */
  void
--- 210,216 ----

  /*
   * pg_dump and pg_restore register the Archive pointer for the exit handler
!  * (called from exit_nicely). This function mainly exists so that we can
   * keep shutdown_info in file scope only.
   */
  void
*************** on_exit_close_archive(Archive *AHX)
*** 282,289 ****
  }

  /*
!  * This function can close archives in both the parallel and non-parallel
!  * case.
   */
  static void
  archive_close_connection(int code, void *arg)
--- 221,228 ----
  }

  /*
!  * on_exit_nicely handler for shutting down database connections and
!  * worker processes cleanly.
   */
  static void
  archive_close_connection(int code, void *arg)
*************** archive_close_connection(int code, void
*** 292,325 ****

      if (si->pstate)
      {
          ParallelSlot *slot = GetMyPSlot(si->pstate);

          if (!slot)
          {
              /*
!              * We're the master: We have already printed out the message
!              * passed to exit_horribly() either from the master itself or from
!              * a worker process. Now we need to close our own database
!              * connection (only open during parallel dump but not restore) and
!              * shut down the remaining workers.
               */
!             DisconnectDatabase(si->AHX);
  #ifndef WIN32

              /*
!              * Setting aborting to true switches to best-effort-mode
!              * (send/receive but ignore errors) in communicating with our
!              * workers.
               */
              aborting = true;
  #endif
              ShutdownWorkersHard(si->pstate);
          }
!         else if (slot->args->AH)
!             DisconnectDatabase(&(slot->args->AH->public));
      }
-     else if (si->AHX)
-         DisconnectDatabase(si->AHX);
  }

  /*
--- 231,283 ----

      if (si->pstate)
      {
+         /* In parallel mode, must figure out who we are */
          ParallelSlot *slot = GetMyPSlot(si->pstate);

          if (!slot)
          {
              /*
!              * We're the master.  Close our own database connection, if any,
!              * and then forcibly shut down workers.
               */
!             if (si->AHX)
!                 DisconnectDatabase(si->AHX);
!
  #ifndef WIN32

              /*
!              * Setting aborting to true shuts off error/warning messages that
!              * are no longer useful once we start killing workers.
               */
              aborting = true;
  #endif
              ShutdownWorkersHard(si->pstate);
          }
!         else
!         {
!             /*
!              * We're a worker.  Shut down our own DB connection if any.  On
!              * Windows, we also have to close our communication sockets, to
!              * emulate what will happen on Unix when the worker process exits.
!              * (Without this, if this is a premature exit, the master would
!              * fail to detect it because there would be no EOF condition on
!              * the other end of the pipe.)
!              */
!             if (slot->args->AH)
!                 DisconnectDatabase(&(slot->args->AH->public));
!
! #ifdef WIN32
!             closesocket(slot->pipeRevRead);
!             closesocket(slot->pipeRevWrite);
! #endif
!         }
!     }
!     else
!     {
!         /* Non-parallel operation: just kill the master DB connection */
!         if (si->AHX)
!             DisconnectDatabase(si->AHX);
      }
  }

  /*
*************** archive_close_connection(int code, void
*** 327,333 ****
   * threads to terminate as well (and not finish with their 70 GB table dump
   * first...). Now in UNIX we can just kill these processes, and let the signal
   * handler set wantAbort to 1. In Windows we set a termEvent and this serves
!  * as the signal for everyone to terminate.
   */
  void
  checkAborting(ArchiveHandle *AH)
--- 285,292 ----
   * threads to terminate as well (and not finish with their 70 GB table dump
   * first...). Now in UNIX we can just kill these processes, and let the signal
   * handler set wantAbort to 1. In Windows we set a termEvent and this serves
!  * as the signal for everyone to terminate.  We don't print any error message,
!  * that would just clutter the screen.
   */
  void
  checkAborting(ArchiveHandle *AH)
*************** checkAborting(ArchiveHandle *AH)
*** 337,343 ****
  #else
      if (wantAbort)
  #endif
!         exit_horribly(modulename, "worker is terminating\n");
  }

  /*
--- 296,302 ----
  #else
      if (wantAbort)
  #endif
!         exit_nicely(1);
  }

  /*
*************** ShutdownWorkersHard(ParallelState *pstat
*** 352,359 ****
  #ifndef WIN32
      int            i;

-     signal(SIGPIPE, SIG_IGN);
-
      /*
       * Close our write end of the sockets so that the workers know they can
       * exit.
--- 311,316 ----
*************** sigTermHandler(int signum)
*** 428,454 ****
  #endif

  /*
!  * This function is called by both UNIX and Windows variants to set up a
!  * worker process.
   */
  static void
  SetupWorker(ArchiveHandle *AH, int pipefd[2], int worker)
  {
      /*
       * Call the setup worker function that's defined in the ArchiveHandle.
-      *
-      * We get the raw connection only for the reason that we can close it
-      * properly when we shut down. This happens only that way when it is
-      * brought down because of an error.
       */
      (AH->SetupWorkerPtr) ((Archive *) AH);

      Assert(AH->connection != NULL);

      WaitForCommands(AH, pipefd);
-
-     closesocket(pipefd[PIPE_READ]);
-     closesocket(pipefd[PIPE_WRITE]);
  }

  #ifdef WIN32
--- 385,405 ----
  #endif

  /*
!  * This function is called by both UNIX and Windows variants to set up
!  * and run a worker process.  Caller should exit the process (or thread)
!  * upon return.
   */
  static void
  SetupWorker(ArchiveHandle *AH, int pipefd[2], int worker)
  {
      /*
       * Call the setup worker function that's defined in the ArchiveHandle.
       */
      (AH->SetupWorkerPtr) ((Archive *) AH);

      Assert(AH->connection != NULL);

      WaitForCommands(AH, pipefd);
  }

  #ifdef WIN32
*************** ParallelBackupStart(ArchiveHandle *AH)
*** 534,547 ****
          pstate->parallelSlot[i].args = (ParallelArgs *) pg_malloc(sizeof(ParallelArgs));
          pstate->parallelSlot[i].args->AH = NULL;
          pstate->parallelSlot[i].args->te = NULL;
  #ifdef WIN32
          /* Allocate a new structure for every worker */
          wi = (WorkerInfo *) pg_malloc(sizeof(WorkerInfo));

          wi->worker = i;
          wi->AH = AH;
!         wi->pipeRead = pstate->parallelSlot[i].pipeRevRead = pipeMW[PIPE_READ];
!         wi->pipeWrite = pstate->parallelSlot[i].pipeRevWrite = pipeWM[PIPE_WRITE];

          handle = _beginthreadex(NULL, 0, (void *) &init_spawned_worker_win32,
                                  wi, 0, &(pstate->parallelSlot[i].threadId));
--- 485,506 ----
          pstate->parallelSlot[i].args = (ParallelArgs *) pg_malloc(sizeof(ParallelArgs));
          pstate->parallelSlot[i].args->AH = NULL;
          pstate->parallelSlot[i].args->te = NULL;
+
+         /* master's ends of the pipes */
+         pstate->parallelSlot[i].pipeRead = pipeWM[PIPE_READ];
+         pstate->parallelSlot[i].pipeWrite = pipeMW[PIPE_WRITE];
+         /* child's ends of the pipes */
+         pstate->parallelSlot[i].pipeRevRead = pipeMW[PIPE_READ];
+         pstate->parallelSlot[i].pipeRevWrite = pipeWM[PIPE_WRITE];
+
  #ifdef WIN32
          /* Allocate a new structure for every worker */
          wi = (WorkerInfo *) pg_malloc(sizeof(WorkerInfo));

          wi->worker = i;
          wi->AH = AH;
!         wi->pipeRead = pipeMW[PIPE_READ];
!         wi->pipeWrite = pipeWM[PIPE_WRITE];

          handle = _beginthreadex(NULL, 0, (void *) &init_spawned_worker_win32,
                                  wi, 0, &(pstate->parallelSlot[i].threadId));
*************** ParallelBackupStart(ArchiveHandle *AH)
*** 557,571 ****
              pipefd[0] = pipeMW[PIPE_READ];
              pipefd[1] = pipeWM[PIPE_WRITE];

-             /*
-              * Store the fds for the reverse communication in pstate. Actually
-              * we only use this in case of an error and don't use pstate
-              * otherwise in the worker process. On Windows we write to the
-              * global pstate, in Unix we write to our process-local copy but
-              * that's also where we'd retrieve this information back from.
-              */
-             pstate->parallelSlot[i].pipeRevRead = pipefd[PIPE_READ];
-             pstate->parallelSlot[i].pipeRevWrite = pipefd[PIPE_WRITE];
              pstate->parallelSlot[i].pid = getpid();

              /*
--- 516,521 ----
*************** ParallelBackupStart(ArchiveHandle *AH)
*** 584,590 ****

              /*
               * Close all inherited fds for communication of the master with
!              * the other workers.
               */
              for (j = 0; j < i; j++)
              {
--- 534,540 ----

              /*
               * Close all inherited fds for communication of the master with
!              * previously-forked workers.
               */
              for (j = 0; j < i; j++)
              {
*************** ParallelBackupStart(ArchiveHandle *AH)
*** 612,622 ****

          pstate->parallelSlot[i].pid = pid;
  #endif
-
-         pstate->parallelSlot[i].pipeRead = pipeWM[PIPE_READ];
-         pstate->parallelSlot[i].pipeWrite = pipeMW[PIPE_WRITE];
      }

      return pstate;
  }

--- 562,577 ----

          pstate->parallelSlot[i].pid = pid;
  #endif
      }

+     /*
+      * Having forked off the workers, disable SIGPIPE so that master isn't
+      * killed if it tries to send a command to a dead worker.
+      */
+ #ifndef WIN32
+     signal(SIGPIPE, SIG_IGN);
+ #endif
+
      return pstate;
  }

*************** ListenToWorkers(ArchiveHandle *AH, Paral
*** 977,992 ****
          }
          else
              exit_horribly(modulename,
!                           "invalid message received from worker: %s\n", msg);
!     }
!     else if (messageStartsWith(msg, "ERROR "))
!     {
!         Assert(AH->format == archDirectory || AH->format == archCustom);
!         pstate->parallelSlot[worker].workerStatus = WRKR_TERMINATED;
!         exit_horribly(modulename, "%s", msg + strlen("ERROR "));
      }
      else
!         exit_horribly(modulename, "invalid message received from worker: %s\n", msg);

      /* both Unix and Win32 return pg_malloc()ed space, so we free it */
      free(msg);
--- 932,944 ----
          }
          else
              exit_horribly(modulename,
!                           "invalid message received from worker: \"%s\"\n",
!                           msg);
      }
      else
!         exit_horribly(modulename,
!                       "invalid message received from worker: \"%s\"\n",
!                       msg);

      /* both Unix and Win32 return pg_malloc()ed space, so we free it */
      free(msg);
diff --git a/src/bin/pg_dump/parallel.h b/src/bin/pg_dump/parallel.h
index 591653b..8d70428 100644
*** a/src/bin/pg_dump/parallel.h
--- b/src/bin/pg_dump/parallel.h
*************** typedef struct ParallelSlot
*** 42,50 ****
      ParallelArgs *args;
      T_WorkerStatus workerStatus;
      int            status;
!     int            pipeRead;
      int            pipeWrite;
!     int            pipeRevRead;
      int            pipeRevWrite;
  #ifdef WIN32
      uintptr_t    hThread;
--- 42,50 ----
      ParallelArgs *args;
      T_WorkerStatus workerStatus;
      int            status;
!     int            pipeRead;        /* master's end of the pipes */
      int            pipeWrite;
!     int            pipeRevRead;    /* child's end of the pipes */
      int            pipeRevWrite;
  #ifdef WIN32
      uintptr_t    hThread;
diff --git a/src/bin/pg_dump/pg_backup_utils.c b/src/bin/pg_dump/pg_backup_utils.c
index 0aa13cd..5d1d875 100644
*** a/src/bin/pg_dump/pg_backup_utils.c
--- b/src/bin/pg_dump/pg_backup_utils.c
*************** vwrite_msg(const char *modulename, const
*** 93,98 ****
--- 93,115 ----
      vfprintf(stderr, _(fmt), ap);
  }

+ /*
+  * Fail and die, with a message to stderr.  Parameters as for write_msg.
+  *
+  * Note that on_exit_nicely callbacks will get run.
+  */
+ void
+ exit_horribly(const char *modulename, const char *fmt,...)
+ {
+     va_list        ap;
+
+     va_start(ap, fmt);
+     vwrite_msg(modulename, fmt, ap);
+     va_end(ap);
+
+     exit_nicely(1);
+ }
+
  /* Register a callback to be run when exit_nicely is invoked. */
  void
  on_exit_nicely(on_exit_nicely_callback function, void *arg)
*************** on_exit_nicely(on_exit_nicely_callback f
*** 106,112 ****

  /*
   * Run accumulated on_exit_nicely callbacks in reverse order and then exit
!  * quietly.  This needs to be thread-safe.
   */
  void
  exit_nicely(int code)
--- 123,142 ----

  /*
   * Run accumulated on_exit_nicely callbacks in reverse order and then exit
!  * without printing any message.
!  *
!  * If running in a parallel worker thread on Windows, we only exit the thread,
!  * not the whole process.
!  *
!  * Note that in parallel operation on Windows, the callback(s) will be run
!  * by each thread since the list state is necessarily shared by all threads;
!  * each callback must contain logic to ensure it does only what's appropriate
!  * for its thread.  On Unix, callbacks are also run by each process, but only
!  * for callbacks established before we fork off the child processes.  (It'd
!  * be cleaner to reset the list after fork(), and let each child establish
!  * its own callbacks; but then the behavior would be completely inconsistent
!  * between Windows and Unix.  For now, just be sure to establish callbacks
!  * before forking to avoid inconsistency.)
   */
  void
  exit_nicely(int code)

Re: Parallel pg_dump's error reporting doesn't work worth squat

From
Tom Lane
Date:
I wrote:
>> ... the pg_dump process has crashed with a SIGPIPE without printing
>> any message whatsoever, and without coming anywhere near finishing the
>> dump.

> Attached is a proposed patch for this.  It reverts exit_horribly() to
> what it used to be pre-9.3, ie just print the message on stderr and die.
> The master now notices child failure by observing EOF on the status pipe.
> While that works automatically on Unix, we have to explicitly close the
> child side of the pipe on Windows (could someone check that that works?).
> I also fixed the parent side to ignore SIGPIPE earlier, so that it won't
> just die if it was in the midst of sending to the child.

Now that we're all back from PGCon ... does anyone wish to review this?
Or have an objection to treating it as a bug fix and patching all branches
back to 9.3?

> BTW, after having spent the afternoon staring at it, I will assert with
> confidence that pg_dump/parallel.c is an embarrassment to the project.

I've done a bit of work on a cosmetic cleanup patch, but don't have
anything to show yet.
        regards, tom lane



Re: Parallel pg_dump's error reporting doesn't work worth squat

From
Kyotaro HORIGUCHI
Date:
Hello,

# Note for the convenience of others: the commit fixing the original
# issue is 1aa41e3eae3746e05d0e23286ac740a9a6cee7df.

At Mon, 23 May 2016 13:40:30 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in <1336.1464025230@sss.pgh.pa.us>
> I wrote:
> >> ... the pg_dump process has crashed with a SIGPIPE without printing
> >> any message whatsoever, and without coming anywhere near finishing the
> >> dump.
> 
> > Attached is a proposed patch for this.  It reverts exit_horribly() to
> > what it used to be pre-9.3, ie just print the message on stderr and die.
> > The master now notices child failure by observing EOF on the status pipe.
> > While that works automatically on Unix, we have to explicitly close the
> > child side of the pipe on Windows (could someone check that that works?).
> > I also fixed the parent side to ignore SIGPIPE earlier, so that it won't
> > just die if it was in the midst of sending to the child.
> 
> Now that we're all back from PGCon ... does anyone wish to review this?
> Or have an objection to treating it as a bug fix and patching all branches
> back to 9.3?

FWIW, it seems to me a bug that should be fixed in the back
branches.

I tried this only on Linux (I'll try it on Windows afterward) and
got something like this.

pg_dump: [archiver (db)] pg_dump: [archiver (db)] connection to database "postgres" failed: fe_sendauth: no password supplied
connection to database "postgres" failed: fe_sendauth: no password supplied
pg_dump: [archiver (db)] connection to database "postgres" failed: fe_sendauth: no password supplied
<some repeats>
pg_dump: [parallel archiver] a worker process died unexpectedly
pg_dump: [archiver (db)] connection to database "postgres" failed: fe_sendauth: no password supplied
<some repeats>


Although the error messages from multiple children are cluttered,
that is probably inevitable, and it's better than their vanishing.

It is hard to tell apart the meanings of the module names enclosed
in '[]', but that is another issue.


In archive_close_connection, the following change implies that
si->AHX could be NULL there, as the existing non-parallel code path
already assumes. But it seems to be set to a valid value once for the
lifetime of si (= shutdown_info), before execution reaches there.

-            DisconnectDatabase(si->AHX);
+            if (si->AHX)
+                DisconnectDatabase(si->AHX);

Shouldn't archive_close_connection have an assertion (si->AHX !=
NULL) instead of checking "if (si->AHX)" in each path?

> > BTW, after having spent the afternoon staring at it, I will assert with
> > confidence that pg_dump/parallel.c is an embarrassment to the project.
> 
> I've done a bit of work on a cosmetic cleanup patch, but don't have
> anything to show yet.
> 
>             regards, tom lane

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center





Re: Parallel pg_dump's error reporting doesn't work worth squat

From
Tom Lane
Date:
Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> writes:
> Shouldn't archive_close_connection have an assertion (si->AHX !=
> NULL) instead of checking "if (si->AHX)" in each path?

I thought about that but didn't see any really good argument for it.
It'd be making that function dependent on the current behavior of
unrelated code, when it doesn't need to be.
        regards, tom lane



Re: Parallel pg_dump's error reporting doesn't work worth squat

From
Kyotaro HORIGUCHI
Date:
At Wed, 25 May 2016 00:15:57 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in <3149.1464149757@sss.pgh.pa.us>
> Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> writes:
> > Shouldn't archive_close_connection have an assertion (si->AHX !=
> > NULL) instead of checking "if (si->AHX)" in each path?
> 
> I thought about that but didn't see any really good argument for it.
> It'd be making that function dependent on the current behavior of
> unrelated code, when it doesn't need to be.

It's also fine with me.

I tried it on Windows 7/64 but, first of all, I was surprised to see
that the following command line gets an error; that would be fine,
though, because it is consistent with what is written in the manual.

| >pg_dump postgres://horiguti:hoge@localhost/postgres --jobs=9 -Fd -f testdump
| pg_dump: too many command-line arguments (first is "--jobs=9")
| Try "pg_dump --help" for more information.


Next, I got the following behavior for the following command, and
then a freeze. Maybe it stops at the same point as in the next
paragraph, but I'm not sure. The same thing occurs with this patch on
top of the current master, but it doesn't happen on Linux.

| >pg_dump -d postgres --jobs=9 -Fd -f testdump
| Password: <correct password>
| pg_dump: [archiver] WARNING: requested compression not available in this installation -- archive will be uncompressed
| pg_dump: [compress_io] not built with zlib support
| pg_dump: [parallel archiver] a worker process died unexpectedly
| pg_dump: [archiver (db)] query failed: ERROR:  invalid snapshot identifier: "00002299-1"
| pg_dump: [archiver (db)] query was: SET TRANSACTION SNAPSHOT '00002299-1'
| pg_dump: [archiver (db)] query failed: ERROR:  invalid snapshot identifier: "00002299-1"
| pg_dump: [archiver (db)] query was: SET TRANSACTION SNAPSHOT '00002299-1'
| pg_dump: [archiver (db)] query failed: ERROR:  invalid snapshot identifier: "00002299-1"
| pg_dump: [archiver (db)] query was: SET TRANSACTION SNAPSHOT '00002299-1'
| pg_dump: [archiver (db)] query failed: ERROR:  invalid snapshot identifier: "00002299-1"
| pg_dump: [archiver (db)] query was: SET TRANSACTION SNAPSHOT '00002299-1'
| pg_dump: [archiver (db)] query failed: ERROR:  invalid snapshot identifier: "00002299-1"
| pg_dump: [archiver (db)] query was: SET TRANSACTION SNAPSHOT '00002299-1'
| pg_dump: [archiver (db)] query failed: ERROR:  invalid snapshot identifier: "00002299-1"
| pg_dump: [archiver (db)] query was: SET TRANSACTION SNAPSHOT '00002299-1'


Third, I'm not sure about the details, but pg_dump showed the
following message and then froze until Ctrl-C. I think I had forgotten
to set a password for the user at the time. This doesn't seem to
occur with this patch on top of the current master.

| >pg_dump --jobs=9 -Fd -f testdump "postgres://horiguti:hoge@localhost/postgres"
| pg_dump: [archiver] WARNING: requested compression not available in this installation -- archive will be uncompressed
| pg_dump: [compress_io] not built with zlib support
| pg_dump: [parallel archiver] a worker process died unexpectedly
| ^C
| >

The main thread was stopped at WaitForMultipleObjects() around
parallel.c:361 (@master + this patch), but I haven't investigated it
any further.


Finally, I got the following expected result, which seems sane.

| >pg_dump --jobs=9 -Fd -f testdump "postgres://horiguti:hoge@localhost/postgres"
| pg_dump: [archiver] WARNING: requested compression not available in this installation -- archive will be uncompressed
| pg_dump: [archiver (db)] connection to database "postgres" failed: fe_sendauth:no password supplied
| pg_dump: [parallel archiver] a worker process died unexpectedly
| pg_dump: [archiver (db)] connection to database "postgres" failed: fe_sendauth:no password supplied
| pg_dump: [archiver (db)] connection to database "postgres" failed: fe_sendauth:no password supplied
| pg_dump: [archiver (db)] connection to database "postgres" failed: fe_sendauth:no password supplied
| pg_dump: [archiver (db)] connection to database "postgres" failed: fe_sendauth:no password supplied
| pg_dump: [archiver (db)] connection to database "postgres" failed: fe_sendauth:no password supplied
| pg_dump: [archiver (db)] connection to database "postgres" failed: fe_sendauth:no password supplied
| pg_dump: [archiver (db)] connection to database "postgres" failed: fe_sendauth:no password supplied
| pg_dump: [archiver (db)] connection to database "postgres" failed: fe_sendauth:no password supplied


I might be wrong about something, but pg_dump seems to have some
issues in its thread handling.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center





Re: Parallel pg_dump's error reporting doesn't work worth squat

From
Tom Lane
Date:
Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> writes:
> I tried it on Windows 7/64 but first of all, I'm surprised to see
> that the following command line gets an error but it would be
> fine because it is consistent with what is written in the manual.

> | >pg_dump postgres://horiguti:hoge@localhost/postgres --jobs=9 -Fd -f testdump
> | pg_dump: too many command-line arguments (first is "--jobs=9")
> | Try "pg_dump --help" for more information.

Where do you see an example like that?  We should fix it.  The only case
that is certain to work is switches before non-switch arguments, and so
we should not give any documentation examples in the other order.  On a
platform using GNU getopt(), getopt() will rearrange the argv[] array to
make such cases work, but non-GNU getopt() doesn't do that (and I don't
really trust the GNU version not to get confused, either).

> I might be wrong about this, but pg_dump seems to have some
> issues in thread handling.

Wouldn't surprise me ... there's a lot of code in there depending on
static variables, and we've only tried to thread-proof a few.  Still,
I think that's a separate issue from what this patch is addressing.
        regards, tom lane



Re: Parallel pg_dump's error reporting doesn't work worth squat

From
Kyotaro HORIGUCHI
Date:
At Wed, 25 May 2016 10:11:28 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in <24577.1464185488@sss.pgh.pa.us>
> Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> writes:
> > I tried it on Windows 7/64 but first of all, I'm surprised to see
> > that the following command line gets an error but it would be
> > fine because it is consistent with what is written in the manual.
> 
> > | >pg_dump postgres://horiguti:hoge@localhost/postgres --jobs=9 -Fd -f testdump
> > | pg_dump: too many command-line arguments (first is "--jobs=9")
> > | Try "pg_dump --help" for more information.
> 
> Where do you see an example like that?  We should fix it.

Sorry for the confusing sentence. The command line came from your
first mail starting this thread:p And "consistent with manual"
means that the command line results in an error (even if only on
Windows) since it differs from the documentation. No such example
appears in the documentation AFAICS.

https://www.postgresql.org/message-id/2458.1450894615@sss.pgh.pa.us
https://www.postgresql.org/docs/9.6/static/app-pgdump.html

>  The only case
> that is certain to work is switches before non-switch arguments, and so
> we should not give any documentation examples in the other order.  On a
> platform using GNU getopt(), getopt() will rearrange the argv[] array to
> make such cases work, but non-GNU getopt() doesn't do that (and I don't
> really trust the GNU version not to get confused, either).

Yeah, I learned that after reading port/getopt_long.c. But the error
message seems to say nothing about that (at least to me), and
those accustomed to GNU getopt's behavior will fail to get
the point of the message. The documentation also doesn't
explicitly mention the restriction; it is only implicitly
shown in the synopsis. How about something like the following
message? (The attached patch looks too dependent on the specific
behavior of our getopt_long.c, but doing it in getopt_long.c also
seems wrong..)

| >pg_dump hoge -f
| pg_dump: non-option arguments should not precede options.


> > I might be wrong about this, but pg_dump seems to have some
> > issues in thread handling.
> 
> Wouldn't surprise me ... there's a lot of code in there depending on
> static variables, and we've only tried to thread-proof a few.  Still,
> I think that's a separate issue from what this patch is addressing.

Sounds reasonable. I'll look into this further.
I see no other functional problems in this patch.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 1267afb..52e9094 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -546,9 +546,13 @@ main(int argc, char **argv)
 	/* Complain if any arguments remain */
 	if (optind < argc)
 	{
-		fprintf(stderr, _("%s: too many command-line arguments (first is \"%s\")\n"),
-				progname, argv[optind]);
-		fprintf(stderr, _("Try \"%s --help\" for more information.\n"),
+		if (optind > 0 && argv[optind - 1][0] != '-')
+			fprintf(stderr, _("%s: non-option arguments should not precede options.\n"),
+					progname);
+		else
+			fprintf(stderr, _("%s: too many command-line arguments (first is \"%s\")\n"),
+					progname, argv[optind]);
+		fprintf(stderr, _("Try \"%s --help\" for more information.\n"),
 				progname);
 		exit_nicely(1);
 	}

Re: Parallel pg_dump's error reporting doesn't work worth squat

From
Kyotaro HORIGUCHI
Date:
> Sounds reasonable. I'll look into this further.

I looked into that and found one problem in the patch.

> Next, I got the following behavior for the following command,
> then freeze. Maybe stopping at the same point with the next
> paragraph but I'm not sure. The same thing occurs with this patch
> on top of the current master but doesn't on Linux.

This occurs in the following steps.

1. One of the workers dies for some reason.  (InitCompressorZlib immediately goes into exit_horribly in this case)

2. The main thread detects in ListenToWorkers that the worker is dead.

3. ListenToWorkers calls exit_horribly then exit_nicely

4. exit_nicely calls archive_close_connection as a callback then  the callback calls ShutdownWorkersHard

5. ShutdownWorkersHard should close the write side of the pipe  but the patch skips it for WIN32.

So the attached patch, applied on top of the previous one, fixes that;
that is, pg_dump returns to the command prompt even in that case.

By the way, the reason for the "invalid snapshot identifier" is
that some worker threads try to use it after the connection of
the first worker is closed. Some of the workers succeeded
before the connection closed and remained listening to their
master, inhibiting termination of the entire process.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center
diff --git a/src/bin/pg_dump/parallel.c b/src/bin/pg_dump/parallel.c
index f650d3f..6c08426 100644
--- a/src/bin/pg_dump/parallel.c
+++ b/src/bin/pg_dump/parallel.c
@@ -308,7 +308,6 @@ checkAborting(ArchiveHandle *AH)
 static void
 ShutdownWorkersHard(ParallelState *pstate)
 {
-#ifndef WIN32
 	int			i;
 
 	/*
@@ -318,6 +317,7 @@ ShutdownWorkersHard(ParallelState *pstate)
 	for (i = 0; i < pstate->numWorkers; i++)
 		closesocket(pstate->parallelSlot[i].pipeWrite);
 
+#ifndef WIN32
 	for (i = 0; i < pstate->numWorkers; i++)
 		kill(pstate->parallelSlot[i].pid, SIGTERM);
 #else

Re: Parallel pg_dump's error reporting doesn't work worth squat

From
Tom Lane
Date:
Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> writes:
>> Next, I got the following behavior for the following command,
>> then freeze. Maybe stopping at the same point with the next
>> paragraph but I'm not sure. The same thing occurs with this patch
>> on top of the current master but doesn't on Linux.

> [ need to close command sockets in ShutdownWorkersHard ]

Hah.  I had made that same change in the cosmetic-cleanups patch I was
working on ... but I assumed it wasn't a live bug or we'd have heard
about it.

Pushed along with the comment improvements I'd made to that function.
        regards, tom lane



Re: Parallel pg_dump's error reporting doesn't work worth squat

From
Tom Lane
Date:
Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> writes:
> At Wed, 25 May 2016 10:11:28 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in <24577.1464185488@sss.pgh.pa.us>
>> The only case
>> that is certain to work is switches before non-switch arguments, and so
>> we should not give any documentation examples in the other order.  On a
>> platform using GNU getopt(), getopt() will rearrange the argv[] array to
>> make such cases work, but non-GNU getopt() doesn't do that (and I don't
>> really trust the GNU version not to get confused, either).

> Yeah, I learned that after reading port/getopt_long.c. But the error
> message seems to say nothing about that (at least to me), and
> those accustomed to GNU getopt's behavior will fail to get
> the point of the message. The documentation also doesn't
> explicitly mention the restriction; it is only implicitly
> shown in the synopsis. How about something like the following
> message? (The attached patch looks too dependent on the specific
> behavior of our getopt_long.c, but doing it in getopt_long.c also
> seems wrong..)

It's not a bad idea to try to improve our response to this situation,
but I think this patch needs more work.  I wonder in particular why
you're not basing the variant error message on the first unprocessed
argument starting with '-', rather than looking at the word before it.
Another thought is that the message is fairly unhelpful because it does
not show where in the arguments things went wrong.  Maybe something
more like
	if (argv[optind][0] == '-')
		fprintf(stderr, _("%s: misplaced option \"%s\": options must appear before non-option arguments\n"),
			progname, argv[optind]);
	else
		// existing message
 

In any case, if we wanted to do something about this scenario, we should
do it consistently across all our programs, not just pg_dump.  I count
25 appearances of that "too many command-line arguments" message...
        regards, tom lane



Re: Parallel pg_dump's error reporting doesn't work worth squat

From
Kyotaro HORIGUCHI
Date:
At Thu, 26 May 2016 12:15:29 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in <8273.1464279329@sss.pgh.pa.us>
> Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> writes:
> > At Wed, 25 May 2016 10:11:28 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in <24577.1464185488@sss.pgh.pa.us>
> >> The only case
> >> that is certain to work is switches before non-switch arguments, and so
> >> we should not give any documentation examples in the other order.  On a
> >> platform using GNU getopt(), getopt() will rearrange the argv[] array to
> >> make such cases work, but non-GNU getopt() doesn't do that (and I don't
> >> really trust the GNU version not to get confused, either).
> 
> > Yeah, I learned that after reading port/getopt_long.c. But the error
> > message seems to say nothing about that (at least to me), and
> > those accustomed to GNU getopt's behavior will fail to get
> > the point of the message. The documentation also doesn't
> > explicitly mention the restriction; it is only implicitly
> > shown in the synopsis. How about something like the following
> > message? (The attached patch looks too dependent on the specific
> > behavior of our getopt_long.c, but doing it in getopt_long.c also
> > seems wrong..)
> 
> It's not a bad idea to try to improve our response to this situation,
> but I think this patch needs more work.  I wonder in particular why
> you're not basing the variant error message on the first unprocessed
> argument starting with '-', rather than looking at the word before it.

Sorry, that was just carelessness on my part. It is correct to do
what you wrote above, and it is also the cause of my obscure
error message.

> Another thought is that the message is fairly unhelpful because it does
> not show where in the arguments things went wrong.  Maybe something
> more like
> 
>     if (argv[optind][0] == '-')
>         fprintf(stderr, _("%s: misplaced option \"%s\": options must appear before non-option arguments\n"),
>             progname, argv[optind]);
>     else
>         // existing message
> 
> In any case, if we wanted to do something about this scenario, we should
> do it consistently across all our programs, not just pg_dump.  I count
> 25 appearances of that "too many command-line arguments" message...

Although I suppose no one has ever complained about that before,
and the same thing will occur in other new programs even if
the existing programs are fixed now..

I'll consider this further for some time..

Thank you for the suggestion.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center





Re: Parallel pg_dump's error reporting doesn't work worth squat

From
Kyotaro HORIGUCHI
Date:
At Thu, 26 May 2016 10:53:47 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in <5237.1464274427@sss.pgh.pa.us>
> Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> writes:
> >> Next, I got the following behavior for the following command,
> >> then freeze. Maybe stopping at the same point with the next
> >> paragraph but I'm not sure. The same thing occurs with this patch
> >> on top of the current master but doesn't on Linux.
> 
> > [ need to close command sockets in ShutdownWorkersHard ]
> 
> Hah.  I had made that same change in the cosmetic-cleanups patch I was
> working on ... but I assumed it wasn't a live bug or we'd have heard
> about it.
> 
> Pushed along with the comment improvements I'd made to that function.

Thank you!

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center





Re: Parallel pg_dump's error reporting doesn't work worth squat

From
Tom Lane
Date:
Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> writes:
> By the way, the reason for the "invalid snapshot identifier" is
> that some worker threads try to use it after the connection of
> the first worker is closed.

... BTW, I don't quite see what the issue is there.  The snapshot is
exported from the master session, so errors in worker sessions should not
cause such failures in other workers.  And I don't see any such failure
when setting up a scenario that will cause a worker to fail on Linux.
The "invalid snapshot identifier" bleats would make sense if you had
gotten a server-side error (and transaction abort) in the master session,
but I don't see any evidence that that happened in that example.  Might be
worth seeing if that's reproducible.
        regards, tom lane



Re: Parallel pg_dump's error reporting doesn't work worth squat

From
Tom Lane
Date:
I wrote:
>> BTW, after having spent the afternoon staring at it, I will assert with
>> confidence that pg_dump/parallel.c is an embarrassment to the project.

> I've done a bit of work on a cosmetic cleanup patch, but don't have
> anything to show yet.

Attached is the threatened cosmetic cleanup of parallel.c.  As I went
through it, I found quite a few things not to like, but in this patch
I've mostly resisted the temptation to fix them immediately, and have
just tried to document what's there more accurately.

Aside from a major amount of comment-rewriting and a very small amount of
cosmetic code adjustment (mostly moving code for more clarity), this
patch changes these things:

* Rename SetupWorker() to RunWorker() to reflect what it actually does,
and remove its unused "worker" argument.

* Rename lockTableNoWait() to lockTableForWorker() for clarity, and move
the test for BLOBS into it instead of having an Assert that the caller
checked that.

* Don't bother with getThreadLocalPQExpBuffer() at all in non-Windows
builds; it was identical to getLocalPQExpBuffer() anyway, except for
being misleadingly named.

* Remove some completely-redundant or otherwise ill-considered Asserts.

* Fix incorrect "Assert(msgsize <= bufsize)" --- should be < bufsize.

* Fix missing socket close in one error exit from pgpipe().  This isn't
too exciting at present since we'll just curl up and die if it fails,
but might as well get it right.

I have some other, less-cosmetic, things I want to do here, but first
does anyone care to review this?

            regards, tom lane

diff --git a/src/bin/pg_dump/parallel.c b/src/bin/pg_dump/parallel.c
index c656ba5..1a52fae 100644
*** a/src/bin/pg_dump/parallel.c
--- b/src/bin/pg_dump/parallel.c
***************
*** 2,21 ****
   *
   * parallel.c
   *
!  *    Parallel support for the pg_dump archiver
   *
   * Portions Copyright (c) 1996-2016, PostgreSQL Global Development Group
   * Portions Copyright (c) 1994, Regents of the University of California
   *
-  *    The author is not responsible for loss or damages that may
-  *    result from its use.
-  *
   * IDENTIFICATION
   *        src/bin/pg_dump/parallel.c
   *
   *-------------------------------------------------------------------------
   */

  #include "postgres_fe.h"

  #include "parallel.h"
--- 2,62 ----
   *
   * parallel.c
   *
!  *    Parallel support for pg_dump and pg_restore
   *
   * Portions Copyright (c) 1996-2016, PostgreSQL Global Development Group
   * Portions Copyright (c) 1994, Regents of the University of California
   *
   * IDENTIFICATION
   *        src/bin/pg_dump/parallel.c
   *
   *-------------------------------------------------------------------------
   */

+ /*
+  * Parallel operation works like this:
+  *
+  * The original, master process calls ParallelBackupStart(), which forks off
+  * the desired number of worker processes, which each enter WaitForCommands().
+  *
+  * The master process dispatches an individual work item to one of the worker
+  * processes in DispatchJobForTocEntry().  That calls
+  * AH->MasterStartParallelItemPtr, a routine of the output format.  This
+  * function's arguments are the parents archive handle AH (containing the full
+  * catalog information), the TocEntry that the worker should work on and a
+  * T_Action value indicating whether this is a backup or a restore task.  The
+  * function simply converts the TocEntry assignment into a command string that
+  * is then sent over to the worker process. In the simplest case that would be
+  * something like "DUMP 1234", with 1234 being the TocEntry id.
+  *
+  * The worker process receives and decodes the command and passes it to the
+  * routine pointed to by AH->WorkerJobDumpPtr or AH->WorkerJobRestorePtr,
+  * which are routines of the current archive format.  That routine performs
+  * the required action (dump or restore) and returns a malloc'd status string.
+  * The status string is passed back to the master where it is interpreted by
+  * AH->MasterEndParallelItemPtr, another format-specific routine.  That
+  * function can update state or catalog information on the master's side,
+  * depending on the reply from the worker process.  In the end it returns a
+  * status code, which is 0 for successful execution.
+  *
+  * Remember that we have forked off the workers only after we have read in
+  * the catalog.  That's why our worker processes can also access the catalog
+  * information.  (In the Windows case, the workers are threads in the same
+  * process.  To avoid problems, they work with cloned copies of the Archive
+  * data structure; see init_spawned_worker_win32().)
+  *
+  * In the master process, the workerStatus field for each worker has one of
+  * the following values:
+  *        WRKR_IDLE: it's waiting for a command
+  *        WRKR_WORKING: it's been sent a command
+  *        WRKR_FINISHED: it's returned a result
+  *        WRKR_TERMINATED: process ended
+  * The FINISHED state indicates that the worker is idle, but we've not yet
+  * dealt with the status code it returned from the prior command.
+  * ReapWorkerStatus() extracts the unhandled command status value and sets
+  * the workerStatus back to WRKR_IDLE.
+  */
+
  #include "postgres_fe.h"

  #include "parallel.h"
***************
*** 30,44 ****
  #include <fcntl.h>
  #endif

  #define PIPE_READ                            0
  #define PIPE_WRITE                            1

- /* file-scope variables */
  #ifdef WIN32
- static unsigned int tMasterThreadId = 0;
- static HANDLE termEvent = INVALID_HANDLE_VALUE;
- static int    pgpipe(int handles[2]);
- static int    piperead(int s, char *buf, int len);

  /*
   * Structure to hold info passed by _beginthreadex() to the function it calls
--- 71,81 ----
  #include <fcntl.h>
  #endif

+ /* Mnemonic macros for indexing the fd array returned by pipe(2) */
  #define PIPE_READ                            0
  #define PIPE_WRITE                            1

  #ifdef WIN32

  /*
   * Structure to hold info passed by _beginthreadex() to the function it calls
*************** static int    piperead(int s, char *buf, in
*** 47,71 ****
  typedef struct
  {
      ArchiveHandle *AH;
-     int            worker;
      int            pipeRead;
      int            pipeWrite;
  } WorkerInfo;

  #define pipewrite(a,b,c)    send(a,b,c,0)
! #else
  /*
!  * aborting is only ever used in the master, the workers are fine with just
!  * wantAbort.
   */
  static bool aborting = false;
  static volatile sig_atomic_t wantAbort = 0;

  #define pgpipe(a)            pipe(a)
  #define piperead(a,b,c)        read(a,b,c)
  #define pipewrite(a,b,c)    write(a,b,c)
- #endif

  typedef struct ShutdownInformation
  {
      ParallelState *pstate;
--- 84,117 ----
  typedef struct
  {
      ArchiveHandle *AH;
      int            pipeRead;
      int            pipeWrite;
  } WorkerInfo;

+ /* Windows implementation of pipe access */
+ static int    pgpipe(int handles[2]);
+ static int    piperead(int s, char *buf, int len);
  #define pipewrite(a,b,c)    send(a,b,c,0)
!
! #else                            /* !WIN32 */
!
  /*
!  * Variables for handling signals.  aborting is only ever used in the master,
!  * the workers just need wantAbort.
   */
  static bool aborting = false;
  static volatile sig_atomic_t wantAbort = 0;

+ /* Non-Windows implementation of pipe access */
  #define pgpipe(a)            pipe(a)
  #define piperead(a,b,c)        read(a,b,c)
  #define pipewrite(a,b,c)    write(a,b,c)

+ #endif   /* WIN32 */
+
+ /*
+  * State info for archive_close_connection() shutdown callback.
+  */
  typedef struct ShutdownInformation
  {
      ParallelState *pstate;
*************** typedef struct ShutdownInformation
*** 74,93 ****

  static ShutdownInformation shutdown_info;

  static const char *modulename = gettext_noop("parallel archiver");

  static ParallelSlot *GetMyPSlot(ParallelState *pstate);
  static void archive_close_connection(int code, void *arg);
  static void ShutdownWorkersHard(ParallelState *pstate);
  static void WaitForTerminatingWorkers(ParallelState *pstate);
!
! #ifndef WIN32
! static void sigTermHandler(int signum);
! #endif
! static void SetupWorker(ArchiveHandle *AH, int pipefd[2], int worker);
  static bool HasEveryWorkerTerminated(ParallelState *pstate);
!
! static void lockTableNoWait(ArchiveHandle *AH, TocEntry *te);
  static void WaitForCommands(ArchiveHandle *AH, int pipefd[2]);
  static char *getMessageFromMaster(int pipefd[2]);
  static void sendMessageToMaster(int pipefd[2], const char *str);
--- 120,146 ----

  static ShutdownInformation shutdown_info;

+ #ifdef WIN32
+ /* file-scope variables */
+ static unsigned int tMasterThreadId = 0;
+ static HANDLE termEvent = INVALID_HANDLE_VALUE;
+ static DWORD tls_index;
+
+ /* globally visible variables (needed by exit_nicely) */
+ bool        parallel_init_done = false;
+ DWORD        mainThreadId;
+ #endif   /* WIN32 */
+
  static const char *modulename = gettext_noop("parallel archiver");

+ /* Local function prototypes */
  static ParallelSlot *GetMyPSlot(ParallelState *pstate);
  static void archive_close_connection(int code, void *arg);
  static void ShutdownWorkersHard(ParallelState *pstate);
  static void WaitForTerminatingWorkers(ParallelState *pstate);
! static void RunWorker(ArchiveHandle *AH, int pipefd[2]);
  static bool HasEveryWorkerTerminated(ParallelState *pstate);
! static void lockTableForWorker(ArchiveHandle *AH, TocEntry *te);
  static void WaitForCommands(ArchiveHandle *AH, int pipefd[2]);
  static char *getMessageFromMaster(int pipefd[2]);
  static void sendMessageToMaster(int pipefd[2], const char *str);
*************** static char *readMessageFromPipe(int fd)
*** 103,117 ****
  #define messageEquals(msg, pattern) \
      (strcmp(msg, pattern) == 0)

- #ifdef WIN32
- static void shutdown_parallel_dump_utils(int code, void *unused);
- bool        parallel_init_done = false;
- static DWORD tls_index;
- DWORD        mainThreadId;
- #endif
-

  #ifdef WIN32
  static void
  shutdown_parallel_dump_utils(int code, void *unused)
  {
--- 156,166 ----
  #define messageEquals(msg, pattern) \
      (strcmp(msg, pattern) == 0)


  #ifdef WIN32
+ /*
+  * Shutdown callback to clean up socket access
+  */
  static void
  shutdown_parallel_dump_utils(int code, void *unused)
  {
*************** shutdown_parallel_dump_utils(int code, v
*** 121,126 ****
--- 170,180 ----
  }
  #endif

+ /*
+  * Initialize parallel dump support --- should be called early in process
+  * startup.  (Currently, this is called whether or not we intend parallel
+  * activity.)
+  */
  void
  init_parallel_dump_utils(void)
  {
*************** init_parallel_dump_utils(void)
*** 130,161 ****
--- 184,226 ----
          WSADATA        wsaData;
          int            err;

+         /* Prepare for threaded operation */
          tls_index = TlsAlloc();
          mainThreadId = GetCurrentThreadId();
+
+         /* Initialize socket access */
          err = WSAStartup(MAKEWORD(2, 2), &wsaData);
          if (err != 0)
          {
              fprintf(stderr, _("%s: WSAStartup failed: %d\n"), progname, err);
              exit_nicely(1);
          }
+         /* ... and arrange to shut it down at exit */
          on_exit_nicely(shutdown_parallel_dump_utils, NULL);
          parallel_init_done = true;
      }
  #endif
  }

+ /*
+  * Find the ParallelSlot for the current worker process or thread.
+  *
+  * Returns NULL if no matching slot is found (this implies we're the master).
+  */
  static ParallelSlot *
  GetMyPSlot(ParallelState *pstate)
  {
      int            i;

      for (i = 0; i < pstate->numWorkers; i++)
+     {
  #ifdef WIN32
          if (pstate->parallelSlot[i].threadId == GetCurrentThreadId())
  #else
          if (pstate->parallelSlot[i].pid == getpid())
  #endif
              return &(pstate->parallelSlot[i]);
+     }

      return NULL;
  }
*************** GetMyPSlot(ParallelState *pstate)
*** 163,189 ****
  /*
   * A thread-local version of getLocalPQExpBuffer().
   *
!  * Non-reentrant but reduces memory leakage. (On Windows the memory leakage
!  * will be one buffer per thread, which is at least better than one per call).
   */
  static PQExpBuffer
  getThreadLocalPQExpBuffer(void)
  {
      /*
       * The Tls code goes awry if we use a static var, so we provide for both
!      * static and auto, and omit any use of the static var when using Tls.
       */
      static PQExpBuffer s_id_return = NULL;
      PQExpBuffer id_return;

- #ifdef WIN32
      if (parallel_init_done)
!         id_return = (PQExpBuffer) TlsGetValue(tls_index);        /* 0 when not set */
      else
          id_return = s_id_return;
- #else
-     id_return = s_id_return;
- #endif

      if (id_return)                /* first time through? */
      {
--- 228,252 ----
  /*
   * A thread-local version of getLocalPQExpBuffer().
   *
!  * Non-reentrant but reduces memory leakage: we'll consume one buffer per
!  * thread, which is much better than one per fmtId/fmtQualifiedId call.
   */
+ #ifdef WIN32
  static PQExpBuffer
  getThreadLocalPQExpBuffer(void)
  {
      /*
       * The Tls code goes awry if we use a static var, so we provide for both
!      * static and auto, and omit any use of the static var when using Tls. We
!      * rely on TlsGetValue() to return 0 if the value is not yet set.
       */
      static PQExpBuffer s_id_return = NULL;
      PQExpBuffer id_return;

      if (parallel_init_done)
!         id_return = (PQExpBuffer) TlsGetValue(tls_index);
      else
          id_return = s_id_return;

      if (id_return)                /* first time through? */
      {
*************** getThreadLocalPQExpBuffer(void)
*** 194,217 ****
      {
          /* new buffer */
          id_return = createPQExpBuffer();
- #ifdef WIN32
          if (parallel_init_done)
              TlsSetValue(tls_index, id_return);
          else
              s_id_return = id_return;
- #else
-         s_id_return = id_return;
- #endif
-
      }

      return id_return;
  }

  /*
!  * pg_dump and pg_restore register the Archive pointer for the exit handler
!  * (called from exit_nicely). This function mainly exists so that we can
!  * keep shutdown_info in file scope only.
   */
  void
  on_exit_close_archive(Archive *AHX)
--- 257,275 ----
      {
          /* new buffer */
          id_return = createPQExpBuffer();
          if (parallel_init_done)
              TlsSetValue(tls_index, id_return);
          else
              s_id_return = id_return;
      }

      return id_return;
  }
+ #endif   /* WIN32 */

  /*
!  * pg_dump and pg_restore call this to register the cleanup handler
!  * as soon as they've created the ArchiveHandle.
   */
  void
  on_exit_close_archive(Archive *AHX)
*************** archive_close_connection(int code, void
*** 281,292 ****
  }

  /*
   * If we have one worker that terminates for some reason, we'd like the other
   * threads to terminate as well (and not finish with their 70 GB table dump
!  * first...). Now in UNIX we can just kill these processes, and let the signal
!  * handler set wantAbort to 1. In Windows we set a termEvent and this serves
!  * as the signal for everyone to terminate.  We don't print any error message,
!  * that would just clutter the screen.
   */
  void
  checkAborting(ArchiveHandle *AH)
--- 339,357 ----
  }

  /*
+  * Check to see if we've been told to abort, and exit the process/thread if
+  * so.  We don't print any error message; that would just clutter the screen.
+  *
   * If we have one worker that terminates for some reason, we'd like the other
   * threads to terminate as well (and not finish with their 70 GB table dump
!  * first...).  In Unix, the master sends SIGTERM and the worker's signal
!  * handler sets wantAbort to 1.  In Windows we set a termEvent and this serves
!  * as the signal for worker threads to exit.  Note that while we check this
!  * fairly frequently during data transfers, an idle worker doesn't come here
!  * at all, so additional measures are needed to force shutdown.
!  *
!  * XXX in parallel restore, slow server-side operations like CREATE INDEX
!  * are not interrupted by anything we do here.  This needs more work.
   */
  void
  checkAborting(ArchiveHandle *AH)
*************** checkAborting(ArchiveHandle *AH)
*** 300,306 ****
  }

  /*
!  * Shut down any remaining workers, waiting for them to finish.
   */
  static void
  ShutdownWorkersHard(ParallelState *pstate)
--- 365,371 ----
  }

  /*
!  * Forcibly shut down any remaining workers, waiting for them to finish.
   */
  static void
  ShutdownWorkersHard(ParallelState *pstate)
*************** WaitForTerminatingWorkers(ParallelState
*** 392,401 ****
      }
  }

  #ifndef WIN32
- /* Signal handling (UNIX only) */
  static void
! sigTermHandler(int signum)
  {
      wantAbort = 1;
  }
--- 457,468 ----
      }
  }

+ /*
+  * Signal handler (UNIX only)
+  */
  #ifndef WIN32
  static void
! sigTermHandler(SIGNAL_ARGS)
  {
      wantAbort = 1;
  }
*************** sigTermHandler(int signum)
*** 407,413 ****
   * upon return.
   */
  static void
! SetupWorker(ArchiveHandle *AH, int pipefd[2], int worker)
  {
      /*
       * Call the setup worker function that's defined in the ArchiveHandle.
--- 474,480 ----
   * upon return.
   */
  static void
! RunWorker(ArchiveHandle *AH, int pipefd[2])
  {
      /*
       * Call the setup worker function that's defined in the ArchiveHandle.
*************** SetupWorker(ArchiveHandle *AH, int pipef
*** 416,447 ****

      Assert(AH->connection != NULL);

      WaitForCommands(AH, pipefd);
  }

  #ifdef WIN32
  static unsigned __stdcall
  init_spawned_worker_win32(WorkerInfo *wi)
  {
      ArchiveHandle *AH;
      int            pipefd[2] = {wi->pipeRead, wi->pipeWrite};
-     int            worker = wi->worker;

      AH = CloneArchive(wi->AH);

      free(wi);
-     SetupWorker(AH, pipefd, worker);

      DeCloneArchive(AH);
      _endthreadex(0);
      return 0;
  }
! #endif

  /*
!  * This function starts the parallel dump or restore by spawning off the
!  * worker processes in both Unix and Windows. For Windows, it creates a number
!  * of threads while it does a fork() on Unix.
   */
  ParallelState *
  ParallelBackupStart(ArchiveHandle *AH)
--- 483,526 ----

      Assert(AH->connection != NULL);

+     /*
+      * Execute commands until done.
+      */
      WaitForCommands(AH, pipefd);
  }

+ /*
+  * Thread base function for Windows
+  */
  #ifdef WIN32
  static unsigned __stdcall
  init_spawned_worker_win32(WorkerInfo *wi)
  {
      ArchiveHandle *AH;
      int            pipefd[2] = {wi->pipeRead, wi->pipeWrite};

+     /*
+      * Clone the archive so that we have our own state to work with, and in
+      * particular our own database connection.
+      */
      AH = CloneArchive(wi->AH);

      free(wi);

+     /* Run the worker ... */
+     RunWorker(AH, pipefd);
+
+     /* Clean up and exit the thread */
      DeCloneArchive(AH);
      _endthreadex(0);
      return 0;
  }
! #endif   /* WIN32 */

  /*
!  * This function starts a parallel dump or restore by spawning off the worker
!  * processes.  For Windows, it creates a number of threads; on Unix the
!  * workers are created with fork().
   */
  ParallelState *
  ParallelBackupStart(ArchiveHandle *AH)
*************** ParallelBackupStart(ArchiveHandle *AH)
*** 471,487 ****
       * set and falls back to AHX otherwise.
       */
      shutdown_info.pstate = pstate;
-     getLocalPQExpBuffer = getThreadLocalPQExpBuffer;

  #ifdef WIN32
      tMasterThreadId = GetCurrentThreadId();
      termEvent = CreateEvent(NULL, true, false, "Terminate");
  #else
      signal(SIGTERM, sigTermHandler);
      signal(SIGINT, sigTermHandler);
      signal(SIGQUIT, sigTermHandler);
  #endif

      for (i = 0; i < pstate->numWorkers; i++)
      {
  #ifdef WIN32
--- 550,570 ----
       * set and falls back to AHX otherwise.
       */
      shutdown_info.pstate = pstate;

  #ifdef WIN32
+     /* Set up thread management state */
      tMasterThreadId = GetCurrentThreadId();
      termEvent = CreateEvent(NULL, true, false, "Terminate");
+     /* Make fmtId() and fmtQualifiedId() use thread-local storage */
+     getLocalPQExpBuffer = getThreadLocalPQExpBuffer;
  #else
+     /* Set up signal handling state */
      signal(SIGTERM, sigTermHandler);
      signal(SIGINT, sigTermHandler);
      signal(SIGQUIT, sigTermHandler);
  #endif

+     /* Create desired number of workers */
      for (i = 0; i < pstate->numWorkers; i++)
      {
  #ifdef WIN32
*************** ParallelBackupStart(ArchiveHandle *AH)
*** 493,498 ****
--- 576,582 ----
          int            pipeMW[2],
                      pipeWM[2];

+         /* Create communication pipes for this worker */
          if (pgpipe(pipeMW) < 0 || pgpipe(pipeWM) < 0)
              exit_horribly(modulename,
                            "could not create communication channels: %s\n",
*************** ParallelBackupStart(ArchiveHandle *AH)
*** 511,520 ****
          pstate->parallelSlot[i].pipeRevWrite = pipeWM[PIPE_WRITE];

  #ifdef WIN32
!         /* Allocate a new structure for every worker */
          wi = (WorkerInfo *) pg_malloc(sizeof(WorkerInfo));

-         wi->worker = i;
          wi->AH = AH;
          wi->pipeRead = pipeMW[PIPE_READ];
          wi->pipeWrite = pipeWM[PIPE_WRITE];
--- 595,603 ----
          pstate->parallelSlot[i].pipeRevWrite = pipeWM[PIPE_WRITE];

  #ifdef WIN32
!         /* Create transient structure to pass args to worker function */
          wi = (WorkerInfo *) pg_malloc(sizeof(WorkerInfo));

          wi->AH = AH;
          wi->pipeRead = pipeMW[PIPE_READ];
          wi->pipeWrite = pipeWM[PIPE_WRITE];
*************** ParallelBackupStart(ArchiveHandle *AH)
*** 522,528 ****
          handle = _beginthreadex(NULL, 0, (void *) &init_spawned_worker_win32,
                                  wi, 0, &(pstate->parallelSlot[i].threadId));
          pstate->parallelSlot[i].hThread = handle;
! #else
          pid = fork();
          if (pid == 0)
          {
--- 605,611 ----
          handle = _beginthreadex(NULL, 0, (void *) &init_spawned_worker_win32,
                                  wi, 0, &(pstate->parallelSlot[i].threadId));
          pstate->parallelSlot[i].hThread = handle;
! #else                            /* !WIN32 */
          pid = fork();
          if (pid == 0)
          {
*************** ParallelBackupStart(ArchiveHandle *AH)
*** 535,549 ****

              pstate->parallelSlot[i].pid = getpid();

-             /*
-              * Call CloneArchive on Unix as well even though technically we
-              * don't need to because fork() gives us a copy in our own address
-              * space already. But CloneArchive resets the state information
-              * and also clones the database connection (for parallel dump)
-              * which both seem kinda helpful.
-              */
-             pstate->parallelSlot[i].args->AH = CloneArchive(AH);
-
              /* close read end of Worker -> Master */
              closesocket(pipeWM[PIPE_READ]);
              /* close write end of Master -> Worker */
--- 618,623 ----
*************** ParallelBackupStart(ArchiveHandle *AH)
*** 559,589 ****
                  closesocket(pstate->parallelSlot[j].pipeWrite);
              }

!             SetupWorker(pstate->parallelSlot[i].args->AH, pipefd, i);

              exit(0);
          }
          else if (pid < 0)
              /* fork failed */
              exit_horribly(modulename,
                            "could not create worker process: %s\n",
                            strerror(errno));

!         /* we are the Master, pid > 0 here */
!         Assert(pid > 0);

          /* close read end of Master -> Worker */
          closesocket(pipeMW[PIPE_READ]);
          /* close write end of Worker -> Master */
          closesocket(pipeWM[PIPE_WRITE]);
!
!         pstate->parallelSlot[i].pid = pid;
! #endif
      }

      /*
       * Having forked off the workers, disable SIGPIPE so that master isn't
!      * killed if it tries to send a command to a dead worker.
       */
  #ifndef WIN32
      signal(SIGPIPE, SIG_IGN);
--- 633,675 ----
                  closesocket(pstate->parallelSlot[j].pipeWrite);
              }

!             /*
!              * Call CloneArchive on Unix as well as Windows, even though
!              * technically we don't need to because fork() gives us a copy in
!              * our own address space already.  But CloneArchive resets the
!              * state information and also clones the database connection which
!              * both seem kinda helpful.
!              */
!             pstate->parallelSlot[i].args->AH = CloneArchive(AH);

+             /* Run the worker ... */
+             RunWorker(pstate->parallelSlot[i].args->AH, pipefd);
+
+             /* We can just exit(0) when done */
              exit(0);
          }
          else if (pid < 0)
+         {
              /* fork failed */
              exit_horribly(modulename,
                            "could not create worker process: %s\n",
                            strerror(errno));
+         }

!         /* In Master, after successful fork */
!         pstate->parallelSlot[i].pid = pid;

          /* close read end of Master -> Worker */
          closesocket(pipeMW[PIPE_READ]);
          /* close write end of Worker -> Master */
          closesocket(pipeWM[PIPE_WRITE]);
! #endif   /* WIN32 */
      }

      /*
       * Having forked off the workers, disable SIGPIPE so that master isn't
!      * killed if it tries to send a command to a dead worker.  We don't want
!      * the workers to inherit this setting, though.
       */
  #ifndef WIN32
      signal(SIGPIPE, SIG_IGN);
*************** ParallelBackupStart(ArchiveHandle *AH)
*** 593,691 ****
  }

  /*
!  * Tell all of our workers to terminate.
!  *
!  * Pretty straightforward routine, first we tell everyone to terminate, then
!  * we listen to the workers' replies and finally close the sockets that we
!  * have used for communication.
   */
  void
  ParallelBackupEnd(ArchiveHandle *AH, ParallelState *pstate)
  {
      int            i;

      if (pstate->numWorkers == 1)
          return;

      Assert(IsEveryWorkerIdle(pstate));

!     /* close the sockets so that the workers know they can exit */
      for (i = 0; i < pstate->numWorkers; i++)
      {
          closesocket(pstate->parallelSlot[i].pipeRead);
          closesocket(pstate->parallelSlot[i].pipeWrite);
      }
      WaitForTerminatingWorkers(pstate);

      /*
!      * Remove the pstate again, so the exit handler in the parent will now
!      * again fall back to closing AH->connection (if connected).
       */
      shutdown_info.pstate = NULL;

      free(pstate->parallelSlot);
      free(pstate);
  }

-
  /*
!  * The sequence is the following (for dump, similar for restore):
!  *
!  * The master process starts the parallel backup in ParllelBackupStart, this
!  * forks the worker processes which enter WaitForCommand().
!  *
!  * The master process dispatches an individual work item to one of the worker
!  * processes in DispatchJobForTocEntry(). It calls
!  * AH->MasterStartParallelItemPtr, a routine of the output format. This
!  * function's arguments are the parents archive handle AH (containing the full
!  * catalog information), the TocEntry that the worker should work on and a
!  * T_Action act indicating whether this is a backup or a restore item.  The
!  * function then converts the TocEntry assignment into a string that is then
!  * sent over to the worker process. In the simplest case that would be
!  * something like "DUMP 1234", with 1234 being the TocEntry id.
!  *
!  * The worker receives the message in the routine pointed to by
!  * WorkerJobDumpPtr or WorkerJobRestorePtr. These are also pointers to
!  * corresponding routines of the respective output format, e.g.
!  * _WorkerJobDumpDirectory().
!  *
!  * Remember that we have forked off the workers only after we have read in the
!  * catalog. That's why our worker processes can also access the catalog
!  * information. Now they re-translate the textual representation to a TocEntry
!  * on their side and do the required action (restore or dump).
!  *
!  * The result is again a textual string that is sent back to the master and is
!  * interpreted by AH->MasterEndParallelItemPtr. This function can update state
!  * or catalog information on the master's side, depending on the reply from
!  * the worker process. In the end it returns status which is 0 for successful
!  * execution.
!  *
!  * ---------------------------------------------------------------------
!  * Master                                    Worker
!  *
!  *                                            enters WaitForCommands()
!  * DispatchJobForTocEntry(...te...)
!  *
!  * [ Worker is IDLE ]
!  *
!  * arg = (MasterStartParallelItemPtr)()
!  * send: DUMP arg
!  *                                            receive: DUMP arg
!  *                                            str = (WorkerJobDumpPtr)(arg)
!  * [ Worker is WORKING ]                    ... gets te from arg ...
!  *                                            ... dump te ...
!  *                                            send: OK DUMP info
!  *
!  * In ListenToWorkers():
!  *
!  * [ Worker is FINISHED ]
!  * receive: OK DUMP info
!  * status = (MasterEndParallelItemPtr)(info)
   *
!  * In ReapWorkerStatus(&ptr):
!  * *ptr = status;
!  * [ Worker is IDLE ]
!  * ---------------------------------------------------------------------
   */
  void
  DispatchJobForTocEntry(ArchiveHandle *AH, ParallelState *pstate, TocEntry *te,
--- 679,723 ----
  }

  /*
!  * Close down a parallel dump or restore.
   */
  void
  ParallelBackupEnd(ArchiveHandle *AH, ParallelState *pstate)
  {
      int            i;

+     /* No work if non-parallel */
      if (pstate->numWorkers == 1)
          return;

+     /* There should not be any unfinished jobs */
      Assert(IsEveryWorkerIdle(pstate));

!     /* Close the sockets so that the workers know they can exit */
      for (i = 0; i < pstate->numWorkers; i++)
      {
          closesocket(pstate->parallelSlot[i].pipeRead);
          closesocket(pstate->parallelSlot[i].pipeWrite);
      }
+
+     /* Wait for them to exit */
      WaitForTerminatingWorkers(pstate);

      /*
!      * Unlink pstate from shutdown_info, so the exit handler will again fall
!      * back to closing AH->connection (if connected).
       */
      shutdown_info.pstate = NULL;

+     /* Release state (mere neatnik-ism, since we're about to terminate) */
      free(pstate->parallelSlot);
      free(pstate);
  }

  /*
!  * Dispatch a job to some free worker (caller must ensure there is one!)
   *
!  * te is the TocEntry to be processed, act is the action to be taken on it.
   */
  void
  DispatchJobForTocEntry(ArchiveHandle *AH, ParallelState *pstate, TocEntry *te,
*************** DispatchJobForTocEntry(ArchiveHandle *AH
*** 695,714 ****
      char       *arg;

      /* our caller makes sure that at least one worker is idle */
-     Assert(GetIdleWorker(pstate) != NO_SLOT);
      worker = GetIdleWorker(pstate);
      Assert(worker != NO_SLOT);

      arg = (AH->MasterStartParallelItemPtr) (AH, te, act);

      sendMessageToWorker(pstate, worker, arg);

      pstate->parallelSlot[worker].workerStatus = WRKR_WORKING;
      pstate->parallelSlot[worker].args->te = te;
  }

  /*
!  * Find the first free parallel slot (if any).
   */
  int
  GetIdleWorker(ParallelState *pstate)
--- 727,750 ----
      char       *arg;

      /* our caller makes sure that at least one worker is idle */
      worker = GetIdleWorker(pstate);
      Assert(worker != NO_SLOT);

+     /* Construct and send command string */
      arg = (AH->MasterStartParallelItemPtr) (AH, te, act);

      sendMessageToWorker(pstate, worker, arg);

+     /* XXX aren't we leaking the string here? (No, because it's static. Ick.) */
+
+     /* Remember worker is busy, and which TocEntry it's working on */
      pstate->parallelSlot[worker].workerStatus = WRKR_WORKING;
      pstate->parallelSlot[worker].args->te = te;
  }

  /*
!  * Find an idle worker and return its slot number.
!  * Return NO_SLOT if none are idle.
   */
  int
  GetIdleWorker(ParallelState *pstate)
*************** GetIdleWorker(ParallelState *pstate)
*** 716,728 ****
      int            i;

      for (i = 0; i < pstate->numWorkers; i++)
          if (pstate->parallelSlot[i].workerStatus == WRKR_IDLE)
              return i;
      return NO_SLOT;
  }

  /*
!  * Return true iff every worker process is in the WRKR_TERMINATED state.
   */
  static bool
  HasEveryWorkerTerminated(ParallelState *pstate)
--- 752,766 ----
      int            i;

      for (i = 0; i < pstate->numWorkers; i++)
+     {
          if (pstate->parallelSlot[i].workerStatus == WRKR_IDLE)
              return i;
+     }
      return NO_SLOT;
  }

  /*
!  * Return true iff every worker is in the WRKR_TERMINATED state.
   */
  static bool
  HasEveryWorkerTerminated(ParallelState *pstate)
*************** HasEveryWorkerTerminated(ParallelState *
*** 730,737 ****
--- 768,777 ----
      int            i;

      for (i = 0; i < pstate->numWorkers; i++)
+     {
          if (pstate->parallelSlot[i].workerStatus != WRKR_TERMINATED)
              return false;
+     }
      return true;
  }

*************** IsEveryWorkerIdle(ParallelState *pstate)
*** 744,782 ****
      int            i;

      for (i = 0; i < pstate->numWorkers; i++)
          if (pstate->parallelSlot[i].workerStatus != WRKR_IDLE)
              return false;
      return true;
  }

  /*
!  * ---------------------------------------------------------------------
!  * One danger of the parallel backup is a possible deadlock:
   *
   * 1) Master dumps the schema and locks all tables in ACCESS SHARE mode.
   * 2) Another process requests an ACCESS EXCLUSIVE lock (which is not granted
   *      because the master holds a conflicting ACCESS SHARE lock).
!  * 3) The worker process also requests an ACCESS SHARE lock to read the table.
!  *      The worker's not granted that lock but is enqueued behind the ACCESS
!  *      EXCLUSIVE lock request.
!  * ---------------------------------------------------------------------
   *
!  * Now what we do here is to just request a lock in ACCESS SHARE but with
!  * NOWAIT in the worker prior to touching the table. If we don't get the lock,
   * then we know that somebody else has requested an ACCESS EXCLUSIVE lock and
!  * are good to just fail the whole backup because we have detected a deadlock.
   */
  static void
! lockTableNoWait(ArchiveHandle *AH, TocEntry *te)
  {
      Archive    *AHX = (Archive *) AH;
      const char *qualId;
!     PQExpBuffer query = createPQExpBuffer();
      PGresult   *res;

!     Assert(AH->format == archDirectory);
!     Assert(strcmp(te->desc, "BLOBS") != 0);

      appendPQExpBuffer(query,
                        "SELECT pg_namespace.nspname,"
                        "       pg_class.relname "
--- 784,834 ----
      int            i;

      for (i = 0; i < pstate->numWorkers; i++)
+     {
          if (pstate->parallelSlot[i].workerStatus != WRKR_IDLE)
              return false;
+     }
      return true;
  }

  /*
!  * Acquire lock on a table to be dumped by a worker process.
!  *
!  * The master process is already holding an ACCESS SHARE lock.  Ordinarily
!  * it's no problem for a worker to get one too, but if anything else besides
!  * pg_dump is running, there's a possible deadlock:
   *
   * 1) Master dumps the schema and locks all tables in ACCESS SHARE mode.
   * 2) Another process requests an ACCESS EXCLUSIVE lock (which is not granted
   *      because the master holds a conflicting ACCESS SHARE lock).
!  * 3) A worker process also requests an ACCESS SHARE lock to read the table.
!  *      The worker is enqueued behind the ACCESS EXCLUSIVE lock request.
!  * 4) Now we have a deadlock, since the master is effectively waiting for
!  *      the worker.  The server cannot detect that, however.
   *
!  * To prevent an infinite wait, prior to touching a table in a worker, request
!  * a lock in ACCESS SHARE mode but with NOWAIT.  If we don't get the lock,
   * then we know that somebody else has requested an ACCESS EXCLUSIVE lock and
!  * so we have a deadlock.  We must fail the backup in that case.
   */
  static void
! lockTableForWorker(ArchiveHandle *AH, TocEntry *te)
  {
      Archive    *AHX = (Archive *) AH;
      const char *qualId;
!     PQExpBuffer query;
      PGresult   *res;

!     /* Nothing to do for BLOBS */
!     if (strcmp(te->desc, "BLOBS") == 0)
!         return;

+     query = createPQExpBuffer();
+
+     /*
+      * XXX this is an unbelievably expensive substitute for knowing how to dig
+      * a table name out of a TocEntry.
+      */
      appendPQExpBuffer(query,
                        "SELECT pg_namespace.nspname,"
                        "       pg_class.relname "
*************** lockTableNoWait(ArchiveHandle *AH, TocEn
*** 815,825 ****
  }

  /*
!  * That's the main routine for the worker.
!  * When it starts up it enters this routine and waits for commands from the
!  * master process. After having processed a command it comes back to here to
!  * wait for the next command. Finally it will receive a TERMINATE command and
!  * exit.
   */
  static void
  WaitForCommands(ArchiveHandle *AH, int pipefd[2])
--- 867,875 ----
  }

  /*
!  * WaitForCommands: main routine for a worker process.
!  *
!  * Read and execute commands from the master until we see EOF on the pipe.
   */
  static void
  WaitForCommands(ArchiveHandle *AH, int pipefd[2])
*************** WaitForCommands(ArchiveHandle *AH, int p
*** 827,839 ****
      char       *command;
      DumpId        dumpId;
      int            nBytes;
!     char       *str = NULL;
      TocEntry   *te;

      for (;;)
      {
          if (!(command = getMessageFromMaster(pipefd)))
          {
              PQfinish(AH->connection);
              AH->connection = NULL;
              return;
--- 877,890 ----
      char       *command;
      DumpId        dumpId;
      int            nBytes;
!     char       *str;
      TocEntry   *te;

      for (;;)
      {
          if (!(command = getMessageFromMaster(pipefd)))
          {
+             /* EOF ... clean up */
              PQfinish(AH->connection);
              AH->connection = NULL;
              return;
*************** WaitForCommands(ArchiveHandle *AH, int p
*** 841,895 ****

          if (messageStartsWith(command, "DUMP "))
          {
!             Assert(AH->format == archDirectory);
              sscanf(command + strlen("DUMP "), "%d%n", &dumpId, &nBytes);
              Assert(nBytes == strlen(command) - strlen("DUMP "));
-
              te = getTocEntryByDumpId(AH, dumpId);
              Assert(te != NULL);

!             /*
!              * Lock the table but with NOWAIT. Note that the parent is already
!              * holding a lock. If we cannot acquire another ACCESS SHARE MODE
!              * lock, then somebody else has requested an exclusive lock in the
!              * meantime.  lockTableNoWait dies in this case to prevent a
!              * deadlock.
!              */
!             if (strcmp(te->desc, "BLOBS") != 0)
!                 lockTableNoWait(AH, te);

!             /*
!              * The message we return here has been pg_malloc()ed and we are
!              * responsible for free()ing it.
!              */
              str = (AH->WorkerJobDumpPtr) (AH, te);
!             Assert(AH->connection != NULL);
              sendMessageToMaster(pipefd, str);
              free(str);
          }
          else if (messageStartsWith(command, "RESTORE "))
          {
!             Assert(AH->format == archDirectory || AH->format == archCustom);
!             Assert(AH->connection != NULL);
!
              sscanf(command + strlen("RESTORE "), "%d%n", &dumpId, &nBytes);
              Assert(nBytes == strlen(command) - strlen("RESTORE "));
-
              te = getTocEntryByDumpId(AH, dumpId);
              Assert(te != NULL);

!             /*
!              * The message we return here has been pg_malloc()ed and we are
!              * responsible for free()ing it.
!              */
              str = (AH->WorkerJobRestorePtr) (AH, te);
!             Assert(AH->connection != NULL);
              sendMessageToMaster(pipefd, str);
              free(str);
          }
          else
              exit_horribly(modulename,
!                        "unrecognized command on communication channel: %s\n",
                            command);

          /* command was pg_malloc'd and we are responsible for free()ing it. */
--- 892,935 ----

          if (messageStartsWith(command, "DUMP "))
          {
!             /* Decode the command */
              sscanf(command + strlen("DUMP "), "%d%n", &dumpId, &nBytes);
              Assert(nBytes == strlen(command) - strlen("DUMP "));
              te = getTocEntryByDumpId(AH, dumpId);
              Assert(te != NULL);

!             /* Acquire lock on this table within the worker's session */
!             lockTableForWorker(AH, te);

!             /* Perform the dump command */
              str = (AH->WorkerJobDumpPtr) (AH, te);
!
!             /* Return status to master */
              sendMessageToMaster(pipefd, str);
+
+             /* we are responsible for freeing the status string */
              free(str);
          }
          else if (messageStartsWith(command, "RESTORE "))
          {
!             /* Decode the command */
              sscanf(command + strlen("RESTORE "), "%d%n", &dumpId, &nBytes);
              Assert(nBytes == strlen(command) - strlen("RESTORE "));
              te = getTocEntryByDumpId(AH, dumpId);
              Assert(te != NULL);

!             /* Perform the restore command */
              str = (AH->WorkerJobRestorePtr) (AH, te);
!
!             /* Return status to master */
              sendMessageToMaster(pipefd, str);
+
+             /* we are responsible for freeing the status string */
              free(str);
          }
          else
              exit_horribly(modulename,
!                        "unrecognized command received from master: \"%s\"\n",
                            command);

          /* command was pg_malloc'd and we are responsible for free()ing it. */
*************** WaitForCommands(ArchiveHandle *AH, int p
*** 898,915 ****
  }

  /*
!  * ---------------------------------------------------------------------
!  * Note the status change:
   *
!  * DispatchJobForTocEntry        WRKR_IDLE -> WRKR_WORKING
!  * ListenToWorkers                WRKR_WORKING -> WRKR_FINISHED / WRKR_TERMINATED
!  * ReapWorkerStatus                WRKR_FINISHED -> WRKR_IDLE
!  * ---------------------------------------------------------------------
   *
!  * Just calling ReapWorkerStatus() when all workers are working might or might
!  * not give you an idle worker because you need to call ListenToWorkers() in
!  * between and only thereafter ReapWorkerStatus(). This is necessary in order
!  * to get and deal with the status (=result) of the worker's execution.
   */
  void
  ListenToWorkers(ArchiveHandle *AH, ParallelState *pstate, bool do_wait)
--- 938,958 ----
  }

  /*
!  * Check for status messages from workers.
   *
!  * If do_wait is true, wait to get a status message; otherwise, just return
!  * immediately if there is none available.
   *
!  * When we get a status message, we let MasterEndParallelItemPtr process it,
!  * then save the resulting status code and switch the worker's state to
!  * WRKR_FINISHED.  Later, caller must call ReapWorkerStatus() to verify
!  * that the status was "OK" and push the worker back to IDLE state.
!  *
!  * XXX Rube Goldberg would be proud of this API, but no one else should be.
!  *
!  * XXX is it worth checking for more than one status message per call?
!  * It seems somewhat unlikely that multiple workers would finish at exactly
!  * the same time.
   */
  void
  ListenToWorkers(ArchiveHandle *AH, ParallelState *pstate, bool do_wait)
*************** ListenToWorkers(ArchiveHandle *AH, Paral
*** 917,938 ****
      int            worker;
      char       *msg;

      msg = getMessageFromWorker(pstate, do_wait, &worker);

      if (!msg)
      {
          if (do_wait)
              exit_horribly(modulename, "a worker process died unexpectedly\n");
          return;
      }

      if (messageStartsWith(msg, "OK "))
      {
          char       *statusString;
-         TocEntry   *te;

-         pstate->parallelSlot[worker].workerStatus = WRKR_FINISHED;
-         te = pstate->parallelSlot[worker].args->te;
          if (messageStartsWith(msg, "OK RESTORE "))
          {
              statusString = msg + strlen("OK RESTORE ");
--- 960,982 ----
      int            worker;
      char       *msg;

+     /* Try to collect a status message */
      msg = getMessageFromWorker(pstate, do_wait, &worker);

      if (!msg)
      {
+         /* If do_wait is true, we must have detected EOF on some socket */
          if (do_wait)
              exit_horribly(modulename, "a worker process died unexpectedly\n");
          return;
      }

+     /* Process it and update our idea of the worker's status */
      if (messageStartsWith(msg, "OK "))
      {
+         TocEntry   *te = pstate->parallelSlot[worker].args->te;
          char       *statusString;

          if (messageStartsWith(msg, "OK RESTORE "))
          {
              statusString = msg + strlen("OK RESTORE ");
*************** ListenToWorkers(ArchiveHandle *AH, Paral
*** 951,972 ****
              exit_horribly(modulename,
                            "invalid message received from worker: \"%s\"\n",
                            msg);
      }
      else
          exit_horribly(modulename,
                        "invalid message received from worker: \"%s\"\n",
                        msg);

!     /* both Unix and Win32 return pg_malloc()ed space, so we free it */
      free(msg);
  }

  /*
!  * This function is executed in the master process.
   *
!  * This function is used to get the return value of a terminated worker
!  * process. If a process has terminated, its status is stored in *status and
!  * the id of the worker is returned.
   */
  int
  ReapWorkerStatus(ParallelState *pstate, int *status)
--- 995,1017 ----
              exit_horribly(modulename,
                            "invalid message received from worker: \"%s\"\n",
                            msg);
+         pstate->parallelSlot[worker].workerStatus = WRKR_FINISHED;
      }
      else
          exit_horribly(modulename,
                        "invalid message received from worker: \"%s\"\n",
                        msg);

!     /* Free the string returned from getMessageFromWorker */
      free(msg);
  }

  /*
!  * Check to see if any worker is in WRKR_FINISHED state.  If so,
!  * return its command status code into *status, reset it to IDLE state,
!  * and return its slot number.  Otherwise return NO_SLOT.
   *
!  * This function is executed in the master process.
   */
  int
  ReapWorkerStatus(ParallelState *pstate, int *status)
*************** ReapWorkerStatus(ParallelState *pstate,
*** 987,995 ****
  }

  /*
!  * This function is executed in the master process.
   *
!  * It looks for an idle worker process and only returns if there is one.
   */
  void
  EnsureIdleWorker(ArchiveHandle *AH, ParallelState *pstate)
--- 1032,1047 ----
  }

  /*
!  * Wait, if necessary, until we have at least one idle worker.
!  * Reap worker status as necessary to move FINISHED workers to IDLE state.
   *
!  * We assume that no extra processing is required when reaping a finished
!  * command, except for checking that the status was OK (zero).
!  * Caution: that assumption means that this function can only be used in
!  * parallel dump, not parallel restore, because the latter has a more
!  * complex set of rules about handling status.
!  *
!  * This function is executed in the master process.
   */
  void
  EnsureIdleWorker(ArchiveHandle *AH, ParallelState *pstate)
*************** EnsureIdleWorker(ArchiveHandle *AH, Para
*** 1029,1037 ****
  }

  /*
!  * This function is executed in the master process.
   *
!  * It waits for all workers to terminate.
   */
  void
  EnsureWorkersFinished(ArchiveHandle *AH, ParallelState *pstate)
--- 1081,1095 ----
  }

  /*
!  * Wait for all workers to be idle.
   *
!  * We assume that no extra processing is required when reaping a finished
!  * command, except for checking that the status was OK (zero).
!  * Caution: that assumption means that this function can only be used in
!  * parallel dump, not parallel restore, because the latter has a more
!  * complex set of rules about handling status.
!  *
!  * This function is executed in the master process.
   */
  void
  EnsureWorkersFinished(ArchiveHandle *AH, ParallelState *pstate)
*************** EnsureWorkersFinished(ArchiveHandle *AH,
*** 1053,1062 ****
  }

  /*
!  * This function is executed in the worker process.
   *
!  * It returns the next message on the communication channel, blocking until it
!  * becomes available.
   */
  static char *
  getMessageFromMaster(int pipefd[2])
--- 1111,1121 ----
  }

  /*
!  * Read one command message from the master, blocking if necessary
!  * until one is available, and return it as a malloc'd string.
!  * On EOF, return NULL.
   *
!  * This function is executed in worker processes.
   */
  static char *
  getMessageFromMaster(int pipefd[2])
*************** getMessageFromMaster(int pipefd[2])
*** 1065,1073 ****
  }

  /*
!  * This function is executed in the worker process.
   *
!  * It sends a message to the master on the communication channel.
   */
  static void
  sendMessageToMaster(int pipefd[2], const char *str)
--- 1124,1132 ----
  }

  /*
!  * Send a status message to the master.
   *
!  * This function is executed in worker processes.
   */
  static void
  sendMessageToMaster(int pipefd[2], const char *str)
*************** sendMessageToMaster(int pipefd[2], const
*** 1081,1089 ****
  }

  /*
!  * A select loop that repeats calling select until a descriptor in the read
!  * set becomes readable. On Windows we have to check for the termination event
!  * from time to time, on Unix we can just block forever.
   */
  static int
  select_loop(int maxFd, fd_set *workerset)
--- 1140,1147 ----
  }

  /*
!  * Wait until some descriptor in "workerset" becomes readable.
!  * Returns -1 on error, else the number of readable descriptors.
   */
  static int
  select_loop(int maxFd, fd_set *workerset)
*************** select_loop(int maxFd, fd_set *workerset
*** 1092,1104 ****
      fd_set        saveSet = *workerset;

  #ifdef WIN32
-     /* should always be the master */
-     Assert(tMasterThreadId == GetCurrentThreadId());
-
      for (;;)
      {
          /*
           * sleep a quarter of a second before checking if we should terminate.
           */
          struct timeval tv = {0, 250000};

--- 1150,1160 ----
      fd_set        saveSet = *workerset;

  #ifdef WIN32
      for (;;)
      {
          /*
           * sleep a quarter of a second before checking if we should terminate.
+          * XXX this certainly looks useless, why not just wait indefinitely?
           */
          struct timeval tv = {0, 250000};

*************** select_loop(int maxFd, fd_set *workerset
*** 1110,1117 ****
          if (i)
              break;
      }
! #else                            /* UNIX */
!
      for (;;)
      {
          *workerset = saveSet;
--- 1166,1172 ----
          if (i)
              break;
      }
! #else                            /* !WIN32 */
      for (;;)
      {
          *workerset = saveSet;
*************** select_loop(int maxFd, fd_set *workerset
*** 1131,1149 ****
              continue;
          break;
      }
! #endif

      return i;
  }


  /*
!  * This function is executed in the master process.
   *
!  * It returns the next message from the worker on the communication channel,
!  * optionally blocking (do_wait) until it becomes available.
   *
!  * The id of the worker is returned in *worker.
   */
  static char *
  getMessageFromWorker(ParallelState *pstate, bool do_wait, int *worker)
--- 1186,1210 ----
              continue;
          break;
      }
! #endif   /* WIN32 */

      return i;
  }


  /*
!  * Check for messages from worker processes.
   *
!  * If a message is available, return it as a malloc'd string, and put the
!  * index of the sending worker in *worker.
   *
!  * If nothing is available, wait if "do_wait" is true, else return NULL.
!  *
!  * If we detect EOF on any socket, we'll return NULL.  It's not great that
!  * that's hard to distinguish from the no-data-available case, but for now
!  * our one caller is okay with that.
!  *
!  * This function is executed in the master process.
   */
  static char *
  getMessageFromWorker(ParallelState *pstate, bool do_wait, int *worker)
*************** getMessageFromWorker(ParallelState *psta
*** 1153,1166 ****
      int            maxFd = -1;
      struct timeval nowait = {0, 0};

      FD_ZERO(&workerset);
-
      for (i = 0; i < pstate->numWorkers; i++)
      {
          if (pstate->parallelSlot[i].workerStatus == WRKR_TERMINATED)
              continue;
          FD_SET(pstate->parallelSlot[i].pipeRead, &workerset);
-         /* actually WIN32 ignores the first parameter to select()... */
          if (pstate->parallelSlot[i].pipeRead > maxFd)
              maxFd = pstate->parallelSlot[i].pipeRead;
      }
--- 1214,1226 ----
      int            maxFd = -1;
      struct timeval nowait = {0, 0};

+     /* construct bitmap of socket descriptors for select() */
      FD_ZERO(&workerset);
      for (i = 0; i < pstate->numWorkers; i++)
      {
          if (pstate->parallelSlot[i].workerStatus == WRKR_TERMINATED)
              continue;
          FD_SET(pstate->parallelSlot[i].pipeRead, &workerset);
          if (pstate->parallelSlot[i].pipeRead > maxFd)
              maxFd = pstate->parallelSlot[i].pipeRead;
      }
*************** getMessageFromWorker(ParallelState *psta
*** 1177,1183 ****
      }

      if (i < 0)
!         exit_horribly(modulename, "error in ListenToWorkers(): %s\n", strerror(errno));

      for (i = 0; i < pstate->numWorkers; i++)
      {
--- 1237,1243 ----
      }

      if (i < 0)
!         exit_horribly(modulename, "select() failed: %s\n", strerror(errno));

      for (i = 0; i < pstate->numWorkers; i++)
      {
*************** getMessageFromWorker(ParallelState *psta
*** 1186,1191 ****
--- 1246,1261 ----
          if (!FD_ISSET(pstate->parallelSlot[i].pipeRead, &workerset))
              continue;

+         /*
+          * Read the message if any.  If the socket is ready because of EOF,
+          * we'll return NULL instead (and the socket will stay ready, so the
+          * condition will persist).
+          *
+          * Note: because this is a blocking read, we'll wait if only part of
+          * the message is available.  Waiting a long time would be bad, but
+          * since worker status messages are short and are always sent in one
+          * operation, it shouldn't be a problem in practice.
+          */
          msg = readMessageFromPipe(pstate->parallelSlot[i].pipeRead);
          *worker = i;
          return msg;
*************** getMessageFromWorker(ParallelState *psta
*** 1195,1203 ****
  }

  /*
!  * This function is executed in the master process.
   *
!  * It sends a message to a certain worker on the communication channel.
   */
  static void
  sendMessageToWorker(ParallelState *pstate, int worker, const char *str)
--- 1265,1273 ----
  }

  /*
!  * Send a command message to the specified worker process.
   *
!  * This function is executed in the master process.
   */
  static void
  sendMessageToWorker(ParallelState *pstate, int worker, const char *str)
*************** sendMessageToWorker(ParallelState *pstat
*** 1208,1214 ****
      {
          /*
           * If we're already aborting anyway, don't care if we succeed or not.
!          * The child might have gone already.
           */
  #ifndef WIN32
          if (!aborting)
--- 1278,1285 ----
      {
          /*
           * If we're already aborting anyway, don't care if we succeed or not.
!          * The child might have gone already.  (XXX but if we're aborting
!          * already, why are we here at all?)
           */
  #ifndef WIN32
          if (!aborting)
*************** sendMessageToWorker(ParallelState *pstat
*** 1220,1227 ****
  }

  /*
!  * The underlying function to read a message from the communication channel
!  * (fd) with optional blocking (do_wait).
   */
  static char *
  readMessageFromPipe(int fd)
--- 1291,1301 ----
  }

  /*
!  * Read one message from the specified pipe (fd), blocking if necessary
!  * until one is available, and return it as a malloc'd string.
!  * On EOF, return NULL.
!  *
!  * A "message" on the channel is just a null-terminated string.
   */
  static char *
  readMessageFromPipe(int fd)
*************** readMessageFromPipe(int fd)
*** 1232,1290 ****
      int            ret;

      /*
!      * The problem here is that we need to deal with several possibilities: we
!      * could receive only a partial message or several messages at once. The
!      * caller expects us to return exactly one message however.
!      *
!      * We could either read in as much as we can and keep track of what we
!      * delivered back to the caller or we just read byte by byte. Once we see
!      * (char) 0, we know that it's the message's end. This would be quite
!      * inefficient for more data but since we are reading only on the command
!      * channel, the performance loss does not seem worth the trouble of
!      * keeping internal states for different file descriptors.
       */
      bufsize = 64;                /* could be any number */
      msg = (char *) pg_malloc(bufsize);
-
      msgsize = 0;
      for (;;)
      {
!         Assert(msgsize <= bufsize);
          ret = piperead(fd, msg + msgsize, 1);
-
-         /* worker has closed the connection or another error happened */
          if (ret <= 0)
!             break;

          Assert(ret == 1);

          if (msg[msgsize] == '\0')
!             return msg;

          msgsize++;
!         if (msgsize == bufsize)
          {
!             /* could be any number */
!             bufsize += 16;
              msg = (char *) pg_realloc(msg, bufsize);
          }
      }

!     /*
!      * Worker has closed the connection, make sure to clean up before return
!      * since we are not returning msg (but did allocate it).
!      */
      pg_free(msg);
-
      return NULL;
  }

  #ifdef WIN32
  /*
!  * This is a replacement version of pipe for Win32 which allows returned
!  * handles to be used in select(). Note that read/write calls must be replaced
!  * with recv/send.  "handles" have to be integers so we check for errors then
!  * cast to integers.
   */
  static int
  pgpipe(int handles[2])
--- 1306,1357 ----
      int            ret;

      /*
!      * In theory, if we let piperead() read multiple bytes, it might give us
!      * back fragments of multiple messages.  (That can't actually occur, since
!      * neither master nor workers send more than one message without waiting
!      * for a reply, but we don't wish to assume that here.)  For simplicity,
!      * read a byte at a time until we get the terminating '\0'.  This method
!      * is a bit inefficient, but since this is only used for relatively short
!      * command and status strings, it shouldn't matter.
       */
      bufsize = 64;                /* could be any number */
      msg = (char *) pg_malloc(bufsize);
      msgsize = 0;
      for (;;)
      {
!         Assert(msgsize < bufsize);
          ret = piperead(fd, msg + msgsize, 1);
          if (ret <= 0)
!             break;                /* error or connection closure */

          Assert(ret == 1);

          if (msg[msgsize] == '\0')
!             return msg;            /* collected whole message */

          msgsize++;
!         if (msgsize == bufsize) /* enlarge buffer if needed */
          {
!             bufsize += 16;        /* could be any number */
              msg = (char *) pg_realloc(msg, bufsize);
          }
      }

!     /* Other end has closed the connection */
      pg_free(msg);
      return NULL;
  }

  #ifdef WIN32
  /*
!  * This is a replacement version of pipe(2) for Windows which allows the pipe
!  * handles to be used in select().
!  *
!  * Reads and writes on the pipe must go through piperead()/pipewrite().
!  *
!  * For consistency with Unix we declare the returned handles as "int".
!  * This is okay even on WIN64 because system handles are not more than
!  * 32 bits wide, but we do have to do some casting.
   */
  static int
  pgpipe(int handles[2])
*************** pgpipe(int handles[2])
*** 1349,1354 ****
--- 1416,1423 ----
      {
          write_msg(modulename, "pgpipe: could not connect socket: error code %d\n",
                    WSAGetLastError());
+         closesocket(handles[1]);
+         handles[1] = -1;
          closesocket(s);
          return -1;
      }
*************** pgpipe(int handles[2])
*** 1367,1381 ****
      return 0;
  }

  static int
  piperead(int s, char *buf, int len)
  {
      int            ret = recv(s, buf, len, 0);

      if (ret < 0 && WSAGetLastError() == WSAECONNRESET)
!         /* EOF on the pipe! (win32 socket based implementation) */
          ret = 0;
      return ret;
  }

! #endif
--- 1436,1455 ----
      return 0;
  }

+ /*
+  * Windows implementation of reading from a pipe.
+  */
  static int
  piperead(int s, char *buf, int len)
  {
      int            ret = recv(s, buf, len, 0);

      if (ret < 0 && WSAGetLastError() == WSAECONNRESET)
!     {
!         /* EOF on the pipe! */
          ret = 0;
+     }
      return ret;
  }

! #endif   /* WIN32 */
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 8ffd8f7..ad8e132 100644
*** a/src/bin/pg_dump/pg_backup_archiver.c
--- b/src/bin/pg_dump/pg_backup_archiver.c
*************** _allocAH(const char *FileSpec, const Arc
*** 2326,2331 ****
--- 2326,2334 ----
      return AH;
  }

+ /*
+  * Write out all data (tables & blobs)
+  */
  void
  WriteDataChunks(ArchiveHandle *AH, ParallelState *pstate)
  {
*************** WriteDataChunks(ArchiveHandle *AH, Paral
*** 2343,2357 ****
          {
              /*
               * If we are in a parallel backup, then we are always the master
!              * process.
               */
              EnsureIdleWorker(AH, pstate);
-             Assert(GetIdleWorker(pstate) != NO_SLOT);
              DispatchJobForTocEntry(AH, pstate, te, ACT_DUMP);
          }
          else
              WriteDataChunksForTocEntry(AH, te);
      }
      EnsureWorkersFinished(AH, pstate);
  }

--- 2346,2363 ----
          {
              /*
               * If we are in a parallel backup, then we are always the master
!              * process.  Dispatch each data-transfer job to a worker.
               */
              EnsureIdleWorker(AH, pstate);
              DispatchJobForTocEntry(AH, pstate, te, ACT_DUMP);
          }
          else
              WriteDataChunksForTocEntry(AH, te);
      }
+
+     /*
+      * If parallel, wait for workers to finish.
+      */
      EnsureWorkersFinished(AH, pstate);
  }

*************** restore_toc_entries_parallel(ArchiveHand
*** 3819,3831 ****

              par_list_remove(next_work_item);

-             Assert(GetIdleWorker(pstate) != NO_SLOT);
              DispatchJobForTocEntry(AH, pstate, next_work_item, ACT_RESTORE);
          }
          else
          {
              /* at least one child is working and we have nothing ready. */
-             Assert(!IsEveryWorkerIdle(pstate));
          }

          for (;;)
--- 3825,3835 ----

Re: Parallel pg_dump's error reporting doesn't work worth squat

From
Alvaro Herrera
Date:
Regarding this:

> *************** select_loop(int maxFd, fd_set *workerset
> *** 1092,1104 ****
> --- 1150,1160 ----
>       fd_set        saveSet = *workerset;
>   
>   #ifdef WIN32
>       for (;;)
>       {
>           /*
>            * sleep a quarter of a second before checking if we should terminate.
> +          * XXX this certainly looks useless, why not just wait indefinitely?
>            */
>           struct timeval tv = {0, 250000};
>   

There's another select_loop() in vacuumdb.c suggesting that the timeout
is used to check for cancel requests; as I understood while working on
the vacuumdb changes, select() is not interrupted in that case.  I
suppose that means it's necessary here also.  But on the other hand it's
quite possible that the original developer just copied what was in
pg_dump and that it's not actually needed; if that's the case, perhaps
it's better to rip it out from both places.

https://www.postgresql.org/message-id/20150122174601.GB1663@alvh.no-ip.org

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: Parallel pg_dump's error reporting doesn't work worth squat

From
Tom Lane
Date:
Alvaro Herrera <alvherre@2ndquadrant.com> writes:
> Regarding this:
>> +          * XXX this certainly looks useless, why not just wait indefinitely?

> There's another select_loop() in vacuumdb.c suggesting that the timeout
> is used to check for cancel requests; as I understood while working on
> the vacuumdb changes, select() is not interrupted in that case.  I
> suppose that means it's necessary here also.  But on the other hand it's
> quite possible that the original developer just copied what was in
> pg_dump and that it's not actually needed; if that's the case, perhaps
> it's better to rip it out from both places.

Ah, interesting.  That ties into something else I was wondering about,
which is how we could get useful control-C cancellation on Windows.
It looks like the vacuumdb.c version of this code actually is tied
into an interrupt handler, but whoever copied it for parallel.c just
ripped out the CancelRequested checks, making the looping behavior
pretty useless.

For pg_restore (parallel or not) it would be useful if the program
didn't just fall over at control-C but actually sent cancel requests
to the backend(s).  It's not such a problem if we're transferring
data, but if we're waiting for some slow operation like CREATE INDEX,
the current behavior isn't very good.  On the Unix side we have some
SIGINT infrastructure there already, but I don't see any for Windows.

So now I'm thinking we should leave that alone, with the expectation
that we'll be putting CancelRequested checks back in at some point.

> https://www.postgresql.org/message-id/20150122174601.GB1663@alvh.no-ip.org

Hmm, did the patch you're discussing there get committed?
        regards, tom lane



Re: Parallel pg_dump's error reporting doesn't work worth squat

From
Amit Kapila
Date:
On Sat, May 28, 2016 at 5:06 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> Alvaro Herrera <alvherre@2ndquadrant.com> writes:
> > Regarding this:
> >> +             * XXX this certainly looks useless, why not just wait indefinitely?
>
> > There's another select_loop() in vacuumdb.c suggesting that the timeout
> > is used to check for cancel requests; as I understood while working on
> > the vacuumdb changes, select() is not interrupted in that case.  I
> > suppose that means it's necessary here also.  But on the other hand it's
> > quite possible that the original developer just copied what was in
> > pg_dump and that it's not actually needed; if that's the case, perhaps
> > it's better to rip it out from both places.
>
> Ah, interesting.  That ties into something else I was wondering about,
> which is how we could get useful control-C cancellation on Windows.
> It looks like the vacuumdb.c version of this code actually is tied
> into an interrupt handler, but whoever copied it for parallel.c just
> ripped out the CancelRequested checks, making the looping behavior
> pretty useless.
>

It seems to me that the CancelRequested checks were introduced in vacuumdb.c as part of commit a1792320, while parallel.c's version of select_loop dates from commit 9e257a18, which was committed earlier.  I think control-C handling for Windows in parallel.c is missing, or if there is some way to deal with it, it is clearly not the same as what we do in vacuumdb.c.

> For pg_restore (parallel or not) it would be useful if the program
> didn't just fall over at control-C but actually sent cancel requests
> to the backend(s).  It's not such a problem if we're transferring
> data, but if we're waiting for some slow operation like CREATE INDEX,
> the current behavior isn't very good.  On the Unix side we have some
> SIGINT infrastructure there already, but I don't see any for Windows.
>
> So now I'm thinking we should leave that alone, with the expectation
> that we'll be putting CancelRequested checks back in at some point.
>
> > https://www.postgresql.org/message-id/20150122174601.GB1663@alvh.no-ip.org
>
> Hmm, did the patch you're discussing there get committed?
>

Yes, it was committed - a1792320



With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

Re: Parallel pg_dump's error reporting doesn't work worth squat

From
Tom Lane
Date:
Amit Kapila <amit.kapila16@gmail.com> writes:
> On Sat, May 28, 2016 at 5:06 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> It looks like the vacuumdb.c version of this code actually is tied
>> into an interrupt handler, but whoever copied it for parallel.c just
>> ripped out the CancelRequested checks, making the looping behavior
>> pretty useless.

> It seems to me that CancelRequested checks were introduced in vacuumdb.c as
> part of commit a1792320 and select_loop for parallel.c version exists from
> commit 9e257a18 which got committed earlier.

Huh, interesting.  I wonder how parallel.c's select_loop got to be like
that then?  The git history offers no clue: it is essentially the same as
HEAD as far back as the initial commit of parallel.c.  It certainly looks
like someone intended to introduce a cancel check and never did, or had
one and took it out without simplifying the rest of the logic.

Anyway, AFAICS the time limit on select() wait is completely useless in
the code as it stands; but we'll likely want to add a cancel check there,
so ripping it out wouldn't be a good plan.  I'll change the added comment.
        regards, tom lane



Re: Parallel pg_dump's error reporting doesn't work worth squat

From
Kyotaro HORIGUCHI
Date:
Hello,

At Fri, 27 May 2016 13:20:20 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in <14603.1464369620@sss.pgh.pa.us>
> Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> writes:
> > By the way, the reason of the "invalid snapshot identifier" is
> > that some worker threads try to use it after the connection on
> > the first worker closed.
> 
> ... BTW, I don't quite see what the issue is there.  The snapshot is
> exported from the master session, so errors in worker sessions should not
> cause such failures in other workers.  And I don't see any such failure
> when setting up a scenario that will cause a worker to fail on Linux.
> The "invalid snapshot identifier" bleats would make sense if you had
> gotten a server-side error (and transaction abort) in the master session,
> but I don't see any evidence that that happened in that example.  Might be
> worth seeing if that's reproducible.

The master session died from the lack of libz (a consequence of the
compressLevel propagation failure that has already been fixed). Some
of the children that started their transactions after the master's
death will get the error.

Similarly, a sudden close of the master child's session very early in
its transaction could cause the same symptom, but that seems unlikely
if the master reliably reaches the command-waiting, or "safe", state.

If we want to prevent it completely, one solution could be for the
non-master children to explicitly wait for the master to arrive at the
"safe" state before starting their transactions. But I suppose that is
not needed here.

Does this make sense?

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center





Re: Parallel pg_dump's error reporting doesn't work worth squat

From
Tom Lane
Date:
Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> writes:
> At Fri, 27 May 2016 13:20:20 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in <14603.1464369620@sss.pgh.pa.us>
>> Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> writes:
>>> By the way, the reason of the "invalid snapshot identifier" is
>>> that some worker threads try to use it after the connection on
>>> the first worker closed.

>> ... BTW, I don't quite see what the issue is there.

> The master session died from lack of libz and the failure of
> compressLevel's propagation already fixed. Some of the children
> that started transactions after the master's death will get the
> error.

I don't think I believe that theory, because it would require the master
to not notice the lack of libz before it launches worker processes, but
instead while the workers are working.  But AFAICS, while there are worker
processes open, the master does nothing except wait for workers and
dispatch new jobs to them; it does no database work of its own.  So the
libz-isn't-there error has to have occurred in one of the workers.

> If we want prevent it perfectly, one solution could be that
> non-master children explicitly wait the master to arrive at the
> "safe" state before starting their transactions. But I suppose it
> is not needed here.

Actually, I believe the problem is in archive_close_connection, around
line 295 in HEAD: once the master realizes that one child has failed,
it first closes its own database connection and only second tries to kill
the remaining children.  So there's a race condition wherein remaining
children have time to see the missing-snapshot error.

In the patch I posted yesterday, I reversed the order of those two
steps, which should fix this problem in most scenarios:
https://www.postgresql.org/message-id/7005.1464657274@sss.pgh.pa.us
        regards, tom lane



Re: Parallel pg_dump's error reporting doesn't work worth squat

From
Kyotaro HORIGUCHI
Date:
At Tue, 31 May 2016 12:29:50 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in <7445.1464712190@sss.pgh.pa.us>
> Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> writes:
> > At Fri, 27 May 2016 13:20:20 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in <14603.1464369620@sss.pgh.pa.us>
> >> Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> writes:
> >>> By the way, the reason of the "invalid snapshot identifier" is
> >>> that some worker threads try to use it after the connection on
> >>> the first worker closed.
> 
> >> ... BTW, I don't quite see what the issue is there.
> 
> > The master session died from lack of libz and the failure of
> > compressLevel's propagation already fixed. Some of the children
> > that started transactions after the master's death will get the
> > error.
> 
> I don't think I believe that theory, because it would require the master
> to not notice the lack of libz before it launches worker processes, but
> instead while the workers are working.

The master actually *didn't* notice the lack of libz until it
launched worker processes, before cae2bb1. So the current master
doesn't suffer the problem, but it is not desirable that the sudden
death of a child, for whatever reason, causes this kind of behavior.

>  But AFAICS, while there are worker
> processes open, the master does nothing except wait for workers and
> dispatch new jobs to them; it does no database work of its own.  So the
> libz-isn't-there error has to have occurred in one of the workers.

Yes, the first-commanded worker dies from that, and then the master
disconnects the connection owning the snapshot before terminating
any other workers. It occurs with the current master (9ee56df)
minus cae2bb1, configured --without-zlib.

> > If we want prevent it perfectly, one solution could be that
> > non-master children explicitly wait the master to arrive at the
> > "safe" state before starting their transactions. But I suppose it
> > is not needed here.
> 
> Actually, I believe the problem is in archive_close_connection, around
> line 295 in HEAD: once the master realizes that one child has failed,
> it first closes its own database connection and only second tries to kill
> the remaining children.  So there's a race condition wherein remaining
> children have time to see the missing-snapshot error.

Agreed.

> In the patch I posted yesterday, I reversed the order of those two
> steps, which should fix this problem in most scenarios:
> https://www.postgresql.org/message-id/7005.1464657274@sss.pgh.pa.us

Yeah, just transposing DisconnectDatabase and ShutdownWorkersHard
in archive_close_connection fixed the problem.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center





Re: Parallel pg_dump's error reporting doesn't work worth squat

From
Kyotaro HORIGUCHI
Date:
Apart from the invalid snapshot problem, I looked at the previously
mentioned patch, mainly on Windows.

At Tue, 31 May 2016 12:29:50 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in <7445.1464712190@sss.pgh.pa.us>
> In the patch I posted yesterday, I reversed the order of those two
> steps, which should fix this problem in most scenarios:
> https://www.postgresql.org/message-id/7005.1464657274@sss.pgh.pa.us

After entering ctrl-C while parallel pg_dump was running, I saw the
following error message repeated on my console; it is not seen on
Linux, thanks to the forcible termination of worker processes in
sigTermHandler().

> pg_dump: Dumping the contents of table "t" failed: PQgetResult() failed.
> pg_dump: Error message from server: ERROR:  canceling statement due to user request
> pg_dump: The command was: COPY public.t (a) TO stdout;

We could provide a global flag (wantAbort?) that signals that queries
should be canceled, but that would require adding checks for it
everywhere.

Even though the threads started by beginthread cannot be terminated
cleanly from outside, the whole process will soon terminate anyway,
so we could use TerminateThread. This seems to work. (Attached patch.)
We might even be allowed to call exit() in consoleHandler().


Other questions follow.

Is there any reason for the name "ourAH" not to be "myAH"?

setup_cancel_handler looks somewhat bizarre. It eventually works
only for the main process/thread and does nothing for workers. It
would be enough to run it once before forking in ParallelBackupStart,
and that would make handler_set unnecessary.

In EndDBCopyMode, the result of PQgetResult is abandoned. This can
leak memory, and such usage is not seen elsewhere in the source
tree. Shouldn't we hold the result and PQclear() it? (Mainly as a
convention, not for any concrete benefit.)

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center
diff --git a/src/bin/pg_dump/parallel.c b/src/bin/pg_dump/parallel.c
index cf97d9c..fb48a02 100644
--- a/src/bin/pg_dump/parallel.c
+++ b/src/bin/pg_dump/parallel.c
@@ -618,6 +618,12 @@ consoleHandler(DWORD dwCtrlType)
 			{
 				ArchiveHandle *AH = signal_info.pstate->parallelSlot[i].args->AH;
+				/*
+				 * Using TerminateThread here may leave some resources leaked
+				 * but whole this process will soon die.
+				 */
+				TerminateThread(signal_info.pstate->parallelSlot[i].hThread, 0);
+
 				if (AH != NULL && AH->connCancel != NULL)
 					(void) PQcancel(AH->connCancel, errbuf,
 									sizeof(errbuf));
 			}
@@ -632,6 +638,8 @@ consoleHandler(DWORD dwCtrlType)
 							errbuf, sizeof(errbuf));
 
 		LeaveCriticalSection(&signal_info_lock);
+
+		write_msg(NULL, "terminated by user\n");
 	}
 
 	/* Always return FALSE to allow signal handling to continue */

Re: Parallel pg_dump's error reporting doesn't work worth squat

From
Tom Lane
Date:
Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> writes:
> Apart from the invalid snapshot problem, I looked the patch
> previously mentioned mainly for Windows.

Thanks for looking!

> Even though the threads started by beginthread cannot be
> terminated cleanly from outside, but the whole process will soon
> terminate anyway, so we could use TerminateThread. This seems
> working. (Attached patch)

Seems reasonable to me; I was unhappy about the lack of any direct
equivalent to the child SIGTERMs that the Unix code does.

> Is there any reason for the name "ourAH" not to be "myAH"?

Don't much care, I'll change it.

> setup_cancel_handler looks somewhat bizarre. It eventually works
> only for the main process/thread and does nothing for workers. It
> is enough to be run once before forking in ParallelBackupStart and
> that makes handler_set unnecessary.

No, because we also want it to work in non-parallel cases.  As coded,
we'll establish the handler whenever set_archive_cancel_info is first
called, which will be in the first ConnectDatabase() call.  It would
be possible to do that someplace else maybe, but it would require
more code changes than just attaching it to set_archive_cancel_info.

> In EndDBCopyMode, the result of PQgetResult is abandoned. This
> can leak memory and such usage is not seen elsewhere in the
> source tree. Shouldn't we hold the result and PQclear it? (Mainly
> as a convention, not for benefit.)

There should never be any non-null result, so that seems like a waste
of code to me.
        regards, tom lane



Re: Parallel pg_dump's error reporting doesn't work worth squat

From
Alvaro Herrera
Date:
Tom Lane wrote:
> Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> writes:
> > Apart from the invalid snapshot problem, I looked the patch
> > previously mentioned mainly for Windows.
> 
> Thanks for looking!
> 
> > Even though the threads started by beginthread cannot be
> > terminated cleanly from outside, but the whole process will soon
> > > terminate anyway, so we could use TerminateThread. This seems
> > working. (Attached patch)
> 
> Seems reasonable to me; I was unhappy about the lack of any direct
> equivalent to the child SIGTERMs that the Unix code does.

Given this testing, it's clear that the timeout on select() is useless;
we could get rid of it in vacuumdb.c too.  I'll post a patch later.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: Parallel pg_dump's error reporting doesn't work worth squat

From
Kyotaro HORIGUCHI
Date:
At Thu, 2 Jun 2016 12:17:11 -0400, Alvaro Herrera <alvherre@2ndquadrant.com> wrote in
<20160602161711.GA239156@alvherre.pgsql>
> Tom Lane wrote:
> > Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> writes:
> > > Apart from the invalid snapshot problem, I looked the patch
> > > previously mentioned mainly for Windows.
> > 
> > Thanks for looking!
> > 
> > > Even though the threads started by beginthread cannot be
> > > terminated cleanly from outside, the whole process will soon
> > > terminate anyway, so we could use TerminateThread. This seems
> > > to work. (Attached patch)
> > 
> > Seems reasonable to me; I was unhappy about the lack of any direct
> > equivalent to the child SIGTERMs that the Unix code does.

For sure, none of the "dangers" of TerminateThread matters
in this case.

https://msdn.microsoft.com/en-us/library/windows/desktop/ms686717(v=vs.85).aspx

> If the target thread owns a critical section, the critical
> section will not be released.
> 
> If the target thread is allocating memory from the heap, the heap
> lock will not be released.
> 
> If the target thread is executing certain kernel32 calls when it
> is terminated, the kernel32 state for the thread's process could
> be inconsistent.
> 
> If the target thread is manipulating the global state of a shared
> DLL, the state of the DLL could be destroyed, affecting other
> users of the DLL.



> Given this testing, it's clear that the timeout on select() is useless;
> we could get rid of it in vacuumdb.c too.  I'll post a patch later.

Agreed. The command pipes can be left in blocking mode in that case.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center





Re: Parallel pg_dump's error reporting doesn't work worth squat

From
Tom Lane
Date:
Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> writes:
> For sure, none of the "dangers" of TerminateThread matters
> in this case.

I think that this one:

>> If the target thread is allocating memory from the heap, the heap
>> lock will not be released.

is potentially a hazard, which is why I made sure to use write_stderr
later on in the console interrupt handler.  Your original suggestion
to use write_msg would end up going through fprintf, which might well
use malloc internally.  (It's possible that Windows' version of write()
could too, I suppose, but that's probably as low-level as we are
going to get.)
        regards, tom lane



Re: Parallel pg_dump's error reporting doesn't work worth squat

From
Kyotaro HORIGUCHI
Date:
How about silencing the workers on termination?

# Building on Windows (with VC?) is very time-consuming...

At Fri, 03 Jun 2016 09:44:30 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in <11515.1464961470@sss.pgh.pa.us>
> Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> writes:
> > For sure, none of the "dangers" of TerminateThread matters
> > in this case.
> 
> I think that this one:
> 
> >> If the target thread is allocating memory from the heap, the heap
> >> lock will not be released.
> 
> is potentially a hazard, which is why I made sure to use write_stderr
> later on in the console interrupt handler.  Your original suggestion
> to use write_msg would end up going through fprintf, which might well
> use malloc internally.  (It's possible that Windows' version of write()
> could too, I suppose, but that's probably as low-level as we are
> going to get.)

I have to admit that I forgot about the possible malloc's, and
PQcancel() can be blocked for the same reason. What is worse,
even if we managed to avoid this particular disaster, the MSDN
page says the problems listed are only *examples*. So if we don't
want to allow any possibility of unknown blocking on ctrl-C
termination and still want to forcibly terminate workers, we
might have to use processes instead of threads for the workers.
(This might reduce the WIN32-related differences..)

We might also be able to force immediate process termination on a
second Ctrl-C, but that seems to lead to other issues.

If the issue to be settled here is the unwanted error messages,
we could set a flag instructing write_msg to stay silent. The
workers should be dead by that time anyway, so discarding any
error messages doesn't matter.

What do you think about this?


The timeout for select() still seems to be unnecessary.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center

diff --git a/src/bin/pg_dump/parallel.c b/src/bin/pg_dump/parallel.c
index fb48a02..b74a396 100644
--- a/src/bin/pg_dump/parallel.c
+++ b/src/bin/pg_dump/parallel.c
@@ -147,7 +147,11 @@ static DWORD tls_index;
 
 /* globally visible variables (needed by exit_nicely) */
 bool		parallel_init_done = false;
 
-DWORD		mainThreadId;
+bool		is_terminating = false;
+static DWORD	mainThreadId;
+static bool		handler_active = false;
+static DWORD	handlerThreadId;
+#endif   /* WIN32 */
 
 static const char *modulename = gettext_noop("parallel archiver");
@@ -180,9 +184,26 @@ static char *readMessageFromPipe(int fd);
 
 /*
- * Shutdown callback to clean up socket access
+ * Return true if I am the main thread
+ *
+ * This function regards the console handler thread as the main thread. We
+ * need a separate boolean for validity of the handlerThreadId since an
+ * invalid value for thread id is not defined.
  */
 #ifdef WIN32
+bool
+am_main_thread(void)
+{
+	DWORD current_thread_id = GetCurrentThreadId();
+
+	return (mainThreadId == current_thread_id ||
+			(handler_active &&
+			 handlerThreadId == current_thread_id));
+}
+
+/*
+ * Shutdown callback to clean up socket access
+ */
 static void
 shutdown_parallel_dump_utils(int code, void *unused)
 {
@@ -190,7 +211,7 @@ shutdown_parallel_dump_utils(int code, void *unused)
 	if (mainThreadId == GetCurrentThreadId())
 		WSACleanup();
 }
 
-#endif
+#endif /* ifdef WIN32 */
 
 /*
  * Initialize parallel dump support --- should be called early in process
@@ -390,6 +411,8 @@ ShutdownWorkersHard(ParallelState *pstate)
 	 * On Windows, send query cancels directly to the workers' backends.  Use
 	 * a critical section to ensure worker threads don't change state.
 	 */
+	is_terminating = true;		/* Silence workers */
+
 	EnterCriticalSection(&signal_info_lock);
 	for (i = 0; i < pstate->numWorkers; i++)
 	{
@@ -602,6 +625,15 @@ consoleHandler(DWORD dwCtrlType)
 	if (dwCtrlType == CTRL_C_EVENT ||
 		dwCtrlType == CTRL_BREAK_EVENT)
 	{
+		/*
+		 * We don't forcibly terminate workers. Instead, these
+		 * variables make them silent ever after. This handler thread
+		 * is regarded as the main thread.
+		 */
+		is_terminating = true;
+		handlerThreadId = GetCurrentThreadId();
+		handler_active = true;
+
 		/* Critical section prevents changing data we look at here */
 		EnterCriticalSection(&signal_info_lock);
@@ -642,6 +674,8 @@ consoleHandler(DWORD dwCtrlType)
 		write_msg(NULL, "terminated by user\n");
 	}
+	handler_active = false;
+
 	/* Always return FALSE to allow signal handling to continue */
 	return FALSE;
 }
diff --git a/src/bin/pg_dump/parallel.h b/src/bin/pg_dump/parallel.h
index 21739ca..59fba33 100644
--- a/src/bin/pg_dump/parallel.h
+++ b/src/bin/pg_dump/parallel.h
@@ -64,11 +64,11 @@ typedef struct ParallelState
 
 #ifdef WIN32
 extern bool parallel_init_done;
-extern DWORD mainThreadId;
+extern bool is_terminating;
 #endif
 
 extern void init_parallel_dump_utils(void);
-
+extern bool am_main_thread(void);
 extern int	GetIdleWorker(ParallelState *pstate);
 extern bool IsEveryWorkerIdle(ParallelState *pstate);
 extern void ListenToWorkers(ArchiveHandle *AH, ParallelState *pstate, bool do_wait);
diff --git a/src/bin/pg_dump/pg_backup_utils.c b/src/bin/pg_dump/pg_backup_utils.c
index 01bf576..38c57bc 100644
--- a/src/bin/pg_dump/pg_backup_utils.c
+++ b/src/bin/pg_dump/pg_backup_utils.c
@@ -72,6 +72,15 @@ write_msg(const char *modulename, const char *fmt,...)
 {
 	va_list		ap;
 
+#ifdef WIN32
+	/*
+	 * On Windows, we don't forcibly terminate workers when ctrl-C is
+	 * entered. Instead, we sit silent.
+	 */
+	if (is_terminating && !am_main_thread())
+		return;
+#endif
+
 	va_start(ap, fmt);
 	vwrite_msg(modulename, fmt, ap);
 	va_end(ap);
@@ -148,7 +157,7 @@ exit_nicely(int code)
 										on_exit_nicely_list[i].arg);
 
 #ifdef WIN32
-	if (parallel_init_done && GetCurrentThreadId() != mainThreadId)
+	if (parallel_init_done && !am_main_thread())
 		_endthreadex(code);
 #endif

Re: Parallel pg_dump's error reporting doesn't work worth squat

From
Tom Lane
Date:
Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> writes:
> At Fri, 03 Jun 2016 09:44:30 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in <11515.1464961470@sss.pgh.pa.us>
>> I think that this one:
>>> If the target thread is allocating memory from the heap, the heap
>>> lock will not be released.
>> is potentially a hazard, which is why I made sure to use write_stderr
>> later on in the console interrupt handler.  Your original suggestion
>> to use write_msg would end up going through fprintf, which might well
>> use malloc internally.  (It's possible that Windows' version of write()
>> could too, I suppose, but that's probably as low-level as we are
>> going to get.)

> I have to admit that I forgot about the possible malloc's, and
> PQcancel() can be blocked for the same reason.

Uh, what?  PQcancel is very carefully coded so that it's safe to use
in a signal handler.  If it's doing mallocs someplace, that would be
surprising.

> If the issue to be settled here is the unwanted error messages,
> we could set a flag instructing write_msg to stay silent. The
> workers should be dead by that time anyway, so discarding any
> error messages doesn't matter.
> What do you think about this?

This is really ugly and I'm unconvinced that it fixes anything.
write_msg is hardly the only place in a worker thread that might
be doing malloc's; moreover, preventing workers from entering it
after we start a shutdown does nothing for workers that might be
in it already.
        regards, tom lane



Re: Parallel pg_dump's error reporting doesn't work worth squat

From
Kyotaro HORIGUCHI
Date:
At Mon, 06 Jun 2016 11:12:14 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in <17504.1465225934@sss.pgh.pa.us>
> Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> writes:
> > At Fri, 03 Jun 2016 09:44:30 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in <11515.1464961470@sss.pgh.pa.us>
> >> I think that this one:
> >>> If the target thread is allocating memory from the heap, the heap
> >>> lock will not be released.
> >> is potentially a hazard, which is why I made sure to use write_stderr
> >> later on in the console interrupt handler.  Your original suggestion
> >> to use write_msg would end up going through fprintf, which might well
> >> use malloc internally.  (It's possible that Windows' version of write()
> >> could too, I suppose, but that's probably as low-level as we are
> >> going to get.)
> 
> > I have to admit that I forgot about the possible malloc's, and
> > PQcancel() can be blocked for the same reason.
> 
> Uh, what?  PQcancel is very carefully coded so that it's safe to use
> in a signal handler.  If it's doing mallocs someplace, that would be
> surprising.

PQcancel is designed to run in a signal handler on *Linux*, but
the point under discussion here is that the Windows equivalents
of send/recv and similar calls can be blocked, via the heap
lock, by a TerminateThread'ed thread.

> > If the issue to be settled here is the unwanted error messages,
> > we could set a flag instructing write_msg to stay silent. The
> > workers should be dead by that time anyway, so discarding any
> > error messages doesn't matter.
> > What do you think about this?
> 
> This is really ugly and I'm unconvinced that it fixes anything.
> write_msg is hardly the only place in a worker thread that might
> be doing malloc's; moreover, preventing workers from entering it
> after we start a shutdown does nothing for workers that might be
> in it already.

Yeah, it's ugly. But if we assume write() can be blocked, the
similar system calls used within PQcancel can be blocked, too.
If we don't assume so, using write() instead of write_msg()
would do.

The problem, I think, is that we don't have (or they don't
publish?) enough knowledge about the internals of the Windows
APIs.


regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center





Re: Parallel pg_dump's error reporting doesn't work worth squat

From
Tom Lane
Date:
Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> writes:
> At Mon, 06 Jun 2016 11:12:14 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in <17504.1465225934@sss.pgh.pa.us>
>> Uh, what?  PQcancel is very carefully coded so that it's safe to use
>> in a signal handler.  If it's doing mallocs someplace, that would be
>> surprising.

> PQcancel is designed to run in a signal handler on *Linux*, but
> the point under discussion here is that the Windows equivalents
> of send/recv and similar calls can be blocked, via the heap
> lock, by a TerminateThread'ed thread.

What's your evidence for that?  I'd expect those to be kernel calls,
or whatever the equivalent concept is on Windows.  If they are not,
and in particular if they do malloc's, I think we have got problems
in many other contexts besides parallel dump.
        regards, tom lane



Re: Parallel pg_dump's error reporting doesn't work worth squat

From
Kyotaro HORIGUCHI
Date:
At Tue, 07 Jun 2016 15:38:04 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in <24181.1465328284@sss.pgh.pa.us>
> Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> writes:
> > At Mon, 06 Jun 2016 11:12:14 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in <17504.1465225934@sss.pgh.pa.us>
> >> Uh, what?  PQcancel is very carefully coded so that it's safe to use
> >> in a signal handler.  If it's doing mallocs someplace, that would be
> >> surprising.
> 
> > PQcancel is disigned to run in a signal handler on *Linux*, but
> > the discussion here is that the equivalent of send/recv and the
> > similars on Windows can be blocked by the TerminateThread'ed
> > thread via heap lock.
> 
> What's your evidence for that?  I'd expect those to be kernel calls,
> or whatever the equivalent concept is on Windows.  If they are not,
> and in particular if they do malloc's, I think we have got problems
> in many other contexts besides parallel dump.

Well, I found that I misunderstood your following sentence.

> Your original suggestion to use write_msg would end up going
> through fprintf, which might well use malloc internally.  (It's
> possible that Windows' version of write() could too, I suppose,
> but that's probably as low-level as we are going to get.)

I took the sentence enclosed in parentheses to mean that write()
or other syscalls may be blocked by the heap lock on Windows.
But it should mean "we have no choice but to regard Windows'
version of write() as a kernel call, that is, heap-lock safe".
That seems quite natural and I totally agree with the judgment.

Consequently, I agree to just use write(), not write_msg(), and
to consider the combination of PQcancel and TerminateThread safe
on Windows.

Sorry for my confusion.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center