Re: pg_dump out of shared memory

From: tfo@alumni.brown.edu (Thomas F. O'Connell)
Subject: Re: pg_dump out of shared memory
Date:
Msg-id: 80c38bb1.0406210707.50894a15@posting.google.com
In reply to: pg_dump out of shared memory (tfo@alumni.brown.edu (Thomas F. O'Connell))
Responses: Re: pg_dump out of shared memory (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-general
tfo@alumni.brown.edu (Thomas F. O'Connell) wrote in message news:
> postgresql.conf just has the default of 1000 shared_buffers. The
> database itself has thousands of tables, some of which have rows
> numbering in the millions. Am I correct in thinking that, despite the
> hint, it's more likely that I need to up the shared_buffers?

So the answer here, verified by Tom Lane and by my own remedy to the
problem, is "no". Now I'm curious: why does pg_dump require that
max_connections * max_locks_per_transaction be greater than the number
of objects in the database? Or, if that's not the right assumption
about how pg_dump works, how does pg_dump obtain its locks, and why is
the error that it runs out of shared memory? Is there a portion of
shared memory set aside for locks? What is the shared lock table?
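
In case it helps anyone searching the archives later, here's a rough
sanity check, assuming the shared lock table holds on the order of
max_connections * max_locks_per_transaction entries and that pg_dump
takes an ACCESS SHARE lock on each table it dumps (this is a sketch of
the sizing logic, not a definitive account of the lock manager):

    -- Current lock-table sizing inputs:
    SHOW max_connections;
    SHOW max_locks_per_transaction;

    -- Approximate the number of lock slots pg_dump would need:
    -- ordinary tables only (relkind = 'r').
    SELECT count(*) FROM pg_class WHERE relkind = 'r';

If the table count exceeds the product of the two settings, raising
max_locks_per_transaction in postgresql.conf (the default is 64) and
restarting should make room.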

-tfo
