Re: [HACKERS] [ patch ] pg_dump: new --custom-fetch-table and --custom-fetch-value parameters

From: Robert Haas
Subject: Re: [HACKERS] [ patch ] pg_dump: new --custom-fetch-table and --custom-fetch-value parameters
Msg-id: CA+TgmobogBLHTnKSNgzM6QET30S+_KuU2JfVg43ztS5jTCqMxg@mail.gmail.com
In reply to: Re: [HACKERS] [ patch ] pg_dump: new --custom-fetch-table and --custom-fetch-value parameters ("Andrea Urbani" <matfanjol@mail.com>)
Responses: Re: [HACKERS] [ patch ] pg_dump: new --custom-fetch-table and --custom-fetch-value parameters ("Andrea Urbani" <matfanjol@mail.com>)
List: pgsql-hackers
On Fri, Jan 20, 2017 at 12:52 AM, Andrea Urbani <matfanjol@mail.com> wrote:
> I have used "custom" parameters because I want to decrease the fetch size only on the tables with big blob fields.
> If we remove the "custom-fetch-table" parameter and we provide only the "fetch-size" parameter all the tables will use
> the new fetch size and the execution time will be slower (according to my few tests). But just "fetch-size" will be
> faster to use and maybe more clear.
> Well, how to go on? I add it to the commitfest and somebody will decide and fix it?

OK, so I think the idea is that --custom-fetch-value affects only the
tables mentioned in --custom-fetch-table.  I understand why you want
to do it that way but it's kind of messy.  Suppose somebody else comes
along and wants to customize some other thing for some other set of
tables.  Then we'll have --custom2-otherthing and --custom2-tables?
Blech.
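
For context, pg_dump already pulls table data through a cursor in
fixed-size batches, and the 100-row batch mentioned below is what these
flags would shrink.  A rough sketch in SQL (the cursor and table names
here are made up, and the real query pg_dump builds is more involved):

    BEGIN;
    -- pg_dump opens a cursor over the table's data...
    DECLARE dump_cur CURSOR FOR SELECT * FROM big_blobs;
    -- ...then pulls rows in batches until the cursor is exhausted.
    FETCH 100 FROM dump_cur;   -- default: 100 rows per round trip
    -- With the patch, tables listed in --custom-fetch-table would
    -- instead be fetched --custom-fetch-value rows at a time, e.g.:
    FETCH 10 FROM dump_cur;
    COMMIT;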

Interestingly, this isn't the first attempt to solve a problem of this
type.  Kyotaro Horiguchi ran into a similar issue with postgres_fdw
trying to fetch too much data at once from a remote server:

https://www.postgresql.org/message-id/20150122.192739.164180273.horiguchi.kyotaro%40lab.ntt.co.jp

In the end, all that got done there was a table-level-configurable
fetch limit, and we could do the same thing here (e.g. by adding a
dummy storage parameter that only pg_dump uses).  But I think what we
really ought to do is what Kyotaro Horiguchi proposed originally: have
a way to limit the FETCH command to a certain number of bytes.  If
that number of bytes is exceeded, the FETCH stops after that row even
if the number of rows the user requested isn't fulfilled yet.  The
user can FETCH again if they want more.
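
Neither mechanism exists in PostgreSQL today, so the sketch below is
purely illustrative: the storage parameter name and the byte-cap FETCH
syntax are both invented here just to make the two ideas concrete.

    -- (a) Fallback: a dummy per-table storage parameter that only
    --     pg_dump would read ("pg_dump_fetch_rows" is a made-up name):
    ALTER TABLE big_blobs SET (pg_dump_fetch_rows = 10);

    -- (b) The byte-capped FETCH: a batch ends at whichever comes
    --     first, 100 rows or the byte limit, and the client simply
    --     issues another FETCH if it still wants more rows:
    FETCH 100 FROM dump_cur;                    -- today: row count only
    FETCH 100 MAX_BYTES 8388608 FROM dump_cur;  -- hypothetical syntax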

Tom wasn't a big fan of this idea, but I thought it was clever and
still do.  And it's undeniable that it provides a much better solution
to this problem than forcing the user to manually tweak the fetch size
based on their installation-specific knowledge of which tables have
blobs large enough that returning 100 rows at a time will exhaust the
local server's memory.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


