Re: Analyzing foreign tables & memory problems

From: Albe Laurenz
Subject: Re: Analyzing foreign tables & memory problems
Date:
Msg-id D960CB61B694CF459DCFB4B0128514C207D4F9DA@exadv11.host.magwien.gv.at
In response to: Re: Analyzing foreign tables & memory problems  ("Albe Laurenz" <laurenz.albe@wien.gv.at>)
Responses: Re: Analyzing foreign tables & memory problems  (Noah Misch <noah@leadboat.com>)
List: pgsql-hackers
I wrote:
> Noah Misch wrote:
>>> During ANALYZE, in analyze.c, functions compute_minimal_stats
>>> and compute_scalar_stats, values whose length exceeds
>>> WIDTH_THRESHOLD (= 1024) are not used for calculating statistics
>>> other than that they are counted as "too wide rows" and assumed
>>> to be all different.
>>>
>>> This works fine with regular tables;

>>> With foreign tables the situation is different.  Even though
>>> values exceeding WIDTH_THRESHOLD won't get used, the complete
>>> rows will be fetched from the foreign table.  This can easily
>>> exhaust maintenance_work_mem.

>>> I can think of two remedies:
>>> 1) Expose WIDTH_THRESHOLD in commands/vacuum.h and add documentation
>>>    so that the authors of foreign data wrappers are aware of the
>>>    problem and can avoid it on their side.
>>>    This would be quite simple.

>> Seems reasonable.  How would the FDW return an indication that a
>> value was non-NULL but removed due to excess width?
>
> The FDW would return a value of length WIDTH_THRESHOLD+1 that is
> long enough to be recognized as too long, but not long enough to
> cause a problem.

Here is a simple patch for that.

Yours,
Laurenz Albe

Attachments
