Re: [HACKERS] Bottlenecks with large number of relation segment files

From: KONDO Mitsumasa
Subject: Re: [HACKERS] Bottlenecks with large number of relation segment files
Date:
Msg-id: 5200CDBD.2020405@lab.ntt.co.jp
In response to: Re: Bottlenecks with large number of relation segment files (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: [HACKERS] Bottlenecks with large number of relation segment files (Andres Freund <andres@2ndquadrant.com>)
List: pgsql-general
(2013/08/05 21:23), Tom Lane wrote:
> Andres Freund <andres@2ndquadrant.com> writes:
>> ... Also, there are global
>> limits to the amount of filehandles that can simultaneously opened on a
>> system.
>
> Yeah.  Raising max_files_per_process puts you at serious risk that
> everything else on the box will start falling over for lack of available
> FD slots.
Is that really the case? When I use Hadoop-like NoSQL storage, I set a large number of FDs.
In fact, the Hadoop wiki says the following:

http://wiki.apache.org/hadoop/TooManyOpenFiles
> Too Many Open Files
>
> You can see this on Linux machines in client-side applications, server code or even in test runs.
> It is caused by per-process limits on the number of files that a single user/process can have open, which was
> introduced in the 2.6.27 kernel. The default value, 128, was chosen because "that should be enough".
>
> In Hadoop, it isn't.
~
> ulimit -n 8192
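
For reference, here is a minimal sketch of how one might inspect and raise these limits on Linux (the procfs paths are standard Linux files; 8192 is only an example value, not a recommendation):

    $ ulimit -n                    # per-process soft limit for open files
    $ ulimit -Hn                   # per-process hard limit
    $ cat /proc/sys/fs/file-max    # system-wide maximum number of FDs
    $ cat /proc/sys/fs/file-nr     # allocated / unused / maximum

    # raise the soft limit for the current shell (example value)
    $ ulimit -n 8192

Of course, as Tom says, the system-wide limit still has to leave headroom for everything else on the box, so raising max_files_per_process should be done with that total in mind.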

Regards,
--
Mitsumasa KONDO
NTT Open Source Software Center

