Re: [GENERAL] Bottlenecks with large number of relation segment files
| From | KONDO Mitsumasa |
|---|---|
| Subject | Re: [GENERAL] Bottlenecks with large number of relation segment files |
| Date | |
| Msg-id | 51FF5BC6.5000007@lab.ntt.co.jp |
| In reply to | Bottlenecks with large number of relation segment files (Amit Langote <amitlangote09@gmail.com>) |
| Responses | Re: [GENERAL] Bottlenecks with large number of relation segment files |
| List | pgsql-hackers |
Hi Amit,

(2013/08/05 15:23), Amit Langote wrote:
> May the routines in fd.c become bottleneck with a large number of
> concurrent connections to above database, say something like "pgbench
> -j 8 -c 128"? Is there any other place I should be paying attention
> to?

What kind of file system did you use? When opening a file on ext3 or ext4, the file system appears to search the directory sequentially for the file's inode.

Also, PostgreSQL limits each process to 1000 FDs, which seems too small. To change the per-process FD limit, edit "max_files_per_process = 1000;" in src/backend/storage/file/fd.c. I have already created a fix-patch that makes this settable in postgresql.conf, and will submit it in the next CF.

Regards,
--
Mitsumasa KONDO
NTT Open Source Software Center
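For reference, the line in question looks roughly like this in a 9.3-era source tree (a sketch; the surrounding comment is paraphrased, not quoted verbatim):

```c
/*
 * src/backend/storage/file/fd.c (excerpt, paraphrased)
 *
 * Default cap on the number of file descriptors a single backend keeps
 * open through the virtual file descriptor (VFD) machinery.
 */
int			max_files_per_process = 1000;
```

This default is also exposed as the max_files_per_process setting in postgresql.conf (an integer, default 1000; it can only be changed at server start), so it can be raised without patching the source, subject to the kernel's per-process open-file limit (ulimit -n).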