Re: log chunking broken with large queries under load
| From | Andrew Dunstan |
|---|---|
| Subject | Re: log chunking broken with large queries under load |
| Date | |
| Msg-id | 4F79B817.2080005@dunslane.net |
| In reply to | log chunking broken with large queries under load (Andrew Dunstan <andrew@dunslane.net>) |
| Responses | Re: log chunking broken with large queries under load |
| List | pgsql-hackers |
On 04/01/2012 06:34 PM, Andrew Dunstan wrote:
> Some of my PostgreSQL Experts colleagues have been complaining to me
> that servers under load with very large queries produce CSV log files
> that are corrupted, because lines are apparently multiplexed. The log
> chunking protocol between the errlog routines and the syslogger is
> supposed to prevent that, so I did a little work to try to reproduce
> it in a controlled way.

Well, a little further digging jogged my memory a bit. It looks like we badly underestimated how many messages would arrive as more than one chunk. We could just increase CHUNK_SLOTS in syslogger.c, but I opted instead to stripe the slots with a two-dimensional array, so we don't have to search a larger number of slots for any given message. See the attached patch.

I'm not sure how far we want to scale this up. I set CHUNK_STRIPES to 20 to start with, and I've asked some colleagues with very heavy log loads and very large queries to test it out if possible. If anyone else has a similar load, I'd appreciate similar testing.

cheers

andrew
Attachments