Re: log chunking broken with large queries under load
| From | Tom Lane |
|---|---|
| Subject | Re: log chunking broken with large queries under load |
| Date | |
| Msg-id | 6290.1333382414@sss.pgh.pa.us |
| In reply to | Re: log chunking broken with large queries under load (Andrew Dunstan <andrew@dunslane.net>) |
| Replies | Re: log chunking broken with large queries under load |
| List | pgsql-hackers |
Andrew Dunstan <andrew@dunslane.net> writes:
> On 04/01/2012 06:34 PM, Andrew Dunstan wrote:
>> Some of my PostgreSQL Experts colleagues have been complaining to me
>> that servers under load with very large queries produce CSV log files
>> that are corrupted,
> We could just increase CHUNK_SLOTS in syslogger.c, but I opted instead
> to stripe the slots with a two-dimensional array, so we didn't have to
> search a larger number of slots for any given message. See the attached
> patch.
This seems like it isn't actually fixing the problem, only pushing out
the onset of trouble a bit. Should we not replace the fixed-size array
with a dynamic data structure?
regards, tom lane
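For illustration only (this is not the patch from the thread, nor the change eventually committed to syslogger.c): a minimal standalone C sketch of the kind of dynamic, per-PID chunk buffer Tom is alluding to, as opposed to a fixed array of CHUNK_SLOTS entries. The names save_buffer_entry, buffer_for_pid, and append_chunk are invented here and are not PostgreSQL APIs; the sketch uses a plain linked list and realloc'd buffers, assuming each backend's protocol chunks carry the sender's PID and a "last chunk" flag.

```c
/* Hypothetical sketch: dynamic per-PID reassembly buffers instead of a
 * fixed-size slot array.  Not the actual syslogger.c code. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>

typedef struct save_buffer_entry
{
    pid_t       pid;    /* sending backend */
    char       *data;   /* accumulated partial message */
    size_t      len;    /* bytes accumulated so far */
    struct save_buffer_entry *next;
} save_buffer_entry;

static save_buffer_entry *buffer_list = NULL;

/* Find the buffer for a given PID, allocating one if requested. */
static save_buffer_entry *
buffer_for_pid(pid_t pid, int create)
{
    save_buffer_entry *e;

    for (e = buffer_list; e != NULL; e = e->next)
        if (e->pid == pid)
            return e;
    if (!create)
        return NULL;

    e = malloc(sizeof(save_buffer_entry));
    e->pid = pid;
    e->data = NULL;
    e->len = 0;
    e->next = buffer_list;      /* no fixed slot count: list grows as needed */
    buffer_list = e;
    return e;
}

/* Append one chunk; on the final chunk, emit the whole message and reset. */
static void
append_chunk(pid_t pid, const char *chunk, size_t chunklen, int is_last)
{
    save_buffer_entry *e = buffer_for_pid(pid, 1);

    e->data = realloc(e->data, e->len + chunklen + 1);
    memcpy(e->data + e->len, chunk, chunklen);
    e->len += chunklen;
    e->data[e->len] = '\0';

    if (is_last)
    {
        printf("pid %d: %s\n", (int) pid, e->data);
        free(e->data);
        e->data = NULL;
        e->len = 0;             /* entry kept around for reuse in this sketch */
    }
}

int
main(void)
{
    /* Interleaved chunks from two backends reassemble independently,
     * no matter how many backends are writing concurrently. */
    append_chunk(101, "SELECT ... ", 11, 0);
    append_chunk(202, "INSERT ... ", 11, 0);
    append_chunk(101, "FROM big_table;", 15, 1);
    append_chunk(202, "INTO t VALUES (1);", 18, 1);
    return 0;
}
```

The point of the sketch is only the structural contrast: a dynamic structure keyed by PID cannot run out of slots under load, whereas a fixed (or striped) array merely changes how many concurrent senders it takes before chunks start colliding.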