Re: Hitting the nfile limit

From Michael Brusser
Subject Re: Hitting the nfile limit
Date
Msg-id DEEIJKLFNJGBEMBLBAHCEEKJDFAA.michael@synchronicity.com
In reply to Re: Hitting the nfile limit  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-hackers
> > I wonder how Postgres handles this situation.
> > (Or power outage, or any hard system fault, at this point)
> 
> Theoretically we should be able to recover from this without loss of
> committed data (assuming you were running with fsync on).  Is your QA
> person certain that the record in question had been written by a
> successfully-committed transaction?
> 
He's saying that his test script did not write any new records, only
updated existing ones.
My uneducated guess at how an update may work:
- create a clone of the record to be updated and set the given field(s) to the new values;
- write the new record to the database and delete the original.

If this is the case, could it be that somewhere along these lines
Postgres ran into a problem and lost the record completely?
But all of this should happen inside a transaction, so... I don't know...
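For what it's worth, the guess above is close to how Postgres's MVCC actually behaves: an UPDATE writes a new row version and marks the old one as expired by the updating transaction, and dead versions are only reclaimed later (by VACUUM). A toy sketch of that idea, in Python rather than Postgres's C internals, with all names hypothetical:

```python
# Toy model of MVCC-style update: an update never modifies a row in
# place; it expires the old version and appends a new one, so each
# transaction sees a consistent snapshot.  Illustration only -- not
# Postgres code.

INFINITY = float("inf")

class TupleVersion:
    def __init__(self, data, xmin):
        self.data = data        # column values
        self.xmin = xmin        # txid that created this version
        self.xmax = INFINITY    # txid that expired it (inf = still live)

class Table:
    def __init__(self):
        self.heap = []

    def insert(self, data, txid):
        self.heap.append(TupleVersion(data, txid))

    def update(self, match, new_data, txid):
        for t in list(self.heap):
            if t.xmax == INFINITY and t.data == match:
                t.xmax = txid                # expire the old version
                self.insert(new_data, txid)  # write the new version

    def visible(self, txid):
        # a version is visible to txid if it was created at or before
        # txid and had not yet been expired as of txid
        return [t.data for t in self.heap if t.xmin <= txid < t.xmax]

tbl = Table()
tbl.insert({"id": 1, "val": "old"}, txid=1)
tbl.update({"id": 1, "val": "old"}, {"id": 1, "val": "new"}, txid=2)
print(tbl.visible(1))   # snapshot before the update: old version
print(tbl.visible(2))   # snapshot after the update: new version
```

The point relevant to the crash question: because the old version is only marked expired, not overwritten, a committed row should survive a crash mid-update as long as the WAL record for the commit made it to disk.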


As for fsync, we currently go with whatever the default value is,
and the same for wal_sync_method.
Does anyone have an estimate of the performance penalty of
turning fsync on?
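The penalty depends heavily on the disk and workload, so one crude way to get a number for a particular machine is to time synced versus unsynced writes directly. A throwaway Python sketch (not Postgres code; the record size and count are arbitrary):

```python
import os
import tempfile
import time

def timed_writes(n, do_fsync):
    """Write n small records to a temp file, optionally fsync'ing
    after each write, and return the elapsed time in seconds."""
    fd, path = tempfile.mkstemp()
    try:
        start = time.time()
        for _ in range(n):
            os.write(fd, b"x" * 128)
            if do_fsync:
                os.fsync(fd)   # force the write to stable storage
        return time.time() - start
    finally:
        os.close(fd)
        os.remove(path)

n = 200
plain = timed_writes(n, do_fsync=False)
synced = timed_writes(n, do_fsync=True)
print(f"{n} writes: {plain:.3f}s unsynced, {synced:.3f}s fsync'ed")
```

On a typical spinning disk the fsync'ed run is dramatically slower, since each fsync waits for at least one platter rotation; that gap is a rough upper bound on the per-commit cost of running with fsync on.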

Michael.



In the pgsql-hackers list, by date:

Previous
From: Bruno Wolff III
Date:
Message: Re: Compile error in current cvs (~1230 CDT July 4)
Next
From: Tom Lane
Date:
Message: Proof-of-concept for initdb-time shared_buffers selection