I have a number of clients that retain large numbers of small transactions; this could easily reach 4 billion+ at some sites. Most rows would be 1 KB at most, some may exceed that, and the average would be between 512 bytes and 1 KB. Is this feasible with Postgres? thanks, Joshua
"Joshua Schmidlkofer" <menion@srci.iwpsd.org> writes:
> I have a number of clients that retain large numbers of small
> transactions; this could easily reach 4 billion+ at some sites. Most rows
> would be 1 KB at most, some may exceed that, and the average would be
> between 512 bytes and 1 KB.
This would be a problem at the moment: transaction IDs are 32-bit
counters, so they wrap around after roughly four billion transactions.
I'm expecting to see some sort of fix for it in 7.2, however. The
simplest fix would require a complete-database VACUUM at least once
every billion or so transactions; but with the nonintrusive VACUUM that
we're planning for 7.2, that doesn't seem overly onerous.
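
For illustration, assuming the planned 7.2 catalog support lands as
described (a per-database frozen-XID column, pg_database.datfrozenxid,
whose distance from the current transaction counter is reported by
age()), monitoring wraparound risk would look something like:

    -- Sketch only: relies on the 7.2-era datfrozenxid column and age().
    -- Shows how many transactions old each database's frozen XID is;
    -- schedule a VACUUM well before xid_age approaches ~1 billion.
    SELECT datname, age(datfrozenxid) AS xid_age
    FROM pg_database
    ORDER BY xid_age DESC;

    -- A plain database-wide VACUUM advances datfrozenxid,
    -- resetting the clock for that database:
    VACUUM;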
regards, tom lane