Re: High Frequency Inserts to Postgres Database vs Writing to a File

From: Jay Manni
Subject: Re: High Frequency Inserts to Postgres Database vs Writing to a File
Date:
Msg-id: 60B0F2124D07B942988329B5B7CA393D01E5B94187@mail2.FireEye.com
In response to: Re: High Frequency Inserts to Postgres Database vs Writing to a File (Craig Ringer <craig@postnewspapers.com.au>)
List: pgsql-performance
Thanks to all for the responses. Based on all the recommendations, I am going to try a batched commit approach, along
with data purging policies so that data storage does not grow beyond certain thresholds.
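
A minimal sketch of what that batching could look like (hypothetical "events" table and column names), paying the per-commit fsync cost once per batch instead of once per row:

    -- Hypothetical events table; group many rows into one transaction
    -- so the WAL flush happens once per COMMIT, not once per INSERT.
    BEGIN;
    INSERT INTO events (ts, payload) VALUES (now(), '...');
    INSERT INTO events (ts, payload) VALUES (now(), '...');
    -- ... a few hundred to a few thousand rows per batch ...
    COMMIT;

Multi-row VALUES lists or COPY would cut per-statement overhead further.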

- J

-----Original Message-----
From: Craig Ringer [mailto:craig@postnewspapers.com.au]
Sent: Wednesday, November 04, 2009 5:12 PM
To: Merlin Moncure
Cc: Jay Manni; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] High Frequency Inserts to Postgres Database vs Writing to a File

Merlin Moncure wrote:

> Postgres can handle multiple thousands of inserts/sec, but your hardware
> most likely can't handle multiple thousands of transactions/sec if fsync is on.

commit_delay or async commit should help a lot there.

http://www.postgresql.org/docs/8.3/static/wal-async-commit.html
http://www.postgresql.org/docs/8.3/static/runtime-config-wal.html
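
A minimal sketch of the async-commit route (this assumes losing the last fraction of a second of commits on a crash is acceptable; unlike fsync = off, database integrity is preserved either way):

    -- Per-session async commit: COMMIT returns before the WAL is flushed
    -- to disk. A crash can lose recently committed rows, but cannot
    -- corrupt the database.
    SET synchronous_commit = off;

    -- Alternatively, server-wide in postgresql.conf
    -- (commit_delay value is illustrative):
    --   synchronous_commit = off
    --   commit_delay = 10000   -- microseconds; lets concurrent commits share a WAL flush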

Please do *not* turn fsync off unless you want to lose your data.

> If you are bulk inserting 1000+ records/sec all day long, make sure
> you have provisioned enough storage for this (that's 86M records/day),

plus any index storage, room for dead tuples if you ever issue UPDATEs, etc.
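
To put rough numbers on it (assuming an illustrative 100 bytes per row):

    1,000 rows/s x 86,400 s/day = 86.4M rows/day
    86.4M rows/day x ~100 bytes = ~8.6 GB/day of heap data, before indexes and bloat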

--
Craig Ringer


