Re: Using Postgres to store high volume streams of sensor readings

From Alvaro Herrera
Subject Re: Using Postgres to store high volume streams of sensor readings
Date
Msg-id 20081122223232.GE3813@alvh.no-ip.org
Whole thread Raw
In response to Re: Using Postgres to store high volume streams of sensor readings  (Michal Szymanski <dyrex@poczta.onet.pl>)
Responses Re: Using Postgres to store high volume streams of sensor readings  ("Ciprian Dorin Craciun" <ciprian.craciun@gmail.com>)
List pgsql-general
> On 21 Lis, 13:50, ciprian.crac...@gmail.com ("Ciprian Dorin Craciun")
> wrote:

> >     What have I observed / tried:
> >     * I've tested without the primary key and the index, and the
> > results were the best for inserts (600k inserts / s), but the
> > readings worked extremely slowly (due to the lack of indexing);
> >     * with only the index (or only the primary key) the insert rate is
> > good at start (for the first 2 million readings), but then drops to
> > about 200 inserts / s;

I didn't read the thread so I don't know if this was suggested already:
bulk index creation is a lot faster than retail index inserts.  One
thing you could try is to do the inserts into an unindexed table, and
keep a separate query table that you periodically truncate, refill with
the contents of the insert table, and then index.  Two main problems:
1. querying during the truncate/refill/reindex process (you can solve it
by building a second table and "renaming it into place"); 2. the query
table is almost always somewhat out of date.
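A minimal SQL sketch of the "build and rename into place" variant, assuming hypothetical table names (readings_ingest for the unindexed insert target, readings_query for the indexed copy) and columns (sensor_id, ts, value); since DDL is transactional in Postgres, readers never see the query table half-built:

    BEGIN;
    -- build the replacement copy from the unindexed ingest table
    CREATE TABLE readings_query_new (LIKE readings_ingest);
    INSERT INTO readings_query_new SELECT * FROM readings_ingest;
    -- bulk index creation on the filled table, much faster than
    -- maintaining the index row by row during the inserts
    CREATE INDEX readings_query_new_idx
        ON readings_query_new (sensor_id, ts);
    -- swap the new copy into place
    DROP TABLE IF EXISTS readings_query;
    ALTER TABLE readings_query_new RENAME TO readings_query;
    ALTER INDEX readings_query_new_idx RENAME TO readings_query_idx;
    COMMIT;

Queries that start before the COMMIT block briefly on the rename, but they never observe a missing or partially indexed table.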

--
Alvaro Herrera                                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

In the pgsql-general list, by message date:

Previous
From: Alvaro Herrera
Date:
Message: Re: Using Postgres to store high volume streams of sensor readings
Next
From: "blackwater dev"
Date:
Message: date stamp on update?