Fwd: Using Postgres to store high volume streams of sensor readings

From: Ciprian Dorin Craciun
Subject: Fwd: Using Postgres to store high volume streams of sensor readings
Date:
Msg-id: 8e04b5820811220740t6b303e64x25edce0c39d4e707@mail.gmail.com
In reply to: Using Postgres to store high volume streams of sensor readings ("Ciprian Dorin Craciun" <ciprian.craciun@gmail.com>)
List: pgsql-general
(I'm adding the Postgres list to the discussion as well.)

On Fri, Nov 21, 2008 at 11:19 PM, Dann Corbit <DCorbit@connx.com> wrote:
> What is the schema for your table?
> If you are using copy rather than insert, 1K rows/sec for PostgreSQL seems very bad unless the table is extremely wide.

   The schema was posted at the beginning of the thread, but in short:
it is a table with 4 columns (client, sensor, timestamp, and value),
all being int4 (integer). There is only one (compound) index, on
client and sensor...
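
   For reference, here is a minimal sketch of that schema (the table
and index names below are illustrative; the exact DDL is in the
earlier message):

    CREATE TABLE sensor_readings (
        client    integer NOT NULL,
        sensor    integer NOT NULL,
        timestamp integer NOT NULL,
        value     integer NOT NULL
    );

    -- the single compound index mentioned above
    CREATE INDEX sensor_readings_client_sensor
        ON sensor_readings (client, sensor);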

   I guess the problem is from the index...
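
   If so, one thing worth trying (a sketch only, I haven't benchmarked
it on this workload) is to bulk load with COPY while the index is
dropped, and rebuild it once afterwards, so each row doesn't pay the
index-maintenance cost:

    -- drop the index, bulk load, then rebuild it in one pass
    DROP INDEX sensor_readings_client_sensor;

    COPY sensor_readings (client, sensor, timestamp, value)
        FROM '/path/to/readings.csv' WITH CSV;

    CREATE INDEX sensor_readings_client_sensor
        ON sensor_readings (client, sensor);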


> Memory-mapped database systems may be the answer to your need for speed.
> If you have a single inserting process, you can try FastDB, but unless you use a 64 bit operating system and compiler, you will be limited to a 2 GB file size.  FastDB is a single-writer, multiple-reader model.  See:
> http://www.garret.ru/databases.html
>
> Here is output from the fastdb test program testperf, when compiled in 64 bit mode (the table is ultra-simple, with only a string key and a string value, plus both a btree and a hashed index on the key):
> Elapsed time for inserting 1000000 record: 8 seconds
> Commit time: 1
> Elapsed time for 1000000 hash searches: 1 seconds
> Elapsed time for 1000000 index searches: 4 seconds
> Elapsed time for 10 sequential search through 1000000 records: 2 seconds
> Elapsed time for search with sorting 1000000 records: 3 seconds
> Elapsed time for deleting all 1000000 records: 0 seconds
>
> Here is a bigger set so you can get an idea about scaling:
>
> Elapsed time for inserting 10000000 record: 123 seconds
> Commit time: 13
> Elapsed time for 10000000 hash searches: 10 seconds
> Elapsed time for 10000000 index searches: 82 seconds
> Elapsed time for 10 sequential search through 10000000 records: 8 seconds
> Elapsed time for search with sorting 10000000 records: 41 seconds
> Elapsed time for deleting all 10000000 records: 4 seconds
>
> If you have a huge database, then FastDB may be problematic because you need free memory equal to the size of your database.
> E.g. a 100 GB database needs 100 GB of memory to operate at full speed.  In 4 GB allotments, at $10-$50/GB, 100 GB costs between $1000 and $5000.

   Unfortunately the database will (eventually) be too large to store
all of it in memory...

   For the moment, I don't think I'll be able to try FastDB... I'll
put it on my reminder list...


> MonetDB is worth a try, but I had trouble getting it to work properly on 64 bit Windows:
> http://monetdb.cwi.nl/

   I've heard of MonetDB -- it's from the same family as
Hypertable... Maybe I'll give it a try after I finish with SQLite...

   Ciprian Craciun.
