Re: Using Postgres to store high volume streams of sensor readings

From: Ciprian Dorin Craciun
Subject: Re: Using Postgres to store high volume streams of sensor readings
Date:
Msg-id: 8e04b5820811210851q6a289bf9w5e232aacf4c58a5a@mail.gmail.com
In response to: Re: Using Postgres to store high volume streams of sensor readings  (Rafael Martinez <r.m.guerrero@usit.uio.no>)
List: pgsql-general
    Thanks for your info! Please see below...


On Fri, Nov 21, 2008 at 4:14 PM, Rafael Martinez
<r.m.guerrero@usit.uio.no> wrote:
> Ciprian Dorin Craciun wrote:
> [............]
>>
>>     So what can I do / how could I optimize the use of Postgres for this usage?
>>
>
> Hello, here are some comments that will probably help you get
> more out of this test machine...
>
>>
>>     * test machine: Linux (Ubuntu 8.04 x64), IBM x3750, 2 x 500Gb,
>> SCSI, RAID 0, 8Gb RAM, 2 x Intel Xeon 5160 x 2 core (3Ghz, 4Mb L2);
>>     * Postgres version: 8.3.3;
>>
>
> (RAID 0 is never a good thing for databases unless you have another
> redundant system you can restore your data from, or you don't mind
> the downtime.)
>
> Putting the database transaction logs ($PGDATA/pg_xlog) on their own
> dedicated disk resource will probably increase write performance.

    Unfortunately this is a test machine shared with other projects,
and I can't change the disk setup (for now)... When I have a
dedicated machine I'll do this... For now, no...
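
    For future reference though, a minimal sketch of relocating the
transaction logs on 8.3 via a symlink (the /mnt/wal mount point is
just an assumed example for the dedicated disk):

    # stop the server, move pg_xlog to the dedicated disk, and
    # symlink it back into the data directory
    pg_ctl stop -D "$PGDATA"
    mv "$PGDATA/pg_xlog" /mnt/wal/pg_xlog
    ln -s /mnt/wal/pg_xlog "$PGDATA/pg_xlog"
    pg_ctl start -D "$PGDATA"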

>>> create index sds_benchmark_data__client_sensor__index on sds_benchmark_data (client, sensor);
>>
>
> You don't need this index if the primary key is (client, sensor, timestamp).

    Both the primary key and the index are listed here because I
switched between them while testing... (Almost the same behaviour
with only the primary key or with only the index)...
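
    To make the redundancy explicit, here is a sketch of the schema
(the column types are illustrative, not the exact ones from my test):

    create table sds_benchmark_data (
        client    varchar(32),
        sensor    varchar(32),
        timestamp bigint,
        value     integer,
        primary key (client, sensor, timestamp)
    );
    -- the implicit primary key index on (client, sensor, timestamp)
    -- already serves lookups on its leading columns (client) and
    -- (client, sensor), so this separate index adds nothing:
    -- create index sds_benchmark_data__client_sensor__index
    --     on sds_benchmark_data (client, sensor);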


>>> shared_buffers = 24MB
>
> I would increase this to 25% of your RAM: 2GB on the test machine (if
> it is a dedicated postgres server). It will help the read rate.
>
> You will probably have to increase kernel.shmmax and kernel.shmall in
> /etc/sysctl.conf (linux)

    Modified it.
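
    For the record, the relevant settings (the 2GB follows your 25%
suggestion; the shmmax/shmall values below are just an example, sized
a bit above the shared memory segment Postgres will request):

    # postgresql.conf -- 25% of the 8GB of RAM
    shared_buffers = 2GB

    # /etc/sysctl.conf -- shmmax is in bytes, shmall in 4kB pages
    kernel.shmmax = 2684354560
    kernel.shmall = 655360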


>>> fsync = off
>
> Do you have the results with this on?

    Doesn't help at all... I guess the problem is with the index building...
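
    One way to verify this hypothesis (just a sketch): bulk-load
without the secondary index and rebuild it at the end, then compare
the insert rates:

    -- drop the secondary index before the bulk load...
    drop index sds_benchmark_data__client_sensor__index;
    -- ... run the insert benchmark ...
    -- ... then rebuild the index in one pass, which is much cheaper
    -- than maintaining it row by row:
    create index sds_benchmark_data__client_sensor__index
        on sds_benchmark_data (client, sensor);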


>>> checkpoint_segments = 32
>
> I would increase this to 128-256 if you work with large write loads
> (several GB of data). $PGDATA/pg_xlog will use some extra disk space
> if you change this value.

    Updated it to 256.
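
    As a rough sizing check (each WAL segment is 16MB, and 8.3 keeps
at most about (2 + checkpoint_completion_target) * checkpoint_segments
+ 1 segment files around):

    # postgresql.conf
    checkpoint_segments = 256
    # with the default checkpoint_completion_target = 0.5 this allows
    # roughly (2.5 * 256 + 1) * 16MB, i.e. about 10GB in pg_xlog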


>>> effective_cache_size = 1024MB
>>
>
> 50% of your RAM: 4GB on the test machine (if it is a dedicated
> postgres server). It will probably help the read rate.

    Updated it to 4096MB.
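
    As far as I understand, this setting is only a planner hint: it
does not allocate any memory, it just tells the planner how much data
is likely to be cached by Postgres plus the OS.

    # postgresql.conf -- roughly 50% of the 8GB of RAM
    effective_cache_size = 4096MB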


> In addition, I will try to change these parameters also:
>
> wal_buffers = 64
> random_page_cost = 2.0

    Currently wal_buffers is 64kB; I've set it to 64MB. Is that right?

    random_page_cost was 4.0; I've decreased it to 2.0. Is that OK?
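
    Actually, if I read the documentation right, a bare number for
wal_buffers is interpreted in 8kB WAL blocks, so "wal_buffers = 64"
means 64 x 8kB = 512kB, not 64MB; an explicit unit avoids the
ambiguity:

    # postgresql.conf
    wal_buffers = 512kB      # i.e. 64 WAL blocks of 8kB each
    random_page_cost = 2.0   # makes index scans look cheaper to the planner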


> In general, faster and more disks in a RAID 1+0 / 0+1 setup will help
> write performance. autovacuum should be on.
>
> regards.
> --
>  Rafael Martinez, <r.m.guerrero@usit.uio.no>
>  Center for Information Technology Services
>  University of Oslo, Norway
>
>  PGP Public Key: http://folk.uio.no/rafael/


    So after the updates, the results were better, but still under 1k
inserts / second...

    Thanks again for your info!
