loading increase into huge table with 50.000.000 records

From: nuggets72@free.fr
Subject: loading increase into huge table with 50.000.000 records
Date:
Msg-id: 1153928087.44c78b975f3a7@imp3-g19.free.fr
Responses: Re: loading increase into huge table with 50.000.000 records
           Re: loading increase into huge table with 50.000.000 records
List: pgsql-performance
Hello,
Sorry for my poor English.

My problem:

I am running into a performance problem as the load increases.

Every week I do a massive update of 50,000,000 records and 2,000,000 inserts into a huge table (50,000,000+ records, ten fields, 12 GB on disk).

Current performance obtained: 120 records/s.
At the beginning, I got a much better speed: 1400 records/s.


CPU: dual Xeon 2.40 GHz (512 KB cache)
PostgreSQL version: 8.1.4
OS: Debian Linux, kernel 2.6.17-mm2
Disks: SCSI U320 drives on a U160 SCSI card, in software RAID 1
Memory: only 1 GB at this time.


My database contains fewer than ten tables, but the main table takes more than 12 GB on disk. This table has ten text columns and two date columns.

Only a few connections are used on this database.

I have tried several ideas:
- putting several thousand operations into one transaction (with BEGIN and COMMIT)
- modifying parameters in postgresql.conf (see the sketch below), such as:
    shared_buffers (several tests with 30000, 50000, 75000)
    fsync = off
    checkpoint_segments = 10 (several tests with 20 - 30)
    checkpoint_timeout = 1000 (tests with values from 30 to 1800)
    stats_start_collector = off

    Unfortunately, I can't put the pg_xlog files on a separate disk.


But I did not obtain a convincing result
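
For reference, here is a minimal sketch of what those settings look like in postgresql.conf; the values are simply ones I tested, not recommendations (and fsync = off trades crash safety for speed):

    # postgresql.conf (PostgreSQL 8.1) -- values as tested
    shared_buffers = 50000          # also tried 30000 and 75000
    fsync = off                     # testing only: unsafe on crash or power loss
    checkpoint_segments = 20        # also tried 10 and 30
    checkpoint_timeout = 1000       # seconds; also tried values between 30 and 1800
    stats_start_collector = off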



My program issues quite simple requests. For each record it does

UPDATE table set dat_update=current_date where id=XXXX ;

and, if no row is found, it does an

insert into table
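
To illustrate the pattern (the table name and the extra columns below are placeholders; only id and dat_update are the real column names), with a few thousand of these grouped into one transaction as mentioned above:

    BEGIN;
    -- repeated a few thousand times per transaction
    UPDATE my_table SET dat_update = current_date WHERE id = 12345;
    -- if the client sees that 0 rows were updated, it falls back to
    INSERT INTO my_table (id, dat_update) VALUES (12345, current_date);
    -- (the real INSERT also fills in the other text columns)
    COMMIT;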


My sysadmin tells me that disk reads and writes are not the problem (checked with iostat).
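
For example, the check was roughly of this kind (iostat from the sysstat package, extended device statistics every 5 seconds, watching the await and %util columns):

    iostat -x 5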


Do you have any ideas on how to improve performance for my problem?

Thanks.

Larry.
