Load experimentation

Hello All,

I'm in the process of loading a massive amount of data (500 GB). After
some initial timings, I'm looking at 260 hours to load the entire 500 GB.
Nearly 11 days seems like an awfully long time, so I'm searching for ways
to speed this up. The load is happening in the Amazon cloud (EC2), on an
m1.large instance:
- 7.5 GB memory
- 4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units each)
- 64-bit platform
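
(For scale: 260 hours for 500 GB works out to 512,000 MB / 936,000 s,
i.e. roughly 0.55 MB/s sustained, which is far below what even this
instance's disks should manage, so I suspect per-row overhead rather
than raw I/O.)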


So far I have modified my postgresql.conf file (PostgreSQL 8.1.3). The
changes are as follows (effective sizes noted in comments):

shared_buffers = 786432          # 8 kB pages -> 6 GB
work_mem = 10240                 # kB -> 10 MB
maintenance_work_mem = 6291456   # kB -> 6 GB
max_fsm_pages = 3000000          # free-space map tracks 3M pages
wal_buffers = 2048               # 8 kB pages -> 16 MB
checkpoint_segments = 200        # 16 MB each -> checkpoint every ~3.2 GB of WAL
checkpoint_timeout = 300         # seconds
checkpoint_warning = 30          # seconds
autovacuum = off                 # no autovacuum competing with the load
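
For what it's worth, the main pattern I've been reading about for cutting
load time is a single COPY per table, with index builds deferred until the
data is in; the thinking is that this replaces per-row bookkeeping with a
few sequential passes. A minimal sketch of that pattern (the table name,
columns, and file path are placeholders, not my real schema):

BEGIN;

-- One streaming COPY per table avoids the per-statement parse and
-- plan overhead of millions of individual INSERTs.
COPY mytable (id, payload)
    FROM '/data/mytable.csv' WITH CSV;

COMMIT;

-- Build indexes after the load so each index is written once,
-- instead of being maintained row by row during the COPY.
CREATE INDEX mytable_id_idx ON mytable (id);

-- Refresh planner statistics now that the table is populated.
ANALYZE mytable;

Is that the right general shape for a load this size?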


There are a variety of instance types available in the Amazon cloud
(http://aws.amazon.com/ec2/instance-types/), including high-memory and
high-CPU. High-memory instance types come with 34 GB or 68 GB of memory.
High-CPU instance types have far less memory (7 GB max) but up to 8
virtual cores. I am more than willing to change to any of the other
instance types.

Also, there is nothing else happening on the loading server. It is
completely dedicated to the load.

Any advice would be greatly appreciated.

Thanks,

Ben
