Re: hundreds of millions row dBs

From: Pierre-Frédéric Caillaud
Subject: Re: hundreds of millions row dBs
Date:
Msg-id: opsj3sk0f7cq72hf@musicbox
In reply to: Re: hundreds of millions row dBs (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-general
    To speed up the load:
    - make checkpoints less frequent (tweak the checkpoint interval and
related parameters in the config)
    - disable fsync (not sure if it really helps)
    - put the source data, the database tables, and the log on three
physically different disks
    - put the temporary directory on a different disk too, or on a ramdisk
    - gunzip while restoring, to read less data from the disk (see the
sketch after this list)
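    A minimal sketch of the config and restore tips above, assuming a
gzipped pg_dump file named dump.sql.gz and a database mydb (both names
made up); the values are illustrative starting points, not tuned
recommendations:

        # postgresql.conf, for the duration of the load only (revert after):
        #   fsync = off                # risky: a crash mid-load can corrupt the cluster
        #   checkpoint_segments = 64   # fewer, larger checkpoints
        #                              # (max_wal_size on newer versions)

        # decompress on the fly, so only the compressed file is read from disk:
        gunzip -c dump.sql.gz | psql mydb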



> "Dann Corbit" <DCorbit@connx.com> writes:
>> Here is an instance where a really big ram disk might be handy.
>> You could create a database on a big ram disk and load it, then build
>> the indexes.
>> Then shut down the database and move it to hard disk.
>
> Actually, if you have a RAM disk, just change the
> $PGDATA/base/nnn/pgsql_tmp
> subdirectory into a symlink to some temp directory on the RAM disk.
> Should get you pretty much all the win with no need to move stuff around
> afterwards.
>
> You have to be sure the RAM disk is bigger than your biggest index
> though.
>
>             regards, tom lane
>
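    A sketch of the symlink trick Tom describes above, assuming the server
is stopped, a ramdisk is already mounted at /mnt/ramdisk (made-up mount
point), and nnn is your database's OID:

        mkdir -p /mnt/ramdisk/pgsql_tmp
        cd $PGDATA/base/nnn
        mv pgsql_tmp pgsql_tmp.old 2>/dev/null   # keep the old dir, if any
        ln -s /mnt/ramdisk/pgsql_tmp pgsql_tmp   # temp sort files now go to RAM

    As Tom says, the ramdisk must be bigger than your biggest index, since
each index build spills its sort files into pgsql_tmp.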


