Re: Load distributed checkpoint

From: Takayuki Tsunakawa
Subject: Re: Load distributed checkpoint
Date:
Msg-id: 020801c72591$50b8c340$19527c0a@OPERAO
In reply to: Re: Load distributed checkpoint  ("Zeugswetter Andreas ADI SD" <ZeugswetterA@spardat.at>)
Responses: Re: Load distributed checkpoint  (ITAGAKI Takahiro <itagaki.takahiro@oss.ntt.co.jp>)
           Re: Load distributed checkpoint  ("Zeugswetter Andreas ADI SD" <ZeugswetterA@spardat.at>)
List: pgsql-hackers
Hello, Itagaki-san,

Thank you for an interesting piece of information.

From: "ITAGAKI Takahiro" <itagaki.takahiro@oss.ntt.co.jp>
> If you use linux, try the following settings:
>  1. Decrease /proc/sys/vm/dirty_ratio and dirty_background_ratio.
>  2. Increase wal_buffers to reduce WAL flushing.
>  3. Set wal_sync_method to open_sync; O_SYNC is faster than fsync().
>  4. Separate data and WAL files into different partitions or disks.
>
> I suppose 1 is important for you, because the kernel will not write
> dirty buffers until 10% of buffers become dirty in the default
> settings.  You have large memory (8GB) but a small data set (800MB),
> so the kernel almost never writes buffers except in checkpoints.
> The accumulated dirty buffers are written in a burst by fsync().
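
For those who want to try the same settings, they can be applied
roughly like this (a sketch for my environment; paths and commands may
differ on your system):

    # 1. Lower the dirty-page thresholds (as root)
    echo 1 > /proc/sys/vm/dirty_background_ratio
    echo 4 > /proc/sys/vm/dirty_ratio

    # 2 and 3. In postgresql.conf:
    #   wal_buffers = 1024kB          # fewer WAL flushes
    #   wal_sync_method = open_sync   # O_SYNC instead of fdatasync()
    # wal_buffers takes effect only after a server restart:
    pg_ctl -D $PGDATA restart

    # 4. Put WAL on its own disk, e.g. by symlinking pg_xlog elsewhere.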

I'll share the results of this tuning for people who don't have
experience with it.  The numbers shown below are the tps when running
"pgbench -c16 -t100 postgres" five times in succession.

(1) Default case (shown again for comparison and as a reminder)
The bgwriter_* and checkpoint_* parameters are set to their defaults.
wal_buffers and wal_sync_method are also set to their defaults (64kB
and fdatasync, respectively).
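
In postgresql.conf terms that means the following (the exact defaults
may vary by version):

    # case (1): everything left at its default
    wal_buffers = 64kB
    wal_sync_method = fdatasync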

235  80  226  77  240


(2) Default + WAL 1MB case
The configuration is the same as case (1) except that wal_buffers is
set to 1024kB.
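
That is, the only change from case (1) in postgresql.conf is:

    wal_buffers = 1024kB    # up from the 64kB default; needs a restart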

302  328  82  330  85

This is a bigger improvement than I expected.


(3) Default + wal_sync_method=open_sync case
The configuration is the same as case (1) except that wal_sync_method
is set to open_sync.
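
Here the only change from case (1) in postgresql.conf is:

    wal_sync_method = open_sync   # WAL written with O_SYNC instead of fdatasync()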

162  67  176  67  164

Much worse than case (2).  Do you know the reason?


(4) (2)+(3) case

322  350  85  321  84

This is good, too.


(5) (4) + /proc/sys/vm/dirty* tuning
dirty_background_ratio is changed from 10 to 1, and dirty_ratio is
changed from 40 to 4.
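
The commands, for reference (run as root; the values revert at reboot
unless added to /etc/sysctl.conf):

    echo 1 > /proc/sys/vm/dirty_background_ratio   # was 10
    echo 4 > /proc/sys/vm/dirty_ratio              # was 40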

308  349  84  349  84

The kernel cache tuning doesn't appear to bring any performance
improvement in my environment.  Does the kernel still wait too long
before it starts flushing dirty buffers because the cache is so large?
If so, the ever-increasing amounts of RAM may cause this trouble more
frequently in the near future.  Do the dirty_*_ratio parameters accept
values less than 1?

BTW, in case (1), the best response time of a transaction was 6
milliseconds.  On the other hand, the worst response time was 13
seconds.
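
(Per-transaction response times like these can be extracted with
pgbench's -l option; a sketch, assuming the log has one line per
transaction with the latency in microseconds in the third field:

    pgbench -c16 -t100 -l postgres
    # pgbench_log.* lines: client_id txn_no latency_us file_no epoch_sec epoch_us
    awk 'NR==1 || $3 < min {min = $3} $3 > max {max = $3}
         END {printf "best %.0f ms, worst %.1f s\n", min/1000, max/1000000}' \
        pgbench_log.*
)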


> We would be happy if we could be freed from this difficult
> combination of tuning.  If you have *ideas for improvements*, please
> suggest them.  I think we've already understood the *problem itself*.

I agree with you.  Let's make the ideas more concrete by doing some
experiments.




