Decreasing BLKSZ

From: Marc Morin
Subject: Decreasing BLKSZ
Date:
Msg-id: 2BCEB9A37A4D354AA276774EE13FB8C201130680@mailserver.sandvine.com
Replies: Re: Decreasing BLKSZ
List: pgsql-performance
Our application has a number of inserters posting rows of network statistics into a database. This is occurring continuously. The following is an example of a stats table (simplified, but it maintains the key concepts).
 
 
CREATE TABLE stats
(
      logtime timestamptz,
      key     int,
      stat1   bigint,
      stat2   bigint,
      stat3   bigint,
      PRIMARY KEY (key,logtime)
);
CREATE INDEX x ON stats(logtime);
 
There are on the order of 1M unique values for "key", and a new row will be inserted for each key value roughly every 15 minutes. These rows are divided up among a number of different inserting elements, but that isn't relevant here.
 
The problem is, the insert pattern has low correlation with the (key,logtime) index. In this case, we would need >1M blocks in our shared_buffers space to prevent a read-modify-write type of pattern from happening during the inserts (given a large enough database).
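A back-of-envelope model of why the working set approaches one block per key. All numbers beyond the 1M-key and 15-minute figures from this post are assumptions (index entry size, retention window): once each key has accumulated enough history that its (key, logtime) entries span at least one B-tree leaf of their own, every insert for key k dirties a leaf block specific to k.

```python
import math

# Assumed parameters -- only NUM_KEYS and the 15-minute sample interval
# come from the post; the rest are rough estimates for illustration.
NUM_KEYS = 1_000_000
BLKSZ = 8192                       # default PostgreSQL block size, bytes
ENTRY_BYTES = 30                   # guessed (key, logtime) entry + overhead
RETENTION_DAYS = 30                # guessed history kept in the index

entries_per_leaf = BLKSZ // ENTRY_BYTES          # ~273 entries per 8 KB leaf
rows_per_key = (24 * 60 // 15) * RETENTION_DAYS  # 96 samples/day * 30 days

# Leaves owned by each key once history accumulates; >= 1 means each
# insert lands in a leaf block that no other key shares.
leaves_per_key = math.ceil(rows_per_key / entries_per_leaf)

hot_blocks = NUM_KEYS                            # one hot leaf per key
working_set_gb = hot_blocks * BLKSZ / 2**30
print(leaves_per_key, round(working_set_gb, 2))  # -> 11 leaves/key, ~7.63 GB
```

Under these assumptions the hot set is ~7.6 GB of index leaves alone, which is consistent with the ">1M blocks" estimate above.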
 
I'm wondering about lowering the BLKSZ value so that the total working set of blocks required can be maintained in my shared buffers. Our database server only has 8G of memory, and we'd likely need to reduce BLKSZ to 512....
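The arithmetic behind that 512-byte figure, under the same one-hot-block-per-key assumption as above (an illustration, not a measurement): shrinking the block shrinks the cost of each key's hot block proportionally.

```python
# Working set if each of 1M keys pins one hot block in shared_buffers,
# compared across block sizes. The one-block-per-key model is an
# assumption carried over from the estimate above.
NUM_KEYS = 1_000_000

for blksz in (8192, 512):
    working_set_mb = NUM_KEYS * blksz / 2**20
    print(f"{blksz:>5} B blocks -> {working_set_mb:.1f} MB working set")
# 8192 B blocks -> 7812.5 MB working set (barely fits in 8 GB)
#  512 B blocks ->  488.3 MB working set (fits comfortably)
```

Note that BLKSZ is a compile-time setting, so changing it requires rebuilding the server and re-initializing the cluster; whether 512 bytes is even an accepted value should be checked against the build documentation for the version in use.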
 
Any comments on other effects or gotchas from lowering BLKSZ? Currently, our database is thrashing its cache of blocks, so we're getting only ~100 inserts/second; every insert results in an evict-read-modify operation.
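A rough consistency check on that ~100 inserts/second figure, assuming (hypothetically, the post doesn't state the storage hardware) a single commodity disk with ~10 ms random access time: if every insert forces one random block read before it can modify the page, throughput is capped near the reciprocal of the access latency.

```python
# Assumed random access latency for a single spinning disk; not stated
# in the post, used only to show the observed rate is seek-bound.
seek_ms = 10.0

max_inserts_per_sec = 1000.0 / seek_ms
print(max_inserts_per_sec)   # -> 100.0, matching the observed ~100/s
```

That match suggests the bottleneck really is one random read per insert, which is exactly what keeping the working set resident would eliminate.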
 
 
Ideally, I'd like to keep the entire working set of blocks in memory across insert periods so that the I/O looks more like full-block writes....
 
Thanks
Marc
 
 

In the pgsql-performance list, by date sent:

Previous
From: yoav x
Date:
Message: Re: PostgreSQL and sql-bench
Next
From: Markus Schaber
Date:
Message: Re: Decreasing BLKSZ