Re: Simple (hopefully) throughput question?

From: Marti Raudsepp
Subject: Re: Simple (hopefully) throughput question?
Date:
Msg-id: AANLkTik9sP6cNubT1VVW+qEJ2cRxz7TEEP96sYZMbRSX@mail.gmail.com
In reply to: Simple (hopefully) throughput question?  (Nick Matheson <Nick.D.Matheson@noaa.gov>)
Responses: Re: Simple (hopefully) throughput question?  (Nick Matheson <Nick.D.Matheson@noaa.gov>)
List: pgsql-performance
Just some ideas that went through my mind when reading your post.

On Wed, Nov 3, 2010 at 17:52, Nick Matheson <Nick.D.Matheson@noaa.gov> wrote:
> than observed raw disk reads (5 MB/s versus 100 MB/s).  Part of this is
> due to the storage overhead we have observed in Postgres.  In the
> example below, it takes 1 GB to store 350 MB of nominal data.

PostgreSQL 8.3 and later have 23 bytes of header overhead per row (padded
to 24), plus a line pointer, page-level overhead and internal
fragmentation. You can't do anything about the per-row overhead, but you
can recompile the server with a larger page size to reduce the per-page
overhead.
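
The block size is a compile-time option, so this means a source build and
a dump/reload of your data; if I remember right, the switch takes the page
size in kilobytes, something like:

    ./configure --with-blocksize=32    # 32 kB pages instead of the default 8 kB
    make && make install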

> Is there any way using stored procedures (maybe C code that calls
> SPI directly) or some other approach to get close to the expected 35
> MB/s doing these bulk reads?

Perhaps a simpler alternative would be to write your own aggregate
function that takes your four columns as arguments.

If you write this aggregate function in C, it should have performance
similar to the sum() query.
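
A rough sketch of what the transition function could look like (untested;
the sum4 name is made up and I'm assuming your four columns are float8,
so adjust the types to match your table):

    #include "postgres.h"
    #include "fmgr.h"

    PG_MODULE_MAGIC;

    PG_FUNCTION_INFO_V1(sum4_transfn);

    /*
     * Transition function for a hypothetical
     * sum4(float8, float8, float8, float8) aggregate: adds the four
     * column values of the current row to the running total passed
     * in as argument 0.
     */
    Datum
    sum4_transfn(PG_FUNCTION_ARGS)
    {
        float8 state = PG_ARGISNULL(0) ? 0.0 : PG_GETARG_FLOAT8(0);

        if (!PG_ARGISNULL(1))
            state += PG_GETARG_FLOAT8(1);
        if (!PG_ARGISNULL(2))
            state += PG_GETARG_FLOAT8(2);
        if (!PG_ARGISNULL(3))
            state += PG_GETARG_FLOAT8(3);
        if (!PG_ARGISNULL(4))
            state += PG_GETARG_FLOAT8(4);

        PG_RETURN_FLOAT8(state);
    }

    /*
     * Registered roughly like this:
     *
     *   CREATE FUNCTION sum4_transfn(float8, float8, float8, float8, float8)
     *       RETURNS float8
     *       AS 'MODULE_PATHNAME', 'sum4_transfn'
     *       LANGUAGE C IMMUTABLE;
     *
     *   CREATE AGGREGATE sum4(float8, float8, float8, float8) (
     *       sfunc = sum4_transfn,
     *       stype = float8,
     *       initcond = '0'
     *   );
     */

You would then run something like SELECT sum4(a, b, c, d) FROM yourtable;
the table is scanned once inside the server and only the final value is
sent to the client, instead of shipping every row over the wire.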

Regards,
Marti
