Re: Simple (hopefully) throughput question?

From: Pierre C
Subject: Re: Simple (hopefully) throughput question?
Date:
Msg-id: op.vlowvbdreorkce@apollo13
In response to: Re: Simple (hopefully) throughput question?  (Nick Matheson <Nick.D.Matheson@noaa.gov>)
List: pgsql-performance
On Thu, 04 Nov 2010 15:42:08 +0100, Nick Matheson
<Nick.D.Matheson@noaa.gov> wrote:
> I think your comments really get at what our working hypothesis was, but
> given that our experience is limited compared to you all here on the
> mailing lists we really wanted to make sure we weren't missing any
> alternatives. Also the writing of custom aggregators will likely
> leverage any improvements we make to our storage throughput.

Quick test: SELECT sum(x) FROM a table with 1 INT column, 3M rows, cached
=> 244 MB/s
=> 6.7 M rows/s

The same test on MySQL (with the PostgreSQL numbers for comparison):

           size     SELECT sum(x) (cached)
postgres   107 MB   0.44 s
myisam      20 MB   0.42 s
innodb      88 MB   1.98 s

As you can see, even though MyISAM is much smaller (no transaction data to
store!), its aggregate performance isn't any better, and for InnoDB it is
much worse.

Even though pg's per-row header is large, seq scan / aggregate performance
is very good.

You can get performance in this ballpark by writing a custom aggregate in
C; it isn't very difficult, and the pg source code is clean and full of
insightful comments. A minimal sketch follows the pointers below.

- take a look at how contrib/intagg works
- http://www.postgresql.org/files/documentation/books/aw_pgsql/node168.html
- and the pg manual of course
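
For illustration, here is a minimal sketch of what the C side can look like:
a transition function that adds int4 values into an int8 running total. The
function, module and aggregate names (int4_sum_accum, mysum) are made up for
this example, and the SQL needed to register the aggregate is only sketched
in the trailing comment.

/* mysum.c - sketch of a custom aggregate transition function in C.
   Build against the server headers, e.g. with PGXS. */
#include "postgres.h"
#include "fmgr.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(int4_sum_accum);

/* Transition function: add the current row's int4 value to the int8 state. */
Datum
int4_sum_accum(PG_FUNCTION_ARGS)
{
    int64   state = PG_GETARG_INT64(0);   /* running total so far */
    int32   value = PG_GETARG_INT32(1);   /* value from the current row */

    PG_RETURN_INT64(state + (int64) value);
}

/* SQL side (names are illustrative):

   CREATE FUNCTION int4_sum_accum(int8, int4) RETURNS int8
       AS 'mysum', 'int4_sum_accum' LANGUAGE C STRICT;

   CREATE AGGREGATE mysum(int4) (
       sfunc    = int4_sum_accum,
       stype    = int8,
       initcond = '0'
   );

   Declaring the transition function STRICT lets the executor skip NULL
   inputs, so the C code doesn't have to handle them.
*/

Once compiled and installed, SELECT mysum(x) FROM yourtable behaves like the
built-in sum(), and the transition function is the place to plug in whatever
custom accumulation logic you actually need.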
