Re: 500 requests per second

From: Merlin Moncure
Subject: Re: 500 requests per second
Date:
Msg-id: b42b73150705211250w61f9737ele7c498d8e1517ed3@mail.gmail.com
In reply to: 500 requests per second  (Tarhon-Onu Victor <mituc@iasi.rdsnet.ro>)
Responses: Re: 500 requests per second  (PFC <lists@peufeu.com>)
Re: 500 requests per second  ("Jim C. Nasby" <decibel@decibel.org>)
Re: 500 requests per second  (Dave Cramer <pg@fastcrypt.com>)
List: pgsql-performance
On 5/12/07, Tarhon-Onu Victor <mituc@iasi.rdsnet.ro> wrote:
>
>         Hi guys,
>
>         I'm looking for a database+hardware solution which should be able
> to handle up to 500 requests per second. The requests will consist in:
>         - single row updates in indexed tables (the WHERE clauses will use
> the index(es), the updated column(s) will not be indexed);
>         - inserts in the same kind of tables;
>         - selects with approximately the same WHERE clause as the update
> statements will use.
>         So nothing very special about these requests, only about the
> throughput.
>
>         Can anyone give me an idea about the hardware requirements, type
> of
> clustering (at postgres level or OS level), and eventually about the OS
> (ideally should be Linux) which I could use to get something like this in
> place?

I work on a system about like you describe....400tps constant....24/7.
Major challenges are routine maintenance and locking.  Autovacuum is
your friend, but you will need to schedule a full vacuum once in a
while because of transaction ID (XID) wraparound.  If you let
autovacuum do this, none of your other tables get vacuumed until it
completes....heh!
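The full-database vacuum mentioned above guards against XID wraparound. A
minimal sketch of how one might watch for it (the queries are illustrative,
and the table name "hits" is a hypothetical stand-in for one of your busy
tables):

```sql
-- Check how close each database is to XID wraparound; schedule a
-- database-wide VACUUM well before the age gets anywhere near 2 billion.
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;

-- Vacuum a heavily-updated table explicitly during a quiet window
-- instead of waiting for autovacuum to get around to it.
VACUUM ANALYZE hits;
```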

If you lock the wrong table, transactions will accumulate rapidly and
the system will grind to a halt rather quickly (this can be mitigated
somewhat by smart code on the client).

Other general advice:
* reserve plenty of space for WAL and keep the WAL volume separate from
the data volume...during a long-running transaction WAL files will
accumulate rapidly and will panic the server if it runs out of space.
* set a reasonable statement_timeout
* back up with PITR.  pg_dump is a headache on extremely busy servers.
* get a good I/O system for your box.  Start with a 6-disk RAID 10 and
go from there.
* spend some time reading about bgwriter settings, commit_delay, etc.
* keep an eye out for PostgreSQL HOT (heap-only tuples, hopefully
coming in 8.3) and make allowances for it in your design if possible.
* normalize your database and think of vacuum as a dangerous enemy.
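Several of the points above map to postgresql.conf settings. A sketch for
an 8.2-era server; every value here is an illustrative assumption to be
tuned against your own workload, not a recommendation from the original
post:

```
# Kill runaway queries instead of letting them hold locks (ms):
statement_timeout = 60000

# More WAL headroom; pair with a dedicated WAL volume:
checkpoint_segments = 32

# Group commits under heavy concurrent write load (microseconds):
commit_delay = 10
commit_siblings = 5

# Background writer pacing, so checkpoints hurt less:
bgwriter_delay = 200
```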

good luck! :-)

merlin
