Re: postgres performance tunning

From: Marti Raudsepp
Subject: Re: postgres performance tunning
Date:
Msg-id AANLkTi=E3GOVA+eZEp88T7fhYUWrfJZvg0DA_5qzyBpF@mail.gmail.com
In response to: postgres performance tunning  (selvi88 <selvi.dct@gmail.com>)
List: pgsql-performance
On Thu, Dec 16, 2010 at 14:33, selvi88 <selvi.dct@gmail.com> wrote:
>        I have a requirement for running more that 15000 queries per second.
> Can you please tell what all are the postgres parameters needs to be changed
> to achieve this.

You haven't told us anything about what sort of queries they are or
what you're trying to do. PostgreSQL is not the solution to all
database problems. If all you have is a dual-core machine, other
software may well make better use of the available hardware.

First of all, if they're mostly read-only queries, you should put a
caching layer (like memcached) in front of PostgreSQL. You can also
use replication to spread the load across multiple machines (but you
will see some latency until updates fully propagate to the slaves).
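
For illustration only, here is a rough cache-aside sketch in Python.
It assumes the pymemcache and psycopg2 libraries and a made-up "users"
table and key scheme; adapt it to your own schema:

    # Cache-aside sketch: check memcached first, fall back to PostgreSQL,
    # then populate the cache with a short TTL. Table and key names are
    # made up for the example.
    import json
    import psycopg2
    from pymemcache.client.base import Client

    cache = Client(("127.0.0.1", 11211))
    conn = psycopg2.connect("dbname=mydb")
    conn.autocommit = True  # plain lookups; avoid long-lived idle transactions

    def get_user(user_id):
        key = "user:%d" % user_id
        cached = cache.get(key)
        if cached is not None:
            return json.loads(cached)       # cache hit, no database round trip
        with conn.cursor() as cur:
            cur.execute("SELECT name, email FROM users WHERE id = %s", (user_id,))
            row = cur.fetchone()
        if row is None:
            return None
        user = {"name": row[0], "email": row[1]}
        # 60-second TTL: reads may be up to a minute stale.
        cache.set(key, json.dumps(user), expire=60)
        return user

Even a modest hit rate keeps a large share of those 15000 queries per
second off the database entirely.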

If they're write queries, in-memory databases (like Redis) or disk
databases specifically optimized for writes (like Cassandra) might be
more applicable.

Alternatively, if you can tolerate some latency, use message queuing
middleware like RabbitMQ to accumulate updates into larger batches and
apply them to PostgreSQL in bulk.
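
Again only as a sketch (assuming pika and psycopg2, with a made-up
"updates" queue, "events" table and message format), the consumer side
could drain the queue in batches and apply one multi-row INSERT per
batch:

    # Batching sketch: drain up to BATCH messages from RabbitMQ, write them
    # to PostgreSQL as one multi-row INSERT, then ack the whole batch.
    # Queue name, table and message format are made up for the example.
    import json
    import time
    import pika
    import psycopg2
    from psycopg2.extras import execute_values

    BATCH = 1000
    conn = psycopg2.connect("dbname=mydb")
    mq = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = mq.channel()
    channel.queue_declare(queue="updates", durable=True)

    while True:
        rows, last_tag = [], None
        for _ in range(BATCH):
            method, _props, body = channel.basic_get(queue="updates", auto_ack=False)
            if method is None:              # queue is drained for now
                break
            msg = json.loads(body)
            rows.append((msg["id"], msg["value"]))
            last_tag = method.delivery_tag
        if not rows:
            time.sleep(1)
            continue
        with conn.cursor() as cur:
            execute_values(cur, "INSERT INTO events (id, value) VALUES %s", rows)
        conn.commit()
        channel.basic_ack(delivery_tag=last_tag, multiple=True)  # ack the batch

The tradeoff is the latency mentioned above, plus the usual care to ack
messages only after the commit has succeeded.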

As for optimizing PostgreSQL itself, if you have high connection churn
then you will need connection pooling middleware in front -- such as
pgbouncer or pgpool -- though avoiding reconnections in the
application is an even better idea. Also, use prepared queries to
avoid the parsing overhead for every query.
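
As a sketch of the prepared-query part (again Python with psycopg2;
the "users" table is made up), you PREPARE once per connection and
afterwards only send EXECUTE:

    # Server-side prepared statement: the query is parsed and planned once
    # per session; each call afterwards only sends EXECUTE with parameters.
    # Table name is made up for the example.
    import psycopg2

    conn = psycopg2.connect("dbname=mydb")
    conn.autocommit = True
    cur = conn.cursor()

    # One-time setup on this connection.
    cur.execute("PREPARE get_user (int) AS "
                "SELECT name, email FROM users WHERE id = $1")

    def get_user(user_id):
        cur.execute("EXECUTE get_user (%s)", (user_id,))
        return cur.fetchone()

Note that prepared statements are per-session, so they don't mix well
with pgbouncer's transaction pooling mode; session pooling (or
re-preparing after connect) avoids that problem.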

Obviously all of these choices involve tradeoffs and caveats, in terms
of safety, consistency, latency and application complexity.

Regards,
Marti
