Discussion: How to Speed up Insert from Multiple Connections

How to Speed up Insert from Multiple Connections

From:
Adarsh Sharma
Date:
Dear all,

I have researched PostgreSQL performance tuning a lot and found some
parameters to increase SELECT performance in PostgreSQL.
By increasing shared_buffers, effective_cache_size, work_mem,
maintenance_work_mem, etc., we can improve the performance of SELECT queries.
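
For reference, this kind of read-oriented tuning boils down to a
postgresql.conf fragment along these lines (the values below are
placeholders for illustration only, not the actual settings on my server):

    # postgresql.conf -- placeholder values for illustration only
    shared_buffers = 2GB            # roughly 25% of RAM on a dedicated 8GB box
    effective_cache_size = 6GB      # rough estimate of OS + PostgreSQL cache
    work_mem = 32MB                 # per sort/hash operation, per backend
    maintenance_work_mem = 512MB    # for VACUUM, CREATE INDEX, etc.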

But in my application, about 200 connections are made to the DB server
and inserts into 2 tables occur,
and it takes hours to complete.

I understand the checkpoint_segments variable and want to know whether
there are any more ways to increase the write performance.


Thanks

Re: How to Speed up Insert from Multiple Connections

From:
"Kevin Grittner"
Date:
Adarsh Sharma <adarsh.sharma@orkash.com> wrote:

> By increasing shared_buffers, effective_cache_size, work_mem,
> maintenance_work_mem, etc., we can improve the performance of
> SELECT queries.
>
> But in my application, about 200 connections are made to the DB
> server and inserts into 2 tables occur,
> and it takes hours to complete.

Unless you have 100 cores, 200 connections is counter-productive.
You should probably be looking at a connection pooler which can
route the requests of that many clients through a connection pool
limited to a number of database connections somewhere between two
and three times the number of actual cores.  Both throughput and
response time will probably improve dramatically.
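
One way to do that, purely as an illustration, is a transaction-pooling
setup such as PgBouncer (one popular option, not the only one); a minimal
sketch, assuming a hypothetical database named mydb and roughly 8 real
cores, might look like:

    ; pgbouncer.ini -- illustrative sketch; "mydb", the paths and the
    ; pool size are placeholders, not values taken from this thread
    [databases]
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction
    ; with 8 real cores, 2-3x cores suggests a pool of roughly 16-24
    default_pool_size = 20
    ; all 200 application clients can still connect; they just share
    ; the 20 server connections
    max_client_conn = 200

The application then connects to port 6432 instead of 5432 and needs no
other changes for simple insert workloads.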

The other thing is that for good performance with writes, you should
be using a hardware RAID controller with battery-backed cache,
configured for write-back.  You should also try to group many
writes into a single database transaction where that is feasible,
particularly when the writes are related in such a way that you
wouldn't want to see some of them in the database without others.
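
As a sketch of what that grouping looks like, with hypothetical table
and column names (your real schema isn't shown in this thread):

    -- illustrative only; table and column names are placeholders
    BEGIN;
    INSERT INTO t1 (doc_id, content)    VALUES (1, 'first document');
    INSERT INTO t1 (doc_id, content)    VALUES (2, 'second document');
    INSERT INTO t2 (doc_id, crawled_at) VALUES (1, now());
    INSERT INTO t2 (doc_id, crawled_at) VALUES (2, now());
    COMMIT;
    -- one WAL flush at COMMIT instead of one per autocommitted
    -- statement, and either all of these rows appear or none do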

> I understand the checkpoint_segments variable and want to know
> whether there are any more ways to increase the write performance.

One obvious omission from your list is wal_buffers, which should
almost always be set to 16MB.  If you can afford to lose some
transactions after an apparently successful commit, you could look
at turning off synchronous_commit.
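
Pulled together as a postgresql.conf fragment, with the caveat that only
the 16MB figure for wal_buffers comes from the advice above; the
checkpoint_segments value is just an example, and synchronous_commit
should stay on unless the data-loss trade-off is acceptable:

    # postgresql.conf -- illustrative write-side settings
    wal_buffers = 16MB            # as suggested above
    checkpoint_segments = 32      # example value; the default of 3 is low
                                  # for write-heavy workloads
    synchronous_commit = off      # only if losing the most recent commits
                                  # after a crash is acceptable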

If you don't mind losing the entire database on a crash, there are
lots of other settings you could use, which are collectively often
referred to as "DBAs running with scissors."  Most people don't want
to do that, but there are some cases where it makes sense: if there
are redundant databases, the database is easily rebuilt from other
sources, or the data is just not that important.

I'm afraid that to get more detailed advice, you would need to
provide more details about the problem.

http://wiki.postgresql.org/wiki/SlowQueryQuestions

-Kevin