concurrent connections is worse than serialization?

From: Wei Weng
Subject: concurrent connections is worse than serialization?
Date:
Msg-id: 1029271170.16828.7.camel@Monet
Responses: Re: concurrent connections is worse than serialization?  (Richard Huxton <dev@archonet.com>)
List: pgsql-sql
I have a testing program that uses 30 concurrent connections
(max_connections = 32 in my postgresql.conf), and each does 100
insertions into a simple table with an index.
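For context, a minimal sketch of what each of the 30 connections does, written against libpq in C. The table definition, column names, and connection string are illustrative assumptions; the actual test table and script are not shown here.

/* Sketch of one worker: 100 INSERTs over its own connection.
 * Assumes something like:
 *   CREATE TABLE test_table (id serial PRIMARY KEY, val text);
 *   CREATE INDEX test_table_val_idx ON test_table (val);
 * (the real schema is not given in the post) */
#include <stdio.h>
#include <libpq-fe.h>

static void run_inserts(const char *conninfo)
{
    PGconn *conn = PQconnectdb(conninfo);
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return;
    }

    for (int i = 0; i < 100; i++) {
        PGresult *res = PQexec(conn,
            "INSERT INTO test_table (val) VALUES ('x')");
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "insert failed: %s", PQerrorMessage(conn));
        PQclear(res);
    }

    PQfinish(conn);
}

Note that with no explicit BEGIN/COMMIT, each INSERT in this sketch runs as its own transaction.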

It took me approximately 2 minutes to finish all of them.

But in the same environment (after a "DELETE FROM test_table" and
"VACUUM ANALYZE"), I then queued up all those 30 connections one after
another (serialized), and it took only 30 seconds to finish.

Why is it that the performance of concurrent connections is worse than
serializing them into one?

I was testing this with our own (proprietary) scripting engine. Its
PostgreSQL extension library serializes the queries by simply locking
when a query manipulates a PGconn object and unlocking when it is done.
(Similarly, it creates a PGconn object on the stack for each concurrent
query.)
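Roughly, the locking described above might look like the following sketch. The function name, the pthread mutex, and the choice of a single global lock are assumptions for illustration; the proprietary engine's actual code is not shown.

/* Hedged sketch of the described serialization: hold a lock while a
 * query manipulates a PGconn, release it when the query is done.
 * A single global mutex is assumed here; the real library may lock
 * per-connection instead. */
#include <pthread.h>
#include <libpq-fe.h>

static pthread_mutex_t pg_lock = PTHREAD_MUTEX_INITIALIZER;

static PGresult *locked_exec(PGconn *conn, const char *sql)
{
    pthread_mutex_lock(&pg_lock);       /* lock before touching the PGconn */
    PGresult *res = PQexec(conn, sql);  /* query executes under the lock   */
    pthread_mutex_unlock(&pg_lock);     /* unlock when the query is done   */
    return res;
}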

Thanks

-- 
Wei Weng
Network Software Engineer
KenCast Inc.



