Re: Parallel queries for a web-application |performance testing

From: Matthew Wakeling
Subject: Re: Parallel queries for a web-application |performance testing
Date: ,
Msg-id: alpine.DEB.2.00.1006171032120.2534@aragorn.flymine.org
(view: discussion, raw text)
In response to: Parallel queries for a web-application |performance testing  (Balkrishna Sharma)
Responses: Re: Parallel queries for a web-application |performance testing  ("Pierre C")
List: pgsql-performance


Parallel queries for a web-application |performance testing  (Balkrishna Sharma, )
 Re: Parallel queries for a web-application |performance testing  ("Kevin Grittner", )
 Re: Parallel queries for a web-application |performance testing  (Dimitri Fontaine, )
 Re: Parallel queries for a web-application |performance testing  (Matthew Wakeling, )
  Re: Parallel queries for a web-application |performance testing  ("Pierre C", )
   Re: Parallel queries for a web-application |performance testing  (Dimitri Fontaine, )

On Wed, 16 Jun 2010, Balkrishna Sharma wrote:
> Hello, I will have a web application having postgres 8.4+ as backend. At
> any given time, there will be max of 1000 parallel web-users interacting
> with the database (read/write). I wish to do performance testing of 1000
> simultaneous read/write to the database.

When you set up a server that has high throughput requirements, the last
thing you want to do is use it in a manner that cripples its throughput.
Don't try to have 1000 parallel Postgres backends - it will process those
queries slower than the optimal setup. You should aim for approximately
((2 * CPU core count) + effective spindle count) backends, as that is the
point at which throughput is greatest. You can use pgbouncer to achieve
this.
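The sizing formula above can be sketched in shell (a rough helper; the spindle count is an assumption you would set for your own storage):

```shell
# Rough pool-size helper for the formula above: (2 * cores) + spindles.
CORES=$(nproc)    # CPU core count on this machine
SPINDLES=4        # effective spindle count -- assumption, adjust for your storage
POOL_SIZE=$(( 2 * CORES + SPINDLES ))
echo "suggested pool size: $POOL_SIZE"
```

The resulting number is what you would set as the pool size in pgbouncer, rather than the number of web clients.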

> I can do a simple unix script on the postgres server and have parallel
> updates fired for example with an ampersand at the end. Example:

> echo '\timing \\update "DAPP".emp_data set f1 = 123 where emp_id =0;' | "psql" test1 postgres | grep "Time:" | cut -d' ' -f2- >> "/home/user/Documents/temp/logs/$NUM.txt" &
> pid1=$!
> echo '\timing \\update "DAPP".emp_data set f1 = 123 where emp_id =2;' | "psql" test1 postgres | grep "Time:" | cut -d' ' -f2- >> "/home/user/Documents/temp/logs/$NUM.txt" &
> pid2=$!
> echo '\timing \\update "DAPP".emp_data set f1 = 123 where emp_id =4;' | "psql" test1 postgres | grep "Time:" | cut -d' ' -f2- >> "/home/user/Documents/temp/logs/$NUM.txt" &
> pid3=$!
> .............

Don't do that. The overhead of starting up an echo, a psql, and a grep
will limit the rate at which these queries can be fired at Postgres, and
consume quite a lot of CPU. Use a proper benchmarking tool, possibly on a
different server.
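One such tool is pgbench, which ships with Postgres. A minimal sketch (the script filename, role name, and random-id range are assumptions; the `\set ... random(...)` line uses modern pgbench syntax):

```shell
# Put one parameterised UPDATE in a pgbench script file instead of
# spawning an echo + psql + grep pipeline per statement.
cat > update_emp.sql <<'EOF'
\set id random(0, 100000)
UPDATE "DAPP".emp_data SET f1 = 123 WHERE emp_id = :id;
EOF

# Then drive it with a bounded number of connections, e.g. 16 clients
# for 60 seconds, reporting per-statement latency (needs a running server):
#   pgbench -n -c 16 -j 4 -T 60 -r -f update_emp.sql -U bench_user test1
```

pgbench opens its client connections once and reuses them, so the measured time is query time rather than process start-up time.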

Also, you should be using a different username to "postgres" - that one is
kind of reserved for superuser operations.

Matthew

--
 People who love sausages, respect the law, and work with IT standards
 shouldn't watch any of them being made.  -- Peter Gutmann


In the pgsql-performance list, by message date:

From: "Kaufhold, Christian (LFD)"
Date:
Message: Re: Query slow after analyse on postgresql 8.2
From: Josh Berkus
Date:
Message: Re: PostgreSQL as a local in-memory cache