Thread: Connections per second?


Connections per second?

From
Alejandro Fernandez
Date:
Hi,

I'm writing a small but must-be-fast CGI program that, for each hit it gets, reads an indexed table in a Postgres database and writes a log to a file based on the result. Any idea how many hits a second I can get to before things start crashing, queuing up too much, etc.? And will Postgres be one of the first to fall? Do any of you think it can handle 2000 hits a second (what I think I could get at peak times), and what would it need to do so? Persistent connections? Are there any examples or old threads on writing a similar program in C with libpq?

Thanks,

Ale

--
Alejandro Fernandez
Electronic Group Interactive
--+34-65-232-8086--

Re: Connections per second?

From
Doug McNaught
Date:
Alejandro Fernandez <ale@e-group.org> writes:

> Hi,
>
> I'm writing a small but must-be-fast cgi program that for each hit
> it gets, it reads an indexed table in a postgres database and writes
> a log to a file based on the result. Any idea how many hits a second
> I can get to before things start crashing, or queuing up too much,
> etc? And will postgres be one of the first to fall? Do any of you
> think it can handle 2000 hits a second (what I think I could get at
> peak times) - and what would it need to do so? Persistent
> connections? Are there any examples or old threads on writing a
> similar program in C with libpq?

Doing it as CGI is going to have two big performance penalties:

1) Kernel and system overhead for starting a new process per hit,
   plus interpreter startup if you're using a scripting language
2) Overhead in Postgres for creating a database connection from scratch

Doing it in C will only eliminate the interpreter startup.

You really want a non-CGI solution (such as mod_perl) and you really
want persistent connections (Apache::DBI is one solution that works
with mod_perl).  Java servlets with a connection pooling library would
also work.

-Doug

Re: Connections per second?

From
Oleg Bartunov
Date:
Try

http://www.sai.msu.su/~megera/postgres/pg-bench.pl
(change the dbname first).

Here is data for my notebook (IBM ThinkPad T21, 256 MB RAM, Postgresql 7.2.1)

Testing empty loop speed ...
100000 iterations in 0.1 cpu+sys seconds (833333 per sec)

Testing connect/disconnect speed ...
2000 connections in 2.6 cpu+sys seconds (754 per sec)

Testing CREATE/DROP TABLE speed ...
1000 files in 0.7 cpu+sys seconds (1369 per sec)

Testing INSERT speed ...
500 rows in 0.2 cpu+sys seconds (2272 per sec)

Testing UPDATE speed ...
500 rows in 0.2 cpu+sys seconds (2272 per sec)

Testing SELECT speed ...
100 single rows in 0.1 cpu+sys seconds (1428.6 per sec)

Testing SELECT speed (multiple rows) ...
100 times 100 rows in 0.1 cpu+sys seconds (714.3 per sec)

I'd recommend using persistent connections for real-life web applications.


    Oleg
On Tue, 23 Apr 2002, Alejandro Fernandez wrote:

> Hi,
>
> I'm writing a small but must-be-fast CGI program that, for each hit it gets, reads an indexed table in a Postgres database and writes a log to a file based on the result. Any idea how many hits a second I can get to before things start crashing, queuing up too much, etc.? And will Postgres be one of the first to fall? Do any of you think it can handle 2000 hits a second (what I think I could get at peak times), and what would it need to do so? Persistent connections? Are there any examples or old threads on writing a similar program in C with libpq?
>
> Thanks,
>
> Ale
>
>

    Regards,
        Oleg
_____________________________________________________________
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83


Re: Connections per second?

From
"ARP"
Date:
I think it's simply impossible to have a persistent connection with CGI, since the program is started and exited for each HTTP request (or am I wrong?).
The only way to do that is either to develop an Apache module (sounds like reinventing the wheel to me), or to use mod_perl or mod_php and the simple "ready to use" interfaces they provide.
In fact, whether running it as a CGI introduces significant overhead depends on how heavy your "must be fast" program is relative to its execution time. The longer the execution time, the less the per-hit CGI overhead will hurt performance.
That's my point of view, hope it helps
Arnaud


----- Original Message -----
From: "Alejandro Fernandez" <ale@e-group.org>
To: <pgsql-general@postgresql.org>
Sent: Tuesday, April 23, 2002 5:12 PM
Subject: [GENERAL] Connections per second?


Hi,

I'm writing a small but must-be-fast CGI program that, for each hit it gets, reads an indexed table in a Postgres database and writes a log to a file based on the result. Any idea how many hits a second I can get to before things start crashing, queuing up too much, etc.? And will Postgres be one of the first to fall? Do any of you think it can handle 2000 hits a second (what I think I could get at peak times), and what would it need to do so? Persistent connections? Are there any examples or old threads on writing a similar program in C with libpq?

Thanks,

Ale

--
Alejandro Fernandez
Electronic Group Interactive
--+34-65-232-8086--

---------------------------(end of broadcast)---------------------------
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to majordomo@postgresql.org so that your
message can get through to the mailing list cleanly